From 0dbcc1a5b2c03527996c8ef8b418ed1cc004d7e2 Mon Sep 17 00:00:00 2001 From: Agustin Bettati Date: Thu, 5 Sep 2024 17:33:43 +0200 Subject: [PATCH 01/16] adding changelog entry for 1.18.1 to avoid confusion (#2561) --- CHANGELOG.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ef0091d313..7df0ad1466 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,6 +12,10 @@ FEATURES: ## 1.18.1 (August 26, 2024) +NOTES: + +* resource/mongodbatlas_advanced_cluster: Documentation adjustment in resource and migration guide to clarify potential `Internal Server Error` when applying updates with new sharding configuration ([#2525](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2525)) + ## 1.18.0 (August 14, 2024) BREAKING CHANGES: From 2a4e4acd720c68fd58a9e4c2566905d24f750c4b Mon Sep 17 00:00:00 2001 From: Oriol Date: Mon, 9 Sep 2024 09:14:28 +0200 Subject: [PATCH 02/16] feat: Support `replica_set_scaling_strategy` in `mongodbatlas_advanced_cluster` (#2539) * wip: initial changes * don't change cluster resource * changelog * docs * test * specific test * test and implement in old schema * separate tests for old and new schema * fix update --- .changelog/2539.txt | 11 ++ docs/data-sources/advanced_cluster.md | 1 + docs/data-sources/advanced_clusters.md | 1 + docs/resources/advanced_cluster.md | 1 + .../data_source_advanced_cluster.go | 7 + .../data_source_advanced_clusters.go | 5 + .../resource_advanced_cluster.go | 36 ++++ .../resource_advanced_cluster_test.go | 163 ++++++++++++++++++ 8 files changed, 225 insertions(+) create mode 100644 .changelog/2539.txt diff --git a/.changelog/2539.txt b/.changelog/2539.txt new file mode 100644 index 0000000000..6c788d9038 --- /dev/null +++ b/.changelog/2539.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute +``` \ No newline at end of file diff --git a/docs/data-sources/advanced_cluster.md b/docs/data-sources/advanced_cluster.md index 9044c4c884..b268bdc876 100644 --- a/docs/data-sources/advanced_cluster.md +++ b/docs/data-sources/advanced_cluster.md @@ -103,6 +103,7 @@ In addition to all arguments above, the following attributes are exported: * `version_release_system` - Release cadence that Atlas uses for this cluster. * `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details. * `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true). +* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. ### bi_connector_config diff --git a/docs/data-sources/advanced_clusters.md b/docs/data-sources/advanced_clusters.md index ee67dd01fd..fdec83bf58 100644 --- a/docs/data-sources/advanced_clusters.md +++ b/docs/data-sources/advanced_clusters.md @@ -105,6 +105,7 @@ In addition to all arguments above, the following attributes are exported: * `version_release_system` - Release cadence that Atlas uses for this cluster. * `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details. 
* `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true). +* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. ### bi_connector_config diff --git a/docs/resources/advanced_cluster.md b/docs/resources/advanced_cluster.md index 2d2d534ec8..0ee28db680 100644 --- a/docs/resources/advanced_cluster.md +++ b/docs/resources/advanced_cluster.md @@ -397,6 +397,7 @@ This parameter defaults to false. * `timeouts`- (Optional) The duration of time to wait for Cluster to be created, updated, or deleted. The timeout value is defined by a signed sequence of decimal numbers with an time unit suffix such as: `1h45m`, `300s`, `10m`, .... The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Advanced Cluster create & delete is `3h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts). * `accept_data_risks_and_force_replica_set_reconfig` - (Optional) If reconfiguration is necessary to regain a primary due to a regional outage, submit this field alongside your topology reconfiguration to request a new regional outage resistant topology. Forced reconfigurations during an outage of the majority of electable nodes carry a risk of data loss if replicated writes (even majority committed writes) have not been replicated to the new primary node. MongoDB Atlas docs contain more information. To proceed with an operation which carries that risk, set `accept_data_risks_and_force_replica_set_reconfig` to the current date. Learn more about Reconfiguring a Replica Set during a regional outage [here](https://dochub.mongodb.org/core/regional-outage-reconfigure-replica-set). * `global_cluster_self_managed_sharding` - (Optional) Flag that indicates if cluster uses Atlas-Managed Sharding (false, default) or Self-Managed Sharding (true). It can only be enabled for Global Clusters (`GEOSHARDED`). It cannot be changed once the cluster is created. Use this mode if you're an advanced user and the default configuration is too restrictive for your workload. If you select this option, you must manually configure the sharding strategy, more info [here](https://www.mongodb.com/docs/atlas/tutorial/create-global-cluster/#select-your-sharding-configuration). +* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. Valid values are `WORKLOAD_TYPE`, `SEQUENTIAL` and `NODE_TYPE`. By default, Atlas scales under `WORKLOAD_TYPE`. This mode allows Atlas to scale your analytics nodes in parallel to your operational nodes. When configured as `SEQUENTIAL`, Atlas scales all nodes sequentially. This mode is intended for steady-state workloads and applications performing latency-sensitive secondary reads. When configured as `NODE_TYPE`, Atlas scales your electable nodes in parallel with your read-only and analytics nodes. This mode is intended for large, dynamic workloads requiring frequent and timely cluster tier scaling. This is the fastest scaling strategy, but it might impact latency of workloads when performing extensive secondary reads. 
[Modify the Replica Set Scaling Mode](https://dochub.mongodb.org/core/scale-nodes) ### bi_connector_config diff --git a/internal/service/advancedcluster/data_source_advanced_cluster.go b/internal/service/advancedcluster/data_source_advanced_cluster.go index c81b5e3ceb..9dcf142be1 100644 --- a/internal/service/advancedcluster/data_source_advanced_cluster.go +++ b/internal/service/advancedcluster/data_source_advanced_cluster.go @@ -246,6 +246,10 @@ func DataSource() *schema.Resource { Type: schema.TypeBool, Computed: true, }, + "replica_set_scaling_strategy": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -313,6 +317,9 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. if err := d.Set("disk_size_gb", GetDiskSizeGBFromReplicationSpec(clusterDescLatest)); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } + if err := d.Set("replica_set_scaling_strategy", clusterDescLatest.GetReplicaSetScalingStrategy()); err != nil { + return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err)) + } zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530) if err != nil { diff --git a/internal/service/advancedcluster/data_source_advanced_clusters.go b/internal/service/advancedcluster/data_source_advanced_clusters.go index b9e7b1f877..0c7563a6e7 100644 --- a/internal/service/advancedcluster/data_source_advanced_clusters.go +++ b/internal/service/advancedcluster/data_source_advanced_clusters.go @@ -259,6 +259,10 @@ func PluralDataSource() *schema.Resource { Type: schema.TypeBool, Computed: true, }, + "replica_set_scaling_strategy": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -353,6 +357,7 @@ func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530. "termination_protection_enabled": cluster.GetTerminationProtectionEnabled(), "version_release_system": cluster.GetVersionReleaseSystem(), "global_cluster_self_managed_sharding": cluster.GetGlobalClusterSelfManagedSharding(), + "replica_set_scaling_strategy": cluster.GetReplicaSetScalingStrategy(), } results = append(results, result) } diff --git a/internal/service/advancedcluster/resource_advanced_cluster.go b/internal/service/advancedcluster/resource_advanced_cluster.go index 31d71545ad..35afb32014 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster.go +++ b/internal/service/advancedcluster/resource_advanced_cluster.go @@ -336,6 +336,11 @@ func Resource() *schema.Resource { Optional: true, Computed: true, }, + "replica_set_scaling_strategy": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, }, Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(3 * time.Hour), @@ -442,6 +447,9 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. if v, ok := d.GetOk("global_cluster_self_managed_sharding"); ok { params.GlobalClusterSelfManagedSharding = conversion.Pointer(v.(bool)) } + if v, ok := d.GetOk("replica_set_scaling_strategy"); ok { + params.ReplicaSetScalingStrategy = conversion.StringPtr(v.(string)) + } // Validate oplog_size_mb to show the error before the cluster is created. 
if oplogSizeMB, ok := d.GetOkExists("advanced_configuration.0.oplog_size_mb"); ok { @@ -527,6 +535,17 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di if err := d.Set("disk_size_gb", clusterOldSDK.GetDiskSizeGB()); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } + cluster, resp, err := connV2.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() + if err != nil { + if resp != nil && resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + return diag.FromErr(fmt.Errorf(errorRead, clusterName, err)) + } + if err := d.Set("replica_set_scaling_strategy", cluster.GetReplicaSetScalingStrategy()); err != nil { + return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err)) + } zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(ctx, projectID, clusterName, connV2) if err != nil { @@ -553,6 +572,9 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di if err := d.Set("disk_size_gb", GetDiskSizeGBFromReplicationSpec(cluster)); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } + if err := d.Set("replica_set_scaling_strategy", cluster.GetReplicaSetScalingStrategy()); err != nil { + return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err)) + } zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530) if err != nil { @@ -779,6 +801,16 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. if err := waitForUpdateToFinish(ctx, connV2, projectID, clusterName, timeout); err != nil { return diag.FromErr(fmt.Errorf(errorUpdate, clusterName, err)) } + } else if d.HasChange("replica_set_scaling_strategy") { + request := &admin.ClusterDescription20240805{ + ReplicaSetScalingStrategy: conversion.Pointer(d.Get("replica_set_scaling_strategy").(string)), + } + if _, _, err := connV2.ClustersApi.UpdateCluster(ctx, projectID, clusterName, request).Execute(); err != nil { + return diag.FromErr(fmt.Errorf(errorUpdate, clusterName, err)) + } + if err := waitForUpdateToFinish(ctx, connV2, projectID, clusterName, timeout); err != nil { + return diag.FromErr(fmt.Errorf(errorUpdate, clusterName, err)) + } } } else { req, diags := updateRequest(ctx, d, projectID, clusterName, connV2) @@ -912,6 +944,10 @@ func updateRequest(ctx context.Context, d *schema.ResourceData, projectID, clust if d.HasChange("paused") && !d.Get("paused").(bool) { cluster.Paused = conversion.Pointer(d.Get("paused").(bool)) } + + if d.HasChange("replica_set_scaling_strategy") { + cluster.ReplicaSetScalingStrategy = conversion.Pointer(d.Get("replica_set_scaling_strategy").(string)) + } return cluster, nil } diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedcluster/resource_advanced_cluster_test.go index beba9619ae..57b6846f59 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster_test.go +++ b/internal/service/advancedcluster/resource_advanced_cluster_test.go @@ -716,6 +716,62 @@ func TestAccClusterAdvancedClusterConfig_geoShardedTransitionFromOldToNewSchema( }) } +func TestAccAdvancedCluster_replicaSetScalingStrategy(t *testing.T) { + var ( + orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") + projectName = acc.RandomProjectName() + clusterName = acc.RandomClusterName() + ) + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: acc.CheckDestroyCluster, + Steps: []resource.TestStep{ + { + Config: configReplicaSetScalingStrategy(orgID, projectName, clusterName, "WORKLOAD_TYPE"), + Check: checkReplicaSetScalingStrategy("WORKLOAD_TYPE"), + }, + { + Config: configReplicaSetScalingStrategy(orgID, projectName, clusterName, "SEQUENTIAL"), + Check: checkReplicaSetScalingStrategy("SEQUENTIAL"), + }, + { + Config: configReplicaSetScalingStrategy(orgID, projectName, clusterName, "NODE_TYPE"), + Check: checkReplicaSetScalingStrategy("NODE_TYPE"), + }, + }, + }) +} + +func TestAccAdvancedCluster_replicaSetScalingStrategyOldSchema(t *testing.T) { + var ( + orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") + projectName = acc.RandomProjectName() + clusterName = acc.RandomClusterName() + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: acc.CheckDestroyCluster, + Steps: []resource.TestStep{ + { + Config: configReplicaSetScalingStrategyOldSchema(orgID, projectName, clusterName, "WORKLOAD_TYPE"), + Check: checkReplicaSetScalingStrategy("WORKLOAD_TYPE"), + }, + { + Config: configReplicaSetScalingStrategyOldSchema(orgID, projectName, clusterName, "SEQUENTIAL"), + Check: checkReplicaSetScalingStrategy("SEQUENTIAL"), + }, + { + Config: configReplicaSetScalingStrategyOldSchema(orgID, projectName, clusterName, "NODE_TYPE"), + Check: checkReplicaSetScalingStrategy("NODE_TYPE"), + }, + }, + }) +} + func checkAggr(attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc { checks := []resource.TestCheckFunc{checkExists(resourceName)} checks = acc.AddAttrChecks(resourceName, checks, attrsMap) @@ -1917,3 +1973,110 @@ func checkGeoShardedTransitionOldToNewSchema(useNewSchema bool) resource.TestChe }, ) } + +func configReplicaSetScalingStrategy(orgID, projectName, name, replicaSetScalingStrategy string) string { + return fmt.Sprintf(` + resource "mongodbatlas_project" "cluster_project" { + org_id = %[1]q + name = %[2]q + } + + resource "mongodbatlas_advanced_cluster" "test" { + project_id = mongodbatlas_project.cluster_project.id + name = %[3]q + backup_enabled = false + cluster_type = "SHARDED" + replica_set_scaling_strategy = %[4]q + + replication_specs { + region_configs { + electable_specs { + instance_size ="M10" + node_count = 3 + disk_size_gb = 10 + } + analytics_specs { + instance_size = "M10" + node_count = 1 + disk_size_gb = 10 + } + provider_name = "AWS" + priority = 7 + region_name = "EU_WEST_1" + } + } + } + + data "mongodbatlas_advanced_cluster" "test" { + project_id = mongodbatlas_advanced_cluster.test.project_id + name = mongodbatlas_advanced_cluster.test.name + use_replication_spec_per_shard = true + } + + data "mongodbatlas_advanced_clusters" "test" { + project_id = mongodbatlas_advanced_cluster.test.project_id + use_replication_spec_per_shard = true + } + `, orgID, projectName, name, replicaSetScalingStrategy) +} + +func configReplicaSetScalingStrategyOldSchema(orgID, projectName, name, replicaSetScalingStrategy string) string { + return fmt.Sprintf(` + resource "mongodbatlas_project" "cluster_project" { + org_id = %[1]q + name = %[2]q + } + + resource "mongodbatlas_advanced_cluster" "test" { + project_id = mongodbatlas_project.cluster_project.id + name = %[3]q + backup_enabled = false + cluster_type = 
"SHARDED" + replica_set_scaling_strategy = %[4]q + + replication_specs { + num_shards = 2 + region_configs { + electable_specs { + instance_size ="M10" + node_count = 3 + disk_size_gb = 10 + } + analytics_specs { + instance_size = "M10" + node_count = 1 + disk_size_gb = 10 + } + provider_name = "AWS" + priority = 7 + region_name = "EU_WEST_1" + } + } + } + + data "mongodbatlas_advanced_cluster" "test" { + project_id = mongodbatlas_advanced_cluster.test.project_id + name = mongodbatlas_advanced_cluster.test.name + use_replication_spec_per_shard = true + } + + data "mongodbatlas_advanced_clusters" "test" { + project_id = mongodbatlas_advanced_cluster.test.project_id + use_replication_spec_per_shard = true + } + `, orgID, projectName, name, replicaSetScalingStrategy) +} + +func checkReplicaSetScalingStrategy(replicaSetScalingStrategy string) resource.TestCheckFunc { + clusterChecks := map[string]string{ + "replica_set_scaling_strategy": replicaSetScalingStrategy} + + // plural data source checks + additionalChecks := acc.AddAttrSetChecks(dataSourcePluralName, nil, + []string{"results.#", "results.0.replica_set_scaling_strategy"}...) + return checkAggr( + []string{}, + clusterChecks, + additionalChecks..., + ) +} From c7d9d8a8d33099c68ef606eb644c875db5e6dfbf Mon Sep 17 00:00:00 2001 From: svc-apix-bot Date: Mon, 9 Sep 2024 07:16:32 +0000 Subject: [PATCH 03/16] chore: Updates CHANGELOG.md for #2539 --- CHANGELOG.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7df0ad1466..95a011e88d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,12 @@ FEATURES: * **New Data Source:** `data-source/mongodbatlas_project_ip_addresses` ([#2533](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2533)) +ENHANCEMENTS: + +* data-source/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +* data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +* resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) + ## 1.18.1 (August 26, 2024) NOTES: From cac7b872a9b718c7b2badfe0763bc3009b7cad56 Mon Sep 17 00:00:00 2001 From: Oriol Date: Mon, 9 Sep 2024 18:12:53 +0200 Subject: [PATCH 04/16] fix: Sets correct `zone_id` when `use_replication_spec_per_shard` is false and refactors `replica_set_scaling_strategy` handling with old schema of advanced cluster (#2568) * set replica_set_scaling_strategy when using old schema * pass cluster directly instead of calling API twice * fix bug where wrong map was passed to FlattenAdvancedReplicationSpecsOldSDK * bug fix changelog entry * Update .changelog/2568.txt Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --- .changelog/2568.txt | 3 +++ .../advancedcluster/data_source_advanced_cluster.go | 9 ++++++++- .../advancedcluster/data_source_advanced_clusters.go | 9 +++++++-- .../service/advancedcluster/resource_advanced_cluster.go | 8 ++------ .../advancedcluster/resource_advanced_cluster_test.go | 2 -- 5 files changed, 20 insertions(+), 11 deletions(-) create mode 100644 .changelog/2568.txt diff --git a/.changelog/2568.txt b/.changelog/2568.txt new file mode 100644 index 
0000000000..91888a23e3 --- /dev/null +++ b/.changelog/2568.txt @@ -0,0 +1,3 @@ +```release-note:bug +data-source/mongodbatlas_advanced_clusters: Sets correct `zone_id` when `use_replication_spec_per_shard` is false +``` diff --git a/internal/service/advancedcluster/data_source_advanced_cluster.go b/internal/service/advancedcluster/data_source_advanced_cluster.go index 9dcf142be1..47c7861e0b 100644 --- a/internal/service/advancedcluster/data_source_advanced_cluster.go +++ b/internal/service/advancedcluster/data_source_advanced_cluster.go @@ -287,8 +287,15 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. if err := d.Set("disk_size_gb", clusterDescOld.GetDiskSizeGB()); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } + clusterDescNew, _, err := connV2.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() + if err != nil { + return diag.FromErr(fmt.Errorf(errorRead, clusterName, err)) + } + if err := d.Set("replica_set_scaling_strategy", clusterDescNew.GetReplicaSetScalingStrategy()); err != nil { + return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err)) + } - zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(ctx, projectID, clusterName, connV2) + zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(clusterDescNew) if err != nil { return diag.FromErr(err) } diff --git a/internal/service/advancedcluster/data_source_advanced_clusters.go b/internal/service/advancedcluster/data_source_advanced_clusters.go index 0c7563a6e7..99f67a3743 100644 --- a/internal/service/advancedcluster/data_source_advanced_clusters.go +++ b/internal/service/advancedcluster/data_source_advanced_clusters.go @@ -373,12 +373,16 @@ func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin2024 log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) } - zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, cluster.GetGroupId(), cluster.GetName(), connV20240530) + clusterDescNew, _, err := connV2.ClustersApi.GetCluster(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + if err != nil { + return nil, diag.FromErr(err) + } + zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(clusterDescNew) if err != nil { return nil, diag.FromErr(err) } - replicationSpecs, err := FlattenAdvancedReplicationSpecsOldSDK(ctx, cluster.GetReplicationSpecs(), zoneNameToOldReplicationSpecIDs, cluster.GetDiskSizeGB(), nil, d, connV2) + replicationSpecs, err := FlattenAdvancedReplicationSpecsOldSDK(ctx, cluster.GetReplicationSpecs(), zoneNameToZoneIDs, cluster.GetDiskSizeGB(), nil, d, connV2) if err != nil { log.Printf("[WARN] Error setting `replication_specs` for the cluster(%s): %s", cluster.GetId(), err) } @@ -405,6 +409,7 @@ func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin2024 "termination_protection_enabled": cluster.GetTerminationProtectionEnabled(), "version_release_system": cluster.GetVersionReleaseSystem(), "global_cluster_self_managed_sharding": cluster.GetGlobalClusterSelfManagedSharding(), + "replica_set_scaling_strategy": clusterDescNew.GetReplicaSetScalingStrategy(), } results = append(results, result) } diff --git a/internal/service/advancedcluster/resource_advanced_cluster.go b/internal/service/advancedcluster/resource_advanced_cluster.go index 35afb32014..4a27840365 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster.go +++ 
b/internal/service/advancedcluster/resource_advanced_cluster.go @@ -547,7 +547,7 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err)) } - zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(ctx, projectID, clusterName, connV2) + zoneNameToZoneIDs, err := getZoneIDsFromNewAPI(cluster) if err != nil { return diag.FromErr(err) } @@ -630,11 +630,7 @@ func getReplicationSpecIDsFromOldAPI(ctx context.Context, projectID, clusterName } // getZoneIDsFromNewAPI returns the zone id values of replication specs coming from new API. This is used to populate zone_id when old API is called in the read. -func getZoneIDsFromNewAPI(ctx context.Context, projectID, clusterName string, connV2 *admin.APIClient) (map[string]string, error) { - cluster, _, err := connV2.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() - if err != nil { - return nil, fmt.Errorf("error reading advanced cluster for fetching zone ids (%s): %s", clusterName, err) - } +func getZoneIDsFromNewAPI(cluster *admin.ClusterDescription20240805) (map[string]string, error) { specs := cluster.GetReplicationSpecs() result := make(map[string]string, len(specs)) for _, spec := range specs { diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedcluster/resource_advanced_cluster_test.go index 57b6846f59..b80a633c00 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster_test.go +++ b/internal/service/advancedcluster/resource_advanced_cluster_test.go @@ -2057,12 +2057,10 @@ func configReplicaSetScalingStrategyOldSchema(orgID, projectName, name, replicaS data "mongodbatlas_advanced_cluster" "test" { project_id = mongodbatlas_advanced_cluster.test.project_id name = mongodbatlas_advanced_cluster.test.name - use_replication_spec_per_shard = true } data "mongodbatlas_advanced_clusters" "test" { project_id = mongodbatlas_advanced_cluster.test.project_id - use_replication_spec_per_shard = true } `, orgID, projectName, name, replicaSetScalingStrategy) } From b840bb4ec02d4b3760333887c959f1e1cfba7654 Mon Sep 17 00:00:00 2001 From: Oriol Date: Mon, 9 Sep 2024 18:13:18 +0200 Subject: [PATCH 05/16] doc: Includes sync_creation into mongodbatlas_online_archive resource documentation (#2567) --- docs/resources/online_archive.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/resources/online_archive.md b/docs/resources/online_archive.md index 0a8f3a86d0..dd89b0c542 100644 --- a/docs/resources/online_archive.md +++ b/docs/resources/online_archive.md @@ -113,6 +113,7 @@ resource "mongodbatlas_online_archive" "test" { * `schedule` - Regular frequency and duration when archiving process occurs. See [schedule](#schedule). * `partition_fields` - (Recommended) Fields to use to partition data. You can specify up to two frequently queried fields (or up to three fields when one of them is `date_field`) to use for partitioning data. Queries that don’t contain the specified fields require a full collection scan of all archived documents, which takes longer and increases your costs. To learn more about how partition improves query performance, see [Data Structure in S3](https://docs.mongodb.com/datalake/admin/optimize-query-performance/#data-structure-in-s3). The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived. See [partition fields](#partition). 
* `paused` - (Optional) State of the online archive. This is required for pausing an active online archive or resuming a paused online archive. If the collection has another active online archive, the resume request fails. +* `sync_creation` - (Optional) Flag that indicates whether the provider will wait for the state of the online archive to reach `IDLE` or `ACTIVE` when creating an online archive. Defaults to `false`. ### Criteria From 664d77c0687e2751b4a3b8fa4e9ae2cfdab76487 Mon Sep 17 00:00:00 2001 From: svc-apix-bot Date: Mon, 9 Sep 2024 16:14:45 +0000 Subject: [PATCH 06/16] chore: Updates CHANGELOG.md for #2568 --- CHANGELOG.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 95a011e88d..c186fb5d7a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -16,6 +16,10 @@ ENHANCEMENTS: * data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) * resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +BUG FIXES: + +* data-source/mongodbatlas_advanced_clusters: Sets correct `zone_id` when `use_replication_spec_per_shard` is false ([#2568](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2568)) + ## 1.18.1 (August 26, 2024) NOTES: From 467250fa3bf444e4c2d53751edc72737c0e887ef Mon Sep 17 00:00:00 2001 From: maastha <122359335+maastha@users.noreply.github.com> Date: Mon, 9 Sep 2024 18:18:55 +0100 Subject: [PATCH 07/16] chore: Merges Azure KMS Encryption at Rest Private Endpoint feature to master (#2569) * update sdk dev (#2490) * chore: Creates TF models & interfaces for new `mongodbatlas_encryption_at_rest_private_endpoint` resource (#2493) * chore: Creates TF models & interfaces for new `mongodbatlas_encryption_at_rest_private_endpoint` data source (#2500) * feat: Updates `mongodbatlas_encryption_at_rest` resource to use new `azure_key_vault_config.require_private_networking` field (#2509) * chore: Creates TF models & interfaces for `mongodbatlas_encryption_at_rest_private_endpoints` plural data source (#2502) * feat: Implements `mongodbatlas_encryption_at_rest_private_endpoint` resource (#2512) * wip - implementing CRUD * include changelog entry * small adjustments * supporting state transition logic * implement acceptance test * add unit testing for state transitions * handle return error message if failed status is present * add acceptance test transitioning for public to private network * improve messaging for failed status * fix prechecks * use global const for resource name * avoid hardcoded value * adjust state transition logic for delete * adjusting target version in migration test to 1.19.0 * adjust default refresh to 30 seconds for quicker response * feat: Implements `mongodbatlas_encryption_at_rest_private_endpoint` singular data source (#2527) * implement singular data source * including changelog entry * doc: Updates existing documentation for `mongodbatlas_encryption_at_rest` resource to be auto-generated (#2529) * doc: Include example for new `mongodbatlas_encryption_at_rest_private_endpoint` resource (#2540) * Include example for ear with private endpoint * fix example * adjust readme * Update examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Update 
examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * add example cli command * make use of variables to make value of resource id more compact --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * feat: Implements new `mongodbatlas_encryption_at_rest_private_endpoints` data source (#2536) * temporary change to cloud provider access and getting latest sdk * implements plural data source * adapted cloud provider access with latest changes from dev preview * fix unit test * adding changelog entry * add changes to verify plural data source in basic test case * doc adjust to cloud_provider attribute * feat: Implements new `mongodbatlas_encryption_at_rest` singular data source & adds `valid` attribute for cloud provider configs in the resource (#2538) * fix: Adds error message handling to `mongodbatlas_encryption_at_rest_private_endpoint` resource (#2544) * doc: Adds documentation for new `encryption_at_rest_private_endpoint` resource and data sources (#2547) * adding documentation for encryption_at_rest_private_endpoint resource and data sources * align generated docs * minor typo fix * Adjust description of project_id to make it more concise * align note stating feature is available by request as defined in general docs * chore: Adopt latest changes from master into ear private endpoint dev branch to adopt latest SDK (#2549) * test: Reduce instance size and use of provisioned disk iops for test that verifies transition for symmetric to asymmetric configuration (#2503) * doc: Include changelog entries to mention 2 new guides (#2506) * add entry for 2 new guides * add link * chore: Updates examples link in index.md for v1.18.0 release * chore: Updates CHANGELOG.md header for v1.18.0 release * doc: Update Atlas SP db_role_to_execute info. (#2508) * (DOCSP-41590) Updating Atlas SP db_role_to_execute info. * Update docs/resources/stream_connection.md Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> --------- Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> * doc: Contributing Guidelines Updates (#2494) * Contributing Guidelines Updates * Update README.md * Update README.md * Update contributing/README.md Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> --------- Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> * test: Simply migration test checks after 1.18.0 release and adjust version constraint in advanced_cluster examples uing new schema (#2510) * doc: Add references to the terraform modules in the resources documentations (#2513) * add references to the modules in the resources documentations * fix pr comments * chore: Bump hashicorp/setup-terraform from 3.1.1 to 3.1.2 (#2515) Bumps [hashicorp/setup-terraform](https://github.com/hashicorp/setup-terraform) from 3.1.1 to 3.1.2. - [Release notes](https://github.com/hashicorp/setup-terraform/releases) - [Changelog](https://github.com/hashicorp/setup-terraform/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/setup-terraform/compare/651471c36a6092792c552e8b1bef71e592b462d8...b9cd54a3c349d3f38e8881555d616ced269862dd) --- updated-dependencies: - dependency-name: hashicorp/setup-terraform dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Add mention of support ticket when opening a pull request (#2507) * Add mention of creating support ticket when opening PR * rephrasing to avoid mention of priority * including suggestion * doc: Updates`mongodbatlas_advanced_cluster` ISS migration guide & resource doc with expected 500 error on update (#2525) * chore: Updates mongodbatlas_advanced_cluster tests to expect temporary SERVICE_UNAVAILABLE error when migrating from old to new schema (#2523) * doc: Fixes wordings in the new advanced_cluster sharding guide. (#2524) * chore: Updates examples link in index.md for v1.18.1 release * chore: Updates CHANGELOG.md header for v1.18.1 release * chore: upgrades go SDK from `v20240805001` to `v20240805002` (#2534) * chore: Updates to Go 1.23 (#2535) * update asdf TF version * update to Go 1.23 * update linter * update golang-ci linter * disable Go telemetry * revert TF change * chore: Bump go.mongodb.org/atlas from 0.36.0 to 0.37.0 (#2532) Bumps [go.mongodb.org/atlas](https://github.com/mongodb/go-client-mongodb-atlas) from 0.36.0 to 0.37.0. - [Release notes](https://github.com/mongodb/go-client-mongodb-atlas/releases) - [Changelog](https://github.com/mongodb/go-client-mongodb-atlas/blob/master/CHANGELOG.md) - [Commits](https://github.com/mongodb/go-client-mongodb-atlas/compare/v0.36.0...v0.37.0) --- updated-dependencies: - dependency-name: go.mongodb.org/atlas dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/hcl/v2 from 2.21.0 to 2.22.0 (#2530) Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.21.0 to 2.22.0. - [Release notes](https://github.com/hashicorp/hcl/releases) - [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/hcl/compare/v2.21.0...v2.22.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/hcl/v2 dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * update asdf TF version to 1.9.5 (#2537) * chore: Changes deprecation message for labels attribute (#2542) * chore: Upgrades go SDK from `v20240805002` to `v20240805003` (#2545) * major version update calling gomajor tool * manual change to reincorporate v20240530005 * reverts temp changes in cloud provider resources, fixes sdk versions in new implementations --------- Signed-off-by: dependabot[bot] Co-authored-by: svc-apix-bot Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: Zuhair Ahmed Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> Co-authored-by: rubenVB01 <95967197+rubenVB01@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Espen Albert Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: Oriol * doc: Adds documentation & examples for `mongodbatlas_encryption_at_rest` singular data source (#2543) * chore: Enables `mongodbatlas_encryption_at_rest` (Azure) tests to run in CI (#2551) * chore: Adds `mongodbatlas_encryption_at_rest_private_endpoint` acceptance test using azapi to approve private endpoint & check ACTIVE status (#2558) * doc: Add user journey considerations in current resource and example documentation (#2559) * minor typo fix * improve initial description in ear * adjust ear docs with mention of azure private link * private link doc adjustments * improve example * improve example * add mention in ear examples about policies * add note on update operation * link adjustments and add header for handling existing clusters * add note on private endpoint * add note in data sources * Update docs/resources/encryption_at_rest_private_endpoint.md Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * add clarification of preview flag for data sources --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * update project_ip_addresses action * address doc comment --------- Signed-off-by: dependabot[bot] Co-authored-by: Agustin Bettati Co-authored-by: svc-apix-bot Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: Zuhair Ahmed Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> Co-authored-by: rubenVB01 <95967197+rubenVB01@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Espen Albert Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: Oriol --- .changelog/2509.txt | 3 + .changelog/2512.txt | 3 + .changelog/2527.txt | 3 + .changelog/2536.txt | 3 + .changelog/2538.txt | 7 + .github/workflows/acceptance-tests-runner.yml | 40 +- .github/workflows/acceptance-tests.yml | 9 + .github/workflows/code-health.yml | 12 +- GNUmakefile | 6 +- contributing/development-setup.md | 5 +- contributing/documentation.md | 2 +- docs/data-sources/encryption_at_rest.md | 190 +++++++ .../encryption_at_rest_private_endpoint.md | 42 ++ .../encryption_at_rest_private_endpoints.md | 50 ++ 
docs/resources/encryption_at_rest.md | 246 ++++++--- .../encryption_at_rest_private_endpoint.md | 94 ++++ .../aws/atlas-cluster/main.tf | 10 +- .../azure/README.md | 57 ++ .../azure/main.tf | 25 + .../azure/providers.tf | 5 + .../azure/variables.tf | 50 ++ .../azure/versions.tf | 9 + .../azure/README.md | 73 +++ .../azure/main.tf | 46 ++ .../azure/plural-data-source.tf | 8 + .../azure/providers.tf | 11 + .../azure/singular-data-source.tf | 9 + .../azure/variables.tf | 54 ++ .../azure/versions.tf | 14 + internal/common/conversion/type_conversion.go | 5 + .../common/conversion/type_conversion_test.go | 19 +- internal/common/dsschema/page_request.go | 31 ++ internal/common/retrystrategy/retry_state.go | 21 +- internal/provider/provider.go | 11 +- .../service/encryptionatrest/data_source.go | 47 ++ .../encryptionatrest/data_source_schema.go | 184 +++++++ internal/service/encryptionatrest/model.go | 151 ++++++ .../model_encryption_at_rest.go | 120 ----- ...cryption_at_rest_test.go => model_test.go} | 125 ++--- ...urce_encryption_at_rest.go => resource.go} | 186 +++++-- ...ion_test.go => resource_migration_test.go} | 127 ++--- ...ption_at_rest_test.go => resource_test.go} | 487 ++++++++---------- .../tfplugingen/generator_config.yml | 17 + .../data_source.go | 50 ++ .../data_source_schema.go | 56 ++ .../main_test.go | 15 + .../encryptionatrestprivateendpoint/model.go | 42 ++ .../model_test.go | 169 ++++++ .../plural_data_source.go | 61 +++ .../pural_data_source_schema.go | 39 ++ .../resource.go | 175 +++++++ .../resource_migration_test.go | 13 + .../resource_schema.go | 60 +++ .../resource_test.go | 401 ++++++++++++++ .../state_transition.go | 78 +++ .../state_transition_test.go | 135 +++++ .../tfplugingen/generator_config.yml | 34 ++ internal/testutil/acc/attribute_checks.go | 28 + internal/testutil/acc/encryption_at_rest.go | 114 ++++ internal/testutil/acc/pre_check.go | 33 +- internal/testutil/acc/provider.go | 27 + scripts/generate-doc.sh | 3 + .../data-sources/encryption_at_rest.md.tmpl | 57 ++ ...ncryption_at_rest_private_endpoint.md.tmpl | 18 + ...cryption_at_rest_private_endpoints.md.tmpl | 18 + .../resources/encryption_at_rest.md.tmpl | 77 +++ ...ncryption_at_rest_private_endpoint.md.tmpl | 33 ++ 67 files changed, 3665 insertions(+), 688 deletions(-) create mode 100644 .changelog/2509.txt create mode 100644 .changelog/2512.txt create mode 100644 .changelog/2527.txt create mode 100644 .changelog/2536.txt create mode 100644 .changelog/2538.txt create mode 100644 docs/data-sources/encryption_at_rest.md create mode 100644 docs/data-sources/encryption_at_rest_private_endpoint.md create mode 100644 docs/data-sources/encryption_at_rest_private_endpoints.md create mode 100644 docs/resources/encryption_at_rest_private_endpoint.md create mode 100644 examples/mongodbatlas_encryption_at_rest/azure/README.md create mode 100644 examples/mongodbatlas_encryption_at_rest/azure/main.tf create mode 100644 examples/mongodbatlas_encryption_at_rest/azure/providers.tf create mode 100644 examples/mongodbatlas_encryption_at_rest/azure/variables.tf create mode 100644 examples/mongodbatlas_encryption_at_rest/azure/versions.tf create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/main.tf create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/plural-data-source.tf create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/providers.tf create 
mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/singular-data-source.tf create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/variables.tf create mode 100644 examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/versions.tf create mode 100644 internal/common/dsschema/page_request.go create mode 100644 internal/service/encryptionatrest/data_source.go create mode 100644 internal/service/encryptionatrest/data_source_schema.go create mode 100644 internal/service/encryptionatrest/model.go delete mode 100644 internal/service/encryptionatrest/model_encryption_at_rest.go rename internal/service/encryptionatrest/{model_encryption_at_rest_test.go => model_test.go} (63%) rename internal/service/encryptionatrest/{resource_encryption_at_rest.go => resource.go} (59%) rename internal/service/encryptionatrest/{resource_encryption_at_rest_migration_test.go => resource_migration_test.go} (61%) rename internal/service/encryptionatrest/{resource_encryption_at_rest_test.go => resource_test.go} (52%) create mode 100644 internal/service/encryptionatrest/tfplugingen/generator_config.yml create mode 100644 internal/service/encryptionatrestprivateendpoint/data_source.go create mode 100644 internal/service/encryptionatrestprivateendpoint/data_source_schema.go create mode 100644 internal/service/encryptionatrestprivateendpoint/main_test.go create mode 100644 internal/service/encryptionatrestprivateendpoint/model.go create mode 100644 internal/service/encryptionatrestprivateendpoint/model_test.go create mode 100644 internal/service/encryptionatrestprivateendpoint/plural_data_source.go create mode 100644 internal/service/encryptionatrestprivateendpoint/pural_data_source_schema.go create mode 100644 internal/service/encryptionatrestprivateendpoint/resource.go create mode 100644 internal/service/encryptionatrestprivateendpoint/resource_migration_test.go create mode 100644 internal/service/encryptionatrestprivateendpoint/resource_schema.go create mode 100644 internal/service/encryptionatrestprivateendpoint/resource_test.go create mode 100644 internal/service/encryptionatrestprivateendpoint/state_transition.go create mode 100644 internal/service/encryptionatrestprivateendpoint/state_transition_test.go create mode 100644 internal/service/encryptionatrestprivateendpoint/tfplugingen/generator_config.yml create mode 100644 internal/testutil/acc/encryption_at_rest.go create mode 100644 templates/data-sources/encryption_at_rest.md.tmpl create mode 100644 templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl create mode 100644 templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl create mode 100644 templates/resources/encryption_at_rest.md.tmpl create mode 100644 templates/resources/encryption_at_rest_private_endpoint.md.tmpl diff --git a/.changelog/2509.txt b/.changelog/2509.txt new file mode 100644 index 0000000000..68aab37b6a --- /dev/null +++ b/.changelog/2509.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/mongodbatlas_encryption_at_rest: Adds new `azure_key_vault_config.#.require_private_networking` field to enable connection to Azure Key Vault over private networking +``` diff --git a/.changelog/2512.txt b/.changelog/2512.txt new file mode 100644 index 0000000000..8197f201b4 --- /dev/null +++ b/.changelog/2512.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +resource/mongodbatlas_encryption_at_rest_private_endpoint +``` diff --git a/.changelog/2527.txt b/.changelog/2527.txt new file mode 100644 index 
0000000000..c8acc8ea4e --- /dev/null +++ b/.changelog/2527.txt @@ -0,0 +1,3 @@ +```release-note:new-datasource +data-source/mongodbatlas_encryption_at_rest_private_endpoint +``` diff --git a/.changelog/2536.txt b/.changelog/2536.txt new file mode 100644 index 0000000000..c7f1c1323e --- /dev/null +++ b/.changelog/2536.txt @@ -0,0 +1,3 @@ +```release-note:new-datasource +data-source/mongodbatlas_encryption_at_rest_private_endpoints +``` diff --git a/.changelog/2538.txt b/.changelog/2538.txt new file mode 100644 index 0000000000..9127a008da --- /dev/null +++ b/.changelog/2538.txt @@ -0,0 +1,7 @@ +```release-note:new-datasource +data-source/mongodbatlas_encryption_at_rest +``` + +```release-note:enhancement +resource/mongodbatlas_encryption_at_rest: Adds `aws_kms_config.0.valid`, `azure_key_vault_config.0.valid` and `google_cloud_kms_config.0.valid` attribute +``` diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml index 14ec550d01..8a51636934 100644 --- a/.github/workflows/acceptance-tests-runner.yml +++ b/.github/workflows/acceptance-tests-runner.yml @@ -86,6 +86,15 @@ on: mongodb_atlas_federated_settings_associated_domain: type: string required: true + mongodb_atlas_project_ear_pe_id: + type: string + required: true + mongodb_atlas_enable_preview: + type: string + required: true + azure_private_endpoint_region: + type: string + required: true secrets: # all secrets are passed explicitly in this workflow mongodb_atlas_public_key: required: true @@ -135,6 +144,18 @@ on: required: true azure_vnet_name_updated: required: true + azure_client_id: + required: true + azure_key_vault_name: + required: true + azure_key_identifier: + required: true + azure_key_vault_name_updated: + required: true + azure_key_identifier_updated: + required: true + azure_app_secret: + required: true env: TF_ACC: 1 @@ -238,7 +259,8 @@ jobs: data_lake: - 'internal/service/datalakepipeline/*.go' encryption: - - 'internal/service/encryptionatrest/*.go' + - 'internal/service/encryptionatrest/*.go' + - 'internal/service/encryptionatrestprivateendpoint/*.go' event_trigger: - 'internal/service/eventtrigger/*.go' federated: @@ -515,7 +537,21 @@ jobs: - name: Acceptance Tests env: MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }} - ACCTEST_PACKAGES: ./internal/service/encryptionatrest + ACCTEST_PACKAGES: | + ./internal/service/encryptionatrest + ./internal/service/encryptionatrestprivateendpoint + MONGODB_ATLAS_PROJECT_EAR_PE_ID: ${{ inputs.mongodb_atlas_project_ear_pe_id }} + AZURE_PRIVATE_ENDPOINT_REGION: ${{ inputs.azure_private_endpoint_region }} + AZURE_CLIENT_ID: ${{ secrets.azure_client_id }} + AZURE_RESOURCE_GROUP_NAME: ${{ secrets.azure_resource_group_name }} + AZURE_SUBSCRIPTION_ID: ${{ secrets.azure_subscription_id }} + AZURE_TENANT_ID: ${{ vars.azure_tenant_id }} + AZURE_APP_SECRET: ${{ secrets.azure_app_secret }} + AZURE_KEY_VAULT_NAME: ${{ secrets.azure_key_vault_name }} + AZURE_KEY_IDENTIFIER: ${{ secrets.azure_key_identifier }} + AZURE_KEY_VAULT_NAME_UPDATED: ${{ secrets.azure_key_vault_name_updated }} + AZURE_KEY_IDENTIFIER_UPDATED: ${{ secrets.azure_key_identifier_updated }} + MONGODB_ATLAS_ENABLE_PREVIEW: ${{ inputs.mongodb_atlas_enable_preview }} run: make testacc event_trigger: diff --git a/.github/workflows/acceptance-tests.yml b/.github/workflows/acceptance-tests.yml index e02fd08675..e15be2a2c4 100644 --- a/.github/workflows/acceptance-tests.yml +++ b/.github/workflows/acceptance-tests.yml @@ -77,6 +77,12 @@ jobs: 
azure_subscription_id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} azure_vnet_name: ${{ secrets.AZURE_VNET_NAME }} azure_vnet_name_updated: ${{ secrets.AZURE_VNET_NAME_UPDATED }} + azure_client_id: ${{ secrets.AZURE_CLIENT_ID }} + azure_key_vault_name: ${{ secrets.AZURE_KEY_VAULT_NAME }} + azure_key_identifier: ${{ secrets.AZURE_KEY_IDENTIFIER }} + azure_key_vault_name_updated: ${{ secrets.AZURE_KEY_VAULT_NAME_UPDATED }} + azure_key_identifier_updated: ${{ secrets.AZURE_KEY_IDENTIFIER_UPDATED }} + azure_app_secret: ${{ secrets.AZURE_APP_SECRET }} with: terraform_version: ${{ inputs.terraform_version || vars.TF_VERSION_LATEST }} @@ -104,3 +110,6 @@ jobs: mongodb_atlas_gov_org_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_GOV_ORG_ID_QA || vars.MONGODB_ATLAS_GOV_ORG_ID_DEV }} mongodb_atlas_gov_project_owner_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_GOV_PROJECT_OWNER_ID_QA || vars.MONGODB_ATLAS_GOV_PROJECT_OWNER_ID_DEV }} mongodb_atlas_federated_settings_associated_domain: ${{ vars.MONGODB_ATLAS_FEDERATED_SETTINGS_ASSOCIATED_DOMAIN }} + mongodb_atlas_project_ear_pe_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_PROJECT_EAR_PE_ID_QA || vars.MONGODB_ATLAS_PROJECT_EAR_PE_ID_DEV }} + mongodb_atlas_enable_preview: ${{ vars.MONGODB_ATLAS_ENABLE_PREVIEW }} + azure_private_endpoint_region: ${{ vars.AZURE_PRIVATE_ENDPOINT_REGION }} diff --git a/.github/workflows/code-health.yml b/.github/workflows/code-health.yml index 87b7cb3a25..652ecacdc4 100644 --- a/.github/workflows/code-health.yml +++ b/.github/workflows/code-health.yml @@ -70,13 +70,17 @@ jobs: - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 - run: make tools # all resources with auto-generated doc must be specified below here - name: Doc for control_plane_ip_addresses - run: export resource_name=control_plane_ip_addresses && make generate-doc + run: make generate-doc resource_name=control_plane_ip_addresses - name: Doc for push_based_log_export - run: export resource_name=push_based_log_export && make generate-doc + run: make generate-doc resource_name=push_based_log_export - name: Doc for search_deployment - run: export resource_name=search_deployment && make generate-doc + run: make generate-doc resource_name=search_deployment + - name: Doc for encryption_at_rest + run: make generate-doc resource_name=encryption_at_rest + - name: Doc for encryption_at_rest_private_endpoint + run: make generate-doc resource_name=encryption_at_rest_private_endpoint - name: Doc for project_ip_addresses - run: export resource_name=project_ip_addresses && make generate-doc + run: make generate-doc resource_name=project_ip_addresses - name: Find mutations id: self_mutation run: |- diff --git a/GNUmakefile b/GNUmakefile index 84619e6bde..d8739af884 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -124,8 +124,10 @@ scaffold-schemas: .PHONY: generate-doc -generate-doc: ## Generate the resource documentation via tfplugindocs - ./scripts/generate-doc.sh ${resource_name} +# e.g. 
run: make generate-doc resource_name=search_deployment +# generate the resource documentation via tfplugindocs +generate-doc: + @scripts/generate-doc.sh ${resource_name} .PHONY: update-tf-compatibility-matrix update-tf-compatibility-matrix: ## Update Terraform Compatibility Matrix documentation diff --git a/contributing/development-setup.md b/contributing/development-setup.md index ece7da2611..7d34219be7 100644 --- a/contributing/development-setup.md +++ b/contributing/development-setup.md @@ -218,15 +218,12 @@ You must also configure the following environment variables before running the t export AZURE_CLIENT_ID= export AZURE_SUBSCRIPTION_ID= export AZURE_RESOURCE_GROUP_NAME= - export AZURE_SECRET= + export AZURE_APP_SECRET= export AZURE_KEY_VAULT_NAME= export AZURE_KEY_IDENTIFIER= export AZURE_TENANT_ID= export AZURE_DIRECTORY_ID= - export AZURE_CLIENT_ID_UPDATED= - export AZURE_RESOURCE_GROUP_NAME_UPDATED= - export AZURE_SECRET_UPDATED= export AZURE_KEY_VAULT_NAME_UPDATED= export AZURE_KEY_IDENTIFIER_UPDATED= ``` diff --git a/contributing/documentation.md b/contributing/documentation.md index eaa3d9eae4..6879da6f53 100644 --- a/contributing/documentation.md +++ b/contributing/documentation.md @@ -12,5 +12,5 @@ We autogenerate the documentation of our provider resources and data sources via - Add the resource/data source templates to the [templates](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/templates) folder. See [README.md](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/templates/README.md) for more info. - Run the Makefile command `generate-doc` ```bash -export resource_name=search_deployment && make generate-doc +make generate-doc resource_name=search_deployment ``` diff --git a/docs/data-sources/encryption_at_rest.md b/docs/data-sources/encryption_at_rest.md new file mode 100644 index 0000000000..d5f2e240a9 --- /dev/null +++ b/docs/data-sources/encryption_at_rest.md @@ -0,0 +1,190 @@ +# Data Source: mongodbatlas_encryption_at_rest + +`mongodbatlas_encryption_at_rest` describes encryption at rest configuration for an Atlas project with one of the following providers: + +[Amazon Web Services Key Management Service](https://docs.atlas.mongodb.com/security-aws-kms/#security-aws-kms) +[Azure Key Vault](https://docs.atlas.mongodb.com/security-azure-kms/#security-azure-kms) +[Google Cloud KMS](https://docs.atlas.mongodb.com/security-gcp-kms/#security-gcp-kms) + + +~> **IMPORTANT** By default, Atlas enables encryption at rest for all cluster storage and snapshot volumes. + +~> **IMPORTANT** Atlas limits this feature to dedicated cluster tiers of M10 and greater. For more information see: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. 
+ + +## Example Usages + +### Configuring encryption at rest using customer key management in AWS +```terraform +resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { + project_id = var.atlas_project_id + provider_name = "AWS" +} + +resource "mongodbatlas_cloud_provider_access_authorization" "auth_role" { + project_id = var.atlas_project_id + role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id + + aws { + iam_assumed_role_arn = aws_iam_role.test_role.arn + } +} + +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + aws_kms_config { + enabled = true + customer_master_key_id = aws_kms_key.kms_key.id + region = var.atlas_region + role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id + } +} + +resource "mongodbatlas_advanced_cluster" "cluster" { + project_id = mongodbatlas_encryption_at_rest.test.project_id + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true + encryption_at_rest_provider = "AWS" + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } +} + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_aws_kms_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.aws_kms_config.valid +} +``` + +### Configuring encryption at rest using customer key management in Azure +```terraform +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + azure_key_vault_config { + enabled = true + azure_environment = "AZURE" + + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + secret = var.azure_client_secret + + resource_group_name = var.azure_resource_group_name + key_vault_name = var.azure_key_vault_name + key_identifier = var.azure_key_identifier + } +} + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_azure_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.azure_key_vault_config.valid +} +``` + +-> **NOTE:** It is possible to configure Atlas Encryption at Rest to communicate with Azure Key Vault using Azure Private Link, ensuring that all traffic between Atlas and Key Vault takes place over Azure’s private network interfaces. Please review `mongodbatlas_encryption_at_rest_private_endpoint` resource for details. 
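The `valid` flag exposed by this data source can also be asserted automatically rather than only exported as an output. A minimal sketch, assuming Terraform 1.5 or later for `check` blocks and reusing the `data.mongodbatlas_encryption_at_rest.test` data source from the Azure example above:

```terraform
# Sketch: emit a warning during plan/apply if the Azure Key Vault configuration
# cannot currently encrypt and decrypt data.
check "azure_encryption_at_rest_is_valid" {
  assert {
    condition     = data.mongodbatlas_encryption_at_rest.test.azure_key_vault_config.valid
    error_message = "Encryption at rest with Azure Key Vault is not valid for this project."
  }
}
```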
+ +### Configuring encryption at rest using customer key management in GCP +```terraform +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + google_cloud_kms_config { + enabled = true + service_account_key = "{\"type\": \"service_account\",\"project_id\": \"my-project-common-0\",\"private_key_id\": \"e120598ea4f88249469fcdd75a9a785c1bb3\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\nMIIEuwIBA(truncated)SfecnS0mT94D9\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"my-email-kms-0@my-project-common-0.iam.gserviceaccount.com\",\"client_id\": \"10180967717292066\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/my-email-kms-0%40my-project-common-0.iam.gserviceaccount.com\"}" + key_version_resource_id = "projects/my-project-common-0/locations/us-east4/keyRings/my-key-ring-0/cryptoKeys/my-key-0/cryptoKeyVersions/1" + } +} + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_gcp_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.google_cloud_kms_config.valid +} +``` + + +## Schema + +### Required + +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. + +### Read-Only + +- `aws_kms_config` (Attributes) Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project. (see [below for nested schema](#nestedatt--aws_kms_config)) +- `azure_key_vault_config` (Attributes) Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV). (see [below for nested schema](#nestedatt--azure_key_vault_config)) +- `google_cloud_kms_config` (Attributes) Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS). (see [below for nested schema](#nestedatt--google_cloud_kms_config)) +- `id` (String) The ID of this resource. + + +### Nested Schema for `aws_kms_config` + +Read-Only: + +- `access_key_id` (String, Sensitive) Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK). +- `customer_master_key_id` (String, Sensitive) Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys. +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web Services (AWS) Key Management Service (KMS). To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `region` (String) Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Atlas deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. To limit a new VPC peering connection to one CIDR block and region, create the connection first. 
Deploy the cluster after the connection starts. +- `role_id` (String) Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. This IAM role has the permissions required to manage your AWS customer master key. +- `secret_access_key` (String, Sensitive) Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key. +- `valid` (Boolean) Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data. + + + +### Nested Schema for `azure_key_vault_config` + +Read-Only: + +- `azure_environment` (String) Azure environment in which your account credentials reside. +- `client_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant. +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `key_identifier` (String, Sensitive) Web address with a unique key that identifies for your Azure Key Vault. +- `key_vault_name` (String) Unique string that identifies the Azure Key Vault that contains your key. +- `require_private_networking` (Boolean) Enable connection to your Azure Key Vault over private networking. +- `resource_group_name` (String) Name of the Azure resource group that contains your Azure Key Vault. +- `secret` (String, Sensitive) Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data. +- `subscription_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies your Azure subscription. +- `tenant_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription. +- `valid` (Boolean) Flag that indicates whether the Azure encryption key can encrypt and decrypt data. + + + +### Nested Schema for `google_cloud_kms_config` + +Read-Only: + +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `key_version_resource_id` (String, Sensitive) Resource path that displays the key version resource ID for your Google Cloud KMS. +- `service_account_key` (String, Sensitive) JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). Format the JSON as a string and not as an object. +- `valid` (Boolean) Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data. + +# Import +Encryption at Rest Settings can be imported using project ID, in the format `project_id`, e.g. 
+ +``` +$ terraform import mongodbatlas_encryption_at_rest.example 1112222b3bf99403840e8934 +``` + +For more information see: [MongoDB Atlas API Reference for Encryption at Rest using Customer Key Management.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management) \ No newline at end of file diff --git a/docs/data-sources/encryption_at_rest_private_endpoint.md b/docs/data-sources/encryption_at_rest_private_endpoint.md new file mode 100644 index 0000000000..3cd1f2e29e --- /dev/null +++ b/docs/data-sources/encryption_at_rest_private_endpoint.md @@ -0,0 +1,42 @@ +# Data Source: mongodbatlas_encryption_at_rest_private_endpoint + +`mongodbatlas_encryption_at_rest_private_endpoint` describes a private endpoint used for encryption at rest using customer-managed keys. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. + +```terraform +data "mongodbatlas_encryption_at_rest_private_endpoint" "single" { + project_id = var.atlas_project_id + cloud_provider = "AZURE" + id = mongodbatlas_encryption_at_rest_private_endpoint.endpoint.id +} + +output "endpoint_connection_name" { + value = data.mongodbatlas_encryption_at_rest_private_endpoint.single.private_endpoint_connection_name +} +``` + + +## Schema + +### Required + +- `cloud_provider` (String) Label that identifies the cloud provider of the private endpoint. +- `id` (String) Unique 24-hexadecimal digit string that identifies the Private Endpoint Service. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. + +### Read-Only + +- `error_message` (String) Error message for failures associated with the Encryption At Rest private endpoint. +- `private_endpoint_connection_name` (String) Connection name of the Azure Private Endpoint. +- `region_name` (String) Cloud provider region in which the Encryption At Rest private endpoint is located. +- `status` (String) State of the Encryption At Rest private endpoint. + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpoint) Documentation. +- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). 
diff --git a/docs/data-sources/encryption_at_rest_private_endpoints.md b/docs/data-sources/encryption_at_rest_private_endpoints.md new file mode 100644 index 0000000000..96f3fd17b0 --- /dev/null +++ b/docs/data-sources/encryption_at_rest_private_endpoints.md @@ -0,0 +1,50 @@ +# Data Source: mongodbatlas_encryption_at_rest_private_endpoints + +`mongodbatlas_encryption_at_rest_private_endpoints` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. + +```terraform +data "mongodbatlas_encryption_at_rest_private_endpoints" "plural" { + project_id = var.atlas_project_id + cloud_provider = "AZURE" +} + +output "number_of_endpoints" { + value = length(data.mongodbatlas_encryption_at_rest_private_endpoints.plural.results) +} +``` + + +## Schema + +### Required + +- `cloud_provider` (String) Human-readable label that identifies the cloud provider for the private endpoints to return. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. + +### Read-Only + +- `results` (Attributes List) List of returned documents that MongoDB Cloud providers when completing this request. (see [below for nested schema](#nestedatt--results)) + + +### Nested Schema for `results` + +Read-Only: + +- `cloud_provider` (String) Label that identifies the cloud provider of the private endpoint. +- `error_message` (String) Error message for failures associated with the Encryption At Rest private endpoint. +- `id` (String) Unique 24-hexadecimal digit string that identifies the Private Endpoint Service. +- `private_endpoint_connection_name` (String) Connection name of the Azure Private Endpoint. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. +- `region_name` (String) Cloud provider region in which the Encryption At Rest private endpoint is located. +- `status` (String) State of the Encryption At Rest private endpoint. + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpointsForCloudProvider) Documentation. +- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). 
diff --git a/docs/resources/encryption_at_rest.md b/docs/resources/encryption_at_rest.md index 5ac02ddbe4..dbf92fbc65 100644 --- a/docs/resources/encryption_at_rest.md +++ b/docs/resources/encryption_at_rest.md @@ -1,20 +1,17 @@ # Resource: mongodbatlas_encryption_at_rest -`mongodbatlas_encryption_at_rest` allows management of encryption at rest for an Atlas project with one of the following providers: +`mongodbatlas_encryption_at_rest` allows management of Encryption at Rest for an Atlas project using Customer Key Management configuration. The following providers are supported: +- [Amazon Web Services Key Management Service](https://docs.atlas.mongodb.com/security-aws-kms/#security-aws-kms) +- [Azure Key Vault](https://docs.atlas.mongodb.com/security-azure-kms/#security-azure-kms) +- [Google Cloud KMS](https://docs.atlas.mongodb.com/security-gcp-kms/#security-gcp-kms) -[Amazon Web Services Key Management Service](https://docs.atlas.mongodb.com/security-aws-kms/#security-aws-kms) -[Azure Key Vault](https://docs.atlas.mongodb.com/security-azure-kms/#security-azure-kms) -[Google Cloud KMS](https://docs.atlas.mongodb.com/security-gcp-kms/#security-gcp-kms) - -The [encryption at rest Terraform module](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/encryption-at-rest/mongodbatlas/latest) makes use of this resource and simplifies its use. - -After configuring at least one Encryption at Rest provider for the Atlas project, Project Owners can enable Encryption at Rest for each Atlas cluster for which they require encryption. The Encryption at Rest provider does not have to match the cluster cloud service provider. +The [encryption at rest Terraform module](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/encryption-at-rest/mongodbatlas/latest) makes use of this resource and simplifies its use. It is currently limited to AWS KMS. Atlas does not automatically rotate user-managed encryption keys. Defer to your preferred Encryption at Rest provider’s documentation and guidance for best practices on key rotation. Atlas automatically creates a 90-day key rotation alert when you configure Encryption at Rest using your Key Management in an Atlas project. See [Encryption at Rest](https://docs.atlas.mongodb.com/security-kms-encryption/index.html) for more information, including prerequisites and restrictions. -~> **IMPORTANT** Atlas encrypts all cluster storage and snapshot volumes, securing all cluster data on disk: a concept known as encryption at rest, by default. +~> **IMPORTANT** By default, Atlas enables encryption at rest for all cluster storage and snapshot volumes. ~> **IMPORTANT** Atlas limits this feature to dedicated cluster tiers of M10 and greater. For more information see: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management @@ -23,40 +20,74 @@ See [Encryption at Rest](https://docs.atlas.mongodb.com/security-kms-encryption/ -> **IMPORTANT NOTE** To disable the encryption at rest with customer key management for a project all existing clusters in the project must first either have encryption at rest for the provider set to none, e.g. `encryption_at_rest_provider = "NONE"`, or be deleted. -## Example Usage +## Enabling Encryption at Rest for existing Atlas cluster + +After configuring at least one key management provider for an Atlas project, Project Owners can enable customer key management for each Atlas cluster for which they require encryption. 
For clusters defined in terraform, the [`encryption_at_rest_provider` attribute](advanced_cluster#encryption_at_rest_provider) can be used in both `mongodbatlas_advanced_cluster` and `mongodbatlas_cluster` resources. The key management provider does not have to match the cluster cloud service provider. + +Please reference [Enable Customer Key Management for an Atlas Cluster](https://www.mongodb.com/docs/atlas/security-kms-encryption/#enable-customer-key-management-for-an-service-cluster) documentation for additional considerations. + + +## Example Usages + +### Configuring encryption at rest using customer key management in AWS +The configuration of encryption at rest with customer key management, `mongodbatlas_encryption_at_rest`, needs to be completed before a cluster is created in the project. Force this wait by using an implicit dependency via `project_id` as shown in the example below. ```terraform +resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { + project_id = var.atlas_project_id + provider_name = "AWS" +} + +resource "mongodbatlas_cloud_provider_access_authorization" "auth_role" { + project_id = var.atlas_project_id + role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id + + aws { + iam_assumed_role_arn = aws_iam_role.test_role.arn + } +} + resource "mongodbatlas_encryption_at_rest" "test" { - project_id = "" + project_id = var.atlas_project_id aws_kms_config { enabled = true - customer_master_key_id = "5ce83906-6563-46b7-8045-11c20e3a5766" - region = "US_EAST_1" - role_id = "60815e2fe01a49138a928ebb" + customer_master_key_id = aws_kms_key.kms_key.id + region = var.atlas_region + role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id } +} - azure_key_vault_config { - enabled = true - client_id = "g54f9e2-89e3-40fd-8188-EXAMPLEID" - azure_environment = "AZURE" - subscription_id = "0ec944e3-g725-44f9-a147-EXAMPLEID" - resource_group_name = "ExampleRGName" - key_vault_name = "EXAMPLEKeyVault" - key_identifier = "https://EXAMPLEKeyVault.vault.azure.net/keys/EXAMPLEKey/d891821e3d364e9eb88fbd3d11807b86" - secret = "EXAMPLESECRET" - tenant_id = "e8e4b6ba-ff32-4c88-a9af-EXAMPLEID" - } +resource "mongodbatlas_advanced_cluster" "cluster" { + project_id = mongodbatlas_encryption_at_rest.test.project_id + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true + encryption_at_rest_provider = "AWS" - google_cloud_kms_config { - enabled = true - service_account_key = "{\"type\": \"service_account\",\"project_id\": \"my-project-common-0\",\"private_key_id\": \"e120598ea4f88249469fcdd75a9a785c1bb3\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\nMIIEuwIBA(truncated)SfecnS0mT94D9\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"my-email-kms-0@my-project-common-0.iam.gserviceaccount.com\",\"client_id\": \"10180967717292066\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/my-email-kms-0%40my-project-common-0.iam.gserviceaccount.com\"}" - key_version_resource_id = "projects/my-project-common-0/locations/us-east4/keyRings/my-key-ring-0/cryptoKeys/my-key-0/cryptoKeyVersions/1" + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } } } + +data "mongodbatlas_encryption_at_rest" 
"test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_aws_kms_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.aws_kms_config.valid +} ``` -**NOTE** if using the two resources path for cloud provider access, `cloud_provider_access_setup` and `cloud_provider_access_authorization`, you may need to define a `depends_on` statement for these two resources, because terraform is not able to infer the dependency. +**NOTE** If using the two resources path for cloud provider access, `cloud_provider_access_setup` and `cloud_provider_access_authorization`, you may need to define a `depends_on` statement for these two resources, because terraform is not able to infer the dependency. ```terraform resource "mongodbatlas_encryption_at_rest" "default" { @@ -65,78 +96,123 @@ resource "mongodbatlas_encryption_at_rest" "default" { } ``` -## Example: Configuring encryption at rest using customer key management in Azure and then creating a cluster - -The configuration of encryption at rest with customer key management, `mongodbatlas_encryption_at_rest`, needs to be completed before a cluster is created in the project. Force this wait by using an implicit dependency via `project_id` as shown in the example below. - +### Configuring encryption at rest using customer key management in Azure ```terraform -resource "mongodbatlas_encryption_at_rest" "example" { - project_id = "" +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id azure_key_vault_config { - enabled = true - client_id = "g54f9e2-89e3-40fd-8188-EXAMPLEID" - azure_environment = "AZURE" - subscription_id = "0ec944e3-g725-44f9-a147-EXAMPLEID" - resource_group_name = "ExampleRGName" - key_vault_name = "EXAMPLEKeyVault" - key_identifier = "https://EXAMPLEKeyVault.vault.azure.net/keys/EXAMPLEKey/d891821e3d364e9eb88fbd3d11807b86" - secret = "EXAMPLESECRET" - tenant_id = "e8e4b6ba-ff32-4c88-a9af-EXAMPLEID" + enabled = true + azure_environment = "AZURE" + + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + secret = var.azure_client_secret + + resource_group_name = var.azure_resource_group_name + key_vault_name = var.azure_key_vault_name + key_identifier = var.azure_key_identifier } } -resource "mongodbatlas_advanced_cluster" "example_cluster" { - project_id = mongodbatlas_encryption_at_rest.example.project_id - name = "CLUSTER NAME" - cluster_type = "REPLICASET" - backup_enabled = true - encryption_at_rest_provider = "AZURE" +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} - replication_specs { - region_configs { - priority = 7 - provider_name = "AZURE" - region_name = "REGION" - electable_specs { - instance_size = "M10" - node_count = 3 - } - } - } +output "is_azure_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.azure_key_vault_config.valid } +``` + +#### Manage Customer Keys with Azure Key Vault Over Private Endpoints +It is possible to configure Atlas Encryption at Rest to communicate with Azure Key Vault using Azure Private Link, ensuring that all traffic between Atlas and Key Vault takes place over Azure’s private network interfaces. This requires enabling `azure_key_vault_config.require_private_networking` attribute, together with the configuration of `mongodbatlas_encryption_at_rest_private_endpoint` resource. 
+ +Please review [`mongodbatlas_encryption_at_rest_private_endpoint` resource documentation](encryption_at_rest_private_endpoint) and [complete example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure) for details on this functionality. + + +### Configuring encryption at rest using customer key management in GCP +```terraform +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + google_cloud_kms_config { + enabled = true + service_account_key = "{\"type\": \"service_account\",\"project_id\": \"my-project-common-0\",\"private_key_id\": \"e120598ea4f88249469fcdd75a9a785c1bb3\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\nMIIEuwIBA(truncated)SfecnS0mT94D9\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"my-email-kms-0@my-project-common-0.iam.gserviceaccount.com\",\"client_id\": \"10180967717292066\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/my-email-kms-0%40my-project-common-0.iam.gserviceaccount.com\"}" + key_version_resource_id = "projects/my-project-common-0/locations/us-east4/keyRings/my-key-ring-0/cryptoKeys/my-key-0/cryptoKeyVersions/1" + } +} ``` -## Argument Reference + +## Schema + +### Required + +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. + +### Optional + +- `aws_kms_config` (Block List) Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project. (see [below for nested schema](#nestedblock--aws_kms_config)) +- `azure_key_vault_config` (Block List) Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV). (see [below for nested schema](#nestedblock--azure_key_vault_config)) +- `google_cloud_kms_config` (Block List) Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS). (see [below for nested schema](#nestedblock--google_cloud_kms_config)) + +### Read-Only + +- `id` (String) The ID of this resource. + + +### Nested Schema for `aws_kms_config` + +Optional: + +- `access_key_id` (String, Sensitive) Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK). +- `customer_master_key_id` (String, Sensitive) Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys. +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web Services (AWS) Key Management Service (KMS). To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `region` (String) Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Cloud deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. 
To limit a new VPC peering connection to one CIDR block and region, create the connection first. Deploy the cluster after the connection starts. +- `role_id` (String) Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. This IAM role has the permissions required to manage your AWS customer master key. +- `secret_access_key` (String, Sensitive) Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key. + +Read-Only: + +- `valid` (Boolean) Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data. + + + +### Nested Schema for `azure_key_vault_config` + +Optional: + +- `azure_environment` (String) Azure environment in which your account credentials reside. +- `client_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant. +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `key_identifier` (String, Sensitive) Web address with a unique key that identifies for your Azure Key Vault. +- `key_vault_name` (String) Unique string that identifies the Azure Key Vault that contains your key. +- `require_private_networking` (Boolean) Enable connection to your Azure Key Vault over private networking. +- `resource_group_name` (String) Name of the Azure resource group that contains your Azure Key Vault. +- `secret` (String, Sensitive) Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data. +- `subscription_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies your Azure subscription. +- `tenant_id` (String, Sensitive) Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription. + +Read-Only: + +- `valid` (Boolean) Flag that indicates whether the Azure encryption key can encrypt and decrypt data. + -* `project_id` - (Required) The unique identifier for the project. + +### Nested Schema for `google_cloud_kms_config` -### aws_kms_config -Refer to the example in the [official github repository](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples) to implement Encryption at Rest -* `enabled` - Specifies whether Encryption at Rest is enabled for an Atlas project, To disable Encryption at Rest, pass only this parameter with a value of false, When you disable Encryption at Rest, Atlas also removes the configuration details. -* `customer_master_key_id` - The AWS customer master key used to encrypt and decrypt the MongoDB master keys. -* `region` - The AWS region in which the AWS customer master key exists: CA_CENTRAL_1, US_EAST_1, US_EAST_2, US_WEST_1, US_WEST_2, SA_EAST_1 -* `role_id` - ID of an AWS IAM role authorized to manage an AWS customer master key. To find the ID for an existing IAM role check the `role_id` attribute of the `mongodbatlas_cloud_provider_access` resource. 
+Optional: -### azure_key_vault_config -* `enabled` - Specifies whether Encryption at Rest is enabled for an Atlas project. To disable Encryption at Rest, pass only this parameter with a value of false. When you disable Encryption at Rest, Atlas also removes the configuration details. -* `client_id` - The client ID, also known as the application ID, for an Azure application associated with the Azure AD tenant. -* `azure_environment` - The Azure environment where the Azure account credentials reside. Valid values are the following: AZURE, AZURE_CHINA, AZURE_GERMANY -* `subscription_id` - The unique identifier associated with an Azure subscription. -* `resource_group_name` - The name of the Azure Resource group that contains an Azure Key Vault. -* `key_vault_name` - The name of an Azure Key Vault containing your key. -* `key_identifier` - The unique identifier of a key in an Azure Key Vault. -* `secret` - The secret associated with the Azure Key Vault specified by azureKeyVault.tenantID. -* `tenant_id` - The unique identifier for an Azure AD tenant within an Azure subscription. +- `enabled` (Boolean) Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`. +- `key_version_resource_id` (String, Sensitive) Resource path that displays the key version resource ID for your Google Cloud KMS. +- `service_account_key` (String, Sensitive) JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). Format the JSON as a string and not as an object. -### google_cloud_kms_config -* `enabled` - Specifies whether Encryption at Rest is enabled for an Atlas project. To disable Encryption at Rest, pass only this parameter with a value of false. When you disable Encryption at Rest, Atlas also removes the configuration details. -* `service_account_key` - String-formatted JSON object containing GCP KMS credentials from your GCP account. -* `key_version_resource_id` - The Key Version Resource ID from your GCP account. +Read-Only: -## Import +- `valid` (Boolean) Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data. +# Import Encryption at Rest Settings can be imported using project ID, in the format `project_id`, e.g. ``` diff --git a/docs/resources/encryption_at_rest_private_endpoint.md b/docs/resources/encryption_at_rest_private_endpoint.md new file mode 100644 index 0000000000..3e3e068d12 --- /dev/null +++ b/docs/resources/encryption_at_rest_private_endpoint.md @@ -0,0 +1,94 @@ +# Resource: mongodbatlas_encryption_at_rest_private_endpoint + +`mongodbatlas_encryption_at_rest_private_endpoint` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures all traffic between Atlas and customer key management systems take place over private network interfaces. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this resource. 
To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +-> **NOTE:** As a prerequisite to configuring a private endpoint for Azure Key Vault, the corresponding [`mongodbatlas_encryption_at_rest`](encryption_at_rest) resource has to be adjusted by configuring [`azure_key_vault_config.require_private_networking`](encryption_at_rest#require_private_networking) to true. This attribute should be updated in place, ensuring the customer-managed keys encryption is never disabled. + +-> **NOTE:** This resource does not support update operations. To modify the values of a private endpoint, the existing resource must be deleted and a new one created with the modified values. + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. + +### Configuring Atlas Encryption at Rest using Azure Key Vault with Azure Private Link + +Make sure to reference the [complete example section](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure) for detailed steps and considerations. + +```terraform +resource "mongodbatlas_encryption_at_rest" "ear" { + project_id = var.atlas_project_id + + azure_key_vault_config { + require_private_networking = true + + enabled = true + azure_environment = "AZURE" + + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + secret = var.azure_client_secret + + resource_group_name = var.azure_resource_group_name + key_vault_name = var.azure_key_vault_name + key_identifier = var.azure_key_identifier + } +} + +# Creates private endpoint +resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" { + project_id = mongodbatlas_encryption_at_rest.ear.project_id + cloud_provider = "AZURE" + region_name = var.azure_region_name +} + +locals { + key_vault_resource_id = "/subscriptions/${var.azure_subscription_id}/resourceGroups/${var.azure_resource_group_name}/providers/Microsoft.KeyVault/vaults/${var.azure_key_vault_name}" +} + +# Approves private endpoint connection from Azure Key Vault +resource "azapi_update_resource" "approval" { + type = "Microsoft.KeyVault/Vaults/PrivateEndpointConnections@2023-07-01" + name = mongodbatlas_encryption_at_rest_private_endpoint.endpoint.private_endpoint_connection_name + parent_id = local.key_vault_resource_id + + body = jsonencode({ + properties = { + privateLinkServiceConnectionState = { + description = "Approved via Terraform" + status = "Approved" + } + } + }) +} +``` + + +## Schema + +### Required + +- `cloud_provider` (String) Label that identifies the cloud provider for the Encryption At Rest private endpoint. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. +- `region_name` (String) Cloud provider region in which the Encryption At Rest private endpoint is located. + +### Read-Only + +- `error_message` (String) Error message for failures associated with the Encryption At Rest private endpoint. +- `id` (String) Unique 24-hexadecimal digit string that identifies the Private Endpoint Service. +- `private_endpoint_connection_name` (String) Connection name of the Azure Private Endpoint. +- `status` (String) State of the Encryption At Rest private endpoint.
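Because the approval step in the example above uses `azapi_update_resource`, the configuration needs both the MongoDB Atlas and `azapi` providers declared. A minimal sketch, mirroring the versions pinned in the repository's complete example:

```terraform
# Sketch: provider requirements for the private endpoint example above.
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.18"
    }
    azapi = {
      source  = "Azure/azapi"
      version = "~> 1.15"
    }
  }
  required_version = ">= 1.0"
}
```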
+ +# Import +Encryption At Rest Private Endpoint resource can be imported using the project ID, cloud provider, and private endpoint ID. The format must be `{project_id}-{cloud_provider}-{private_endpoint_id}` e.g. + +``` +$ terraform import mongodbatlas_encryption_at_rest_private_endpoint.test 650972848269185c55f40ca1-AZURE-650972848269185c55f40ca2 +``` + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpoint) Documentation. +- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). diff --git a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf index fb4b6d9826..e07e46e1e4 100644 --- a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf +++ b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf @@ -24,7 +24,7 @@ resource "mongodbatlas_encryption_at_rest" "test" { } resource "mongodbatlas_advanced_cluster" "cluster" { - project_id = var.atlas_project_id + project_id = mongodbatlas_encryption_at_rest.test.project_id name = "MyCluster" cluster_type = "REPLICASET" backup_enabled = true @@ -42,3 +42,11 @@ resource "mongodbatlas_advanced_cluster" "cluster" { } } } + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_aws_kms_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.aws_kms_config.valid +} diff --git a/examples/mongodbatlas_encryption_at_rest/azure/README.md b/examples/mongodbatlas_encryption_at_rest/azure/README.md new file mode 100644 index 0000000000..512a16d281 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest/azure/README.md @@ -0,0 +1,57 @@ +# MongoDB Atlas Provider -- Encryption At Rest using Customer Key Management with Azure +This example shows how to configure encryption at rest with customer managed keys with Azure Key Vault. + +Note: It is possible to configure Atlas Encryption at Rest to communicate with Azure Key Vault using Azure Private Link, ensuring that all traffic between Atlas and Key Vault takes place over Azure’s private network interfaces. Please review `mongodbatlas_encryption_at_rest_private_endpoint` resource for details. + +## Dependencies + +* Terraform MongoDB Atlas Provider +* A MongoDB Atlas account +* A Microsoft Azure account + +## Usage + +**1\. 
Provide the appropriate values for the input variables.** + +- `atlas_public_key`: The public API key for MongoDB Atlas +- `atlas_private_key`: The private API key for MongoDB Atlas +- `atlas_project_id`: Atlas Project ID +- `azure_subscription_id`: Azure ID that identifies your Azure subscription +- `azure_client_id`: Azure ID identifies an Azure application associated with your Azure Active Directory tenant +- `azure_client_secret`: Secret associated to the Azure application +- `azure_tenant_id`: Azure ID that identifies the Azure Active Directory tenant within your Azure subscription +- `azure_resource_group_name`: Name of the Azure resource group that contains your Azure Key Vault +- `azure_key_vault_name`: Unique string that identifies the Azure Key Vault that contains your key +- `azure_key_identifier`: Web address with a unique key that identifies for your Azure Key Vault + +**NOTE**: The Azure application (associated to `azure_client_id`) must have the following permissions associated to the Azure Key Vault (`azure_key_vault_name`): +- GET (Key Management Operation), ENCRYPT (Cryptographic Operation) and DECRYPT (Cryptographic Operation) policy permissions. +- A `Key Vault Reader` role. + +**2\. Review the Terraform plan.** + +Execute the following command and ensure you are happy with the plan. + +``` bash +$ terraform plan +``` +This project currently supports the following deployments: + +- Configure encryption at rest in an existing project using a custom Azure Key. + +**3\. Execute the Terraform apply.** + +Now execute the plan to provision the resources. + +``` bash +$ terraform apply +``` + +**4\. Destroy the resources.** + +When you have finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges. + +``` bash +$ terraform destroy +``` + diff --git a/examples/mongodbatlas_encryption_at_rest/azure/main.tf b/examples/mongodbatlas_encryption_at_rest/azure/main.tf new file mode 100644 index 0000000000..2323df7241 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest/azure/main.tf @@ -0,0 +1,25 @@ +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + azure_key_vault_config { + enabled = true + azure_environment = "AZURE" + + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + secret = var.azure_client_secret + + resource_group_name = var.azure_resource_group_name + key_vault_name = var.azure_key_vault_name + key_identifier = var.azure_key_identifier + } +} + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_azure_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.azure_key_vault_config.valid +} diff --git a/examples/mongodbatlas_encryption_at_rest/azure/providers.tf b/examples/mongodbatlas_encryption_at_rest/azure/providers.tf new file mode 100644 index 0000000000..6fc0d099e0 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest/azure/providers.tf @@ -0,0 +1,5 @@ +provider "mongodbatlas" { + public_key = var.atlas_public_key + private_key = var.atlas_private_key +} + diff --git a/examples/mongodbatlas_encryption_at_rest/azure/variables.tf b/examples/mongodbatlas_encryption_at_rest/azure/variables.tf new file mode 100644 index 0000000000..d4b94a39b5 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest/azure/variables.tf @@ -0,0 +1,50 @@ +variable "atlas_public_key" { + description = "The public API key for MongoDB Atlas" + 
type = string +} +variable "atlas_private_key" { + description = "The private API key for MongoDB Atlas" + type = string + sensitive = true +} +variable "atlas_project_id" { + description = "Atlas Project ID" + type = string +} +variable "azure_subscription_id" { + type = string + description = "Azure ID that identifies your Azure subscription" +} + +variable "azure_client_id" { + type = string + description = "Azure ID identifies an Azure application associated with your Azure Active Directory tenant" +} + +variable "azure_client_secret" { + type = string + sensitive = true + description = "Secret associated to the Azure application" +} + +variable "azure_tenant_id" { + type = string + description = "Azure ID that identifies the Azure Active Directory tenant within your Azure subscription" +} + +variable "azure_resource_group_name" { + type = string + description = "Name of the Azure resource group that contains your Azure Key Vault" +} + +variable "azure_key_vault_name" { + type = string + description = "Unique string that identifies the Azure Key Vault that contains your key" +} + +variable "azure_key_identifier" { + type = string + description = "Web address with a unique key that identifies for your Azure Key Vault" +} + + diff --git a/examples/mongodbatlas_encryption_at_rest/azure/versions.tf b/examples/mongodbatlas_encryption_at_rest/azure/versions.tf new file mode 100644 index 0000000000..9b4be6c14c --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest/azure/versions.tf @@ -0,0 +1,9 @@ +terraform { + required_providers { + mongodbatlas = { + source = "mongodb/mongodbatlas" + version = "~> 1.18" + } + } + required_version = ">= 1.0" +} diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md new file mode 100644 index 0000000000..727ec3b95b --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md @@ -0,0 +1,73 @@ +# MongoDB Atlas Provider - Encryption At Rest using Customer Key Management via Private Network Interfaces (Azure) +This example shows how to configure encryption at rest using Azure with customer managed keys ensuring all communication with Azure Key Vault happens exclusively over Azure Private Link. + +## Dependencies + +* Terraform MongoDB Atlas Provider v1.19.0 minimum +* A MongoDB Atlas account +* Terraform Azure `azapi` provider +* A Microsoft Azure account + +## Usage + +**1\. Ensure that Encryption At Rest Azure Key Vault Private Endpoint feature is available for your project.** + +The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. + +**2\. Enable `MONGODB_ATLAS_ENABLE_PREVIEW` flag.** + +This step is needed to make use of the `mongodbatlas_encryption_at_rest_private_endpoint` resource. + +``` +export MONGODB_ATLAS_ENABLE_PREVIEW="true" +``` + +**3\. 
Provide the appropriate values for the input variables.** + +- `atlas_public_key`: The public API key for MongoDB Atlas +- `atlas_private_key`: The private API key for MongoDB Atlas +- `atlas_project_id`: Atlas Project ID +- `azure_subscription_id`: Azure ID that identifies your Azure subscription +- `azure_client_id`: Azure ID that identifies an Azure application associated with your Azure Active Directory tenant +- `azure_client_secret`: Secret associated with the Azure application +- `azure_tenant_id`: Azure ID that identifies the Azure Active Directory tenant within your Azure subscription +- `azure_resource_group_name`: Name of the Azure resource group that contains your Azure Key Vault +- `azure_key_vault_name`: Unique string that identifies the Azure Key Vault that contains your key +- `azure_key_identifier`: Web address with a unique key that identifies your Azure Key Vault +- `azure_region_name`: Region in which the Encryption At Rest private endpoint is located + + +**NOTE**: The Azure application (associated with `azure_client_id`) must have the following permissions on the Azure Key Vault (`azure_key_vault_name`): +- GET (Key Management Operation), ENCRYPT (Cryptographic Operation) and DECRYPT (Cryptographic Operation) policy permissions. +- A `Key Vault Reader` role. + +**4\. Review the Terraform plan.** + +Execute the following command and ensure you are happy with the plan. + +``` bash +$ terraform plan +``` +This project will execute the following changes to achieve a successful Azure Private Link for customer-managed keys: + +- Configure encryption at rest in an existing project using a custom Azure key. For successful private networking configuration, the `require_private_networking` attribute in `mongodbatlas_encryption_at_rest` is set to true. +- Create a private endpoint for the existing project in a given Azure region using `mongodbatlas_encryption_at_rest_private_endpoint`. +- Approve the connection from the Azure Key Vault. This is done through Terraform with the `azapi_update_resource` resource. Alternatively, the private connection can be approved through the Azure UI or CLI. + - CLI example command: `az keyvault private-endpoint-connection approve --approval-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}` + +**5\. Execute the Terraform apply.** + +Now execute the plan to provision the resources. + +``` bash +$ terraform apply +``` + +**6\. Destroy the resources.** + +When you have finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
+ +``` bash +$ terraform destroy +``` + diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/main.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/main.tf new file mode 100644 index 0000000000..636a423013 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/main.tf @@ -0,0 +1,46 @@ +resource "mongodbatlas_encryption_at_rest" "ear" { + project_id = var.atlas_project_id + + azure_key_vault_config { + require_private_networking = true + + enabled = true + azure_environment = "AZURE" + + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + secret = var.azure_client_secret + + resource_group_name = var.azure_resource_group_name + key_vault_name = var.azure_key_vault_name + key_identifier = var.azure_key_identifier + } +} + +# Creates private endpoint +resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" { + project_id = mongodbatlas_encryption_at_rest.ear.project_id + cloud_provider = "AZURE" + region_name = var.azure_region_name +} + +locals { + key_vault_resource_id = "/subscriptions/${var.azure_subscription_id}/resourceGroups/${var.azure_resource_group_name}/providers/Microsoft.KeyVault/vaults/${var.azure_key_vault_name}" +} + +# Approves private endpoint connection from Azure Key Vault +resource "azapi_update_resource" "approval" { + type = "Microsoft.KeyVault/Vaults/PrivateEndpointConnections@2023-07-01" + name = mongodbatlas_encryption_at_rest_private_endpoint.endpoint.private_endpoint_connection_name + parent_id = local.key_vault_resource_id + + body = jsonencode({ + properties = { + privateLinkServiceConnectionState = { + description = "Approved via Terraform" + status = "Approved" + } + } + }) +} diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/plural-data-source.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/plural-data-source.tf new file mode 100644 index 0000000000..f2cb36d2dd --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/plural-data-source.tf @@ -0,0 +1,8 @@ +data "mongodbatlas_encryption_at_rest_private_endpoints" "plural" { + project_id = var.atlas_project_id + cloud_provider = "AZURE" +} + +output "number_of_endpoints" { + value = length(data.mongodbatlas_encryption_at_rest_private_endpoints.plural.results) +} diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/providers.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/providers.tf new file mode 100644 index 0000000000..432a001a39 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/providers.tf @@ -0,0 +1,11 @@ +provider "mongodbatlas" { + public_key = var.atlas_public_key + private_key = var.atlas_private_key +} + +provider "azapi" { + tenant_id = var.azure_tenant_id + subscription_id = var.azure_subscription_id + client_id = var.azure_client_id + client_secret = var.azure_client_secret +} diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/singular-data-source.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/singular-data-source.tf new file mode 100644 index 0000000000..f3699a2353 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/singular-data-source.tf @@ -0,0 +1,9 @@ +data "mongodbatlas_encryption_at_rest_private_endpoint" "single" { + project_id = var.atlas_project_id + cloud_provider = "AZURE" + id = 
mongodbatlas_encryption_at_rest_private_endpoint.endpoint.id +} + +output "endpoint_connection_name" { + value = data.mongodbatlas_encryption_at_rest_private_endpoint.single.private_endpoint_connection_name +} diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/variables.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/variables.tf new file mode 100644 index 0000000000..50a8762fc3 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/variables.tf @@ -0,0 +1,54 @@ +variable "atlas_public_key" { + description = "The public API key for MongoDB Atlas" + type = string +} +variable "atlas_private_key" { + description = "The private API key for MongoDB Atlas" + type = string + sensitive = true +} +variable "atlas_project_id" { + description = "Atlas Project ID" + type = string +} +variable "azure_subscription_id" { + type = string + description = "Azure ID that identifies your Azure subscription" +} + +variable "azure_client_id" { + type = string + description = "Azure ID identifies an Azure application associated with your Azure Active Directory tenant" +} + +variable "azure_client_secret" { + type = string + sensitive = true + description = "Secret associated to the Azure application" +} + +variable "azure_tenant_id" { + type = string + description = "Azure ID that identifies the Azure Active Directory tenant within your Azure subscription" +} + +variable "azure_resource_group_name" { + type = string + description = "Name of the Azure resource group that contains your Azure Key Vault" +} + +variable "azure_key_vault_name" { + type = string + description = "Unique string that identifies the Azure Key Vault that contains your key" +} + +variable "azure_key_identifier" { + type = string + description = "Web address with a unique key that identifies for your Azure Key Vault" +} + +variable "azure_region_name" { + type = string + description = "Region in which the Encryption At Rest private endpoint is located." 
+} + diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/versions.tf b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/versions.tf new file mode 100644 index 0000000000..c955a31212 --- /dev/null +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/versions.tf @@ -0,0 +1,14 @@ +terraform { + required_providers { + mongodbatlas = { + source = "mongodb/mongodbatlas" + version = "~> 1.18" + } + + azapi = { + source = "Azure/azapi" + version = "~> 1.15" + } + } + required_version = ">= 1.0" +} diff --git a/internal/common/conversion/type_conversion.go b/internal/common/conversion/type_conversion.go index 21a555db76..a05d8ecccd 100644 --- a/internal/common/conversion/type_conversion.go +++ b/internal/common/conversion/type_conversion.go @@ -62,3 +62,8 @@ func IsStringPresent(strPtr *string) bool { func MongoDBRegionToAWSRegion(region string) string { return strings.ReplaceAll(strings.ToLower(region), "_", "-") } + +// AWSRegionToMongoDBRegion converts region in us-east-1-like format to US_EAST_1-like +func AWSRegionToMongoDBRegion(region string) string { + return strings.ReplaceAll(strings.ToUpper(region), "-", "_") +} diff --git a/internal/common/conversion/type_conversion_test.go b/internal/common/conversion/type_conversion_test.go index badf82f279..028d7d163a 100644 --- a/internal/common/conversion/type_conversion_test.go +++ b/internal/common/conversion/type_conversion_test.go @@ -4,8 +4,9 @@ import ( "testing" "time" - "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/stretchr/testify/assert" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" ) func TestTimeWithoutNanos(t *testing.T) { @@ -78,3 +79,19 @@ func TestMongoDBRegionToAWSRegion(t *testing.T) { } } } + +func TestAWSRegionToMongoDBRegion(t *testing.T) { + tests := []struct { + region string + expected string + }{ + {"us-east-1", "US_EAST_1"}, + {"US-EAST-1", "US_EAST_1"}, + } + + for _, test := range tests { + if resp := conversion.AWSRegionToMongoDBRegion(test.region); resp != test.expected { + t.Errorf("AWSRegionToMongoDBRegion(%v) = %v; want %v", test.region, resp, test.expected) + } + } +} diff --git a/internal/common/dsschema/page_request.go b/internal/common/dsschema/page_request.go new file mode 100644 index 0000000000..4195f46104 --- /dev/null +++ b/internal/common/dsschema/page_request.go @@ -0,0 +1,31 @@ +package dsschema + +import ( + "context" + "errors" + "net/http" +) + +type PaginateResponse[T any] interface { + GetResults() []T + GetTotalCount() int +} + +func AllPages[T any](ctx context.Context, listOnPage func(ctx context.Context, pageNum int) (PaginateResponse[T], *http.Response, error)) ([]T, error) { + var results []T + for currentPage := 1; ; currentPage++ { + resp, _, err := listOnPage(ctx, currentPage) + if err != nil { + return nil, err + } + if resp == nil { + return nil, errors.New("no response") + } + currentResults := resp.GetResults() + results = append(results, currentResults...) 
+ if len(currentResults) == 0 || len(results) >= resp.GetTotalCount() { + break + } + } + return results, nil +} diff --git a/internal/common/retrystrategy/retry_state.go b/internal/common/retrystrategy/retry_state.go index f926cc3225..00d5f6670e 100644 --- a/internal/common/retrystrategy/retry_state.go +++ b/internal/common/retrystrategy/retry_state.go @@ -1,11 +1,18 @@ package retrystrategy const ( - RetryStrategyPendingState = "PENDING" - RetryStrategyCompletedState = "COMPLETED" - RetryStrategyErrorState = "ERROR" - RetryStrategyPausedState = "PAUSED" - RetryStrategyUpdatingState = "UPDATING" - RetryStrategyIdleState = "IDLE" - RetryStrategyDeletedState = "DELETED" + RetryStrategyPendingState = "PENDING" + RetryStrategyCompletedState = "COMPLETED" + RetryStrategyErrorState = "ERROR" + RetryStrategyPausedState = "PAUSED" + RetryStrategyUpdatingState = "UPDATING" + RetryStrategyDeletingState = "DELETING" + RetryStrategyInitiatingState = "INITIATING" + RetryStrategyIdleState = "IDLE" + RetryStrategyFailedState = "FAILED" + RetryStrategyActiveState = "ACTIVE" + RetryStrategyDeletedState = "DELETED" + + RetryStrategyPendingAcceptanceState = "PENDING_ACCEPTANCE" + RetryStrategyPendingRecreationState = "PENDING_RECREATION" ) diff --git a/internal/provider/provider.go b/internal/provider/provider.go index 17b2bbdef5..6c2341da86 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -31,6 +31,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/controlplaneipaddresses" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/databaseuser" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrestprivateendpoint" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/project" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectipaccesslist" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectipaddresses" @@ -435,8 +436,12 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.D streamconnection.PluralDataSource, controlplaneipaddresses.DataSource, projectipaddresses.DataSource, + encryptionatrest.DataSource, + } + previewDataSources := []func() datasource.DataSource{ // Data sources not yet in GA + encryptionatrestprivateendpoint.DataSource, + encryptionatrestprivateendpoint.PluralDataSource, } - previewDataSources := []func() datasource.DataSource{} // Data sources not yet in GA if providerEnablePreview { dataSources = append(dataSources, previewDataSources...) } @@ -455,7 +460,9 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resou streaminstance.Resource, streamconnection.Resource, } - previewResources := []func() resource.Resource{} // Resources not yet in GA + previewResources := []func() resource.Resource{ // Resources not yet in GA + encryptionatrestprivateendpoint.Resource, + } if providerEnablePreview { resources = append(resources, previewResources...) 
} diff --git a/internal/service/encryptionatrest/data_source.go b/internal/service/encryptionatrest/data_source.go new file mode 100644 index 0000000000..f0acd090cf --- /dev/null +++ b/internal/service/encryptionatrest/data_source.go @@ -0,0 +1,47 @@ +package encryptionatrest + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" +) + +var _ datasource.DataSource = &encryptionAtRestDS{} +var _ datasource.DataSourceWithConfigure = &encryptionAtRestDS{} + +func DataSource() datasource.DataSource { + return &encryptionAtRestDS{ + DSCommon: config.DSCommon{ + DataSourceName: encryptionAtRestResourceName, + }, + } +} + +type encryptionAtRestDS struct { + config.DSCommon +} + +func (d *encryptionAtRestDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = DataSourceSchema(ctx) +} + +func (d *encryptionAtRestDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + var earConfig TFEncryptionAtRestDSModel + resp.Diagnostics.Append(req.Config.Get(ctx, &earConfig)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := d.Client.AtlasV2 + projectID := earConfig.ProjectID.ValueString() + + encryptionResp, _, err := connV2.EncryptionAtRestUsingCustomerKeyManagementApi.GetEncryptionAtRest(context.Background(), projectID).Execute() + if err != nil { + resp.Diagnostics.AddError("error fetching resource", err.Error()) + return + } + + resp.Diagnostics.Append(resp.State.Set(ctx, NewTFEncryptionAtRestDSModel(projectID, encryptionResp))...) +} diff --git a/internal/service/encryptionatrest/data_source_schema.go b/internal/service/encryptionatrest/data_source_schema.go new file mode 100644 index 0000000000..540fc59159 --- /dev/null +++ b/internal/service/encryptionatrest/data_source_schema.go @@ -0,0 +1,184 @@ +package encryptionatrest + +import ( + "context" + + "go.mongodb.org/atlas-sdk/v20240805003/admin" + + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +func DataSourceSchema(ctx context.Context) schema.Schema { + return schema.Schema{ + Attributes: map[string]schema.Attribute{ + "aws_kms_config": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "access_key_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK).", + MarkdownDescription: "Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK).", + }, + "customer_master_key_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys.", + MarkdownDescription: "Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys.", + }, + "enabled": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web 
Services (AWS) Key Management Service (KMS). To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web Services (AWS) Key Management Service (KMS). To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + }, + "region": schema.StringAttribute{ + Computed: true, + Description: "Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Cloud deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. To limit a new VPC peering connection to one CIDR block and region, create the connection first. Deploy the cluster after the connection starts.", //nolint:lll // reason: auto-generated from Open API spec. + MarkdownDescription: "Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Atlas deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. To limit a new VPC peering connection to one CIDR block and region, create the connection first. Deploy the cluster after the connection starts.", //nolint:lll // reason: auto-generated from Open API spec. + }, + "role_id": schema.StringAttribute{ + Computed: true, + Description: "Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. This IAM role has the permissions required to manage your AWS customer master key.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. 
This IAM role has the permissions required to manage your AWS customer master key.", + }, + "secret_access_key": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key.", + MarkdownDescription: "Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data.", + }, + }, + Computed: true, + Description: "Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project.", + MarkdownDescription: "Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project.", + }, + "azure_key_vault_config": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "azure_environment": schema.StringAttribute{ + Computed: true, + Description: "Azure environment in which your account credentials reside.", + MarkdownDescription: "Azure environment in which your account credentials reside.", + }, + "client_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant.", + }, + "enabled": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project. 
To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + }, + "key_identifier": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Web address with a unique key that identifies for your Azure Key Vault.", + MarkdownDescription: "Web address with a unique key that identifies for your Azure Key Vault.", + }, + "key_vault_name": schema.StringAttribute{ + Computed: true, + Description: "Unique string that identifies the Azure Key Vault that contains your key.", + MarkdownDescription: "Unique string that identifies the Azure Key Vault that contains your key.", + }, + "require_private_networking": schema.BoolAttribute{ + Computed: true, + Description: "Enable connection to your Azure Key Vault over private networking.", + MarkdownDescription: "Enable connection to your Azure Key Vault over private networking.", + }, + "resource_group_name": schema.StringAttribute{ + Computed: true, + Description: "Name of the Azure resource group that contains your Azure Key Vault.", + MarkdownDescription: "Name of the Azure resource group that contains your Azure Key Vault.", + }, + "secret": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data.", + MarkdownDescription: "Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data.", + }, + "subscription_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies your Azure subscription.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies your Azure subscription.", + }, + "tenant_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Azure encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Azure encryption key can encrypt and decrypt data.", + }, + }, + Computed: true, + Description: "Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV).", + MarkdownDescription: "Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV).", + }, + "google_cloud_kms_config": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "enabled": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project. 
To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + }, + "key_version_resource_id": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "Resource path that displays the key version resource ID for your Google Cloud KMS.", + MarkdownDescription: "Resource path that displays the key version resource ID for your Google Cloud KMS.", + }, + "service_account_key": schema.StringAttribute{ + Computed: true, + Sensitive: true, + Description: "JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). Format the JSON as a string and not as an object.", + MarkdownDescription: "JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). Format the JSON as a string and not as an object.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data.", + }, + }, + Computed: true, + Description: "Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS).", + MarkdownDescription: "Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS).", + }, + "project_id": schema.StringAttribute{ + Required: true, + Description: "Unique 24-hexadecimal digit string that identifies your project.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project.", + }, + "id": schema.StringAttribute{ + Computed: true, + }, + }, + } +} + +type TFEncryptionAtRestDSModel struct { + AzureKeyVaultConfig *TFAzureKeyVaultConfigModel `tfsdk:"azure_key_vault_config"` + AwsKmsConfig *TFAwsKmsConfigModel `tfsdk:"aws_kms_config"` + GoogleCloudKmsConfig *TFGcpKmsConfigModel `tfsdk:"google_cloud_kms_config"` + ID types.String `tfsdk:"id"` + ProjectID types.String `tfsdk:"project_id"` +} + +func NewTFEncryptionAtRestDSModel(projectID string, encryptionResp *admin.EncryptionAtRest) *TFEncryptionAtRestDSModel { + return &TFEncryptionAtRestDSModel{ + ID: types.StringValue(projectID), + ProjectID: types.StringValue(projectID), + AwsKmsConfig: NewTFAwsKmsConfigItem(encryptionResp.AwsKms), + AzureKeyVaultConfig: NewTFAzureKeyVaultConfigItem(encryptionResp.AzureKeyVault), + GoogleCloudKmsConfig: NewTFGcpKmsConfigItem(encryptionResp.GoogleCloudKms), + } +} diff --git a/internal/service/encryptionatrest/model.go b/internal/service/encryptionatrest/model.go new file mode 100644 index 0000000000..3a3fddbce7 --- /dev/null +++ b/internal/service/encryptionatrest/model.go @@ -0,0 +1,151 @@ +package encryptionatrest + +import ( + "context" + + "go.mongodb.org/atlas-sdk/v20240805003/admin" + + "github.com/hashicorp/terraform-plugin-framework/types" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" +) + +func NewTFEncryptionAtRestRSModel(ctx context.Context, projectID string, encryptionResp *admin.EncryptionAtRest) *TfEncryptionAtRestRSModel { + 
return &TfEncryptionAtRestRSModel{ + ID: types.StringValue(projectID), + ProjectID: types.StringValue(projectID), + AwsKmsConfig: NewTFAwsKmsConfig(ctx, encryptionResp.AwsKms), + AzureKeyVaultConfig: NewTFAzureKeyVaultConfig(ctx, encryptionResp.AzureKeyVault), + GoogleCloudKmsConfig: NewTFGcpKmsConfig(ctx, encryptionResp.GoogleCloudKms), + } +} + +func NewTFAwsKmsConfig(ctx context.Context, awsKms *admin.AWSKMSConfiguration) []TFAwsKmsConfigModel { + if awsKms == nil { + return []TFAwsKmsConfigModel{} + } + + return []TFAwsKmsConfigModel{ + *NewTFAwsKmsConfigItem(awsKms), + } +} + +func NewTFAzureKeyVaultConfig(ctx context.Context, az *admin.AzureKeyVault) []TFAzureKeyVaultConfigModel { + if az == nil { + return []TFAzureKeyVaultConfigModel{} + } + + return []TFAzureKeyVaultConfigModel{ + *NewTFAzureKeyVaultConfigItem(az), + } +} + +func NewTFGcpKmsConfig(ctx context.Context, gcpKms *admin.GoogleCloudKMS) []TFGcpKmsConfigModel { + if gcpKms == nil { + return []TFGcpKmsConfigModel{} + } + + return []TFGcpKmsConfigModel{ + *NewTFGcpKmsConfigItem(gcpKms), + } +} + +func NewTFAwsKmsConfigItem(awsKms *admin.AWSKMSConfiguration) *TFAwsKmsConfigModel { + if awsKms == nil { + return nil + } + + return &TFAwsKmsConfigModel{ + Enabled: types.BoolPointerValue(awsKms.Enabled), + CustomerMasterKeyID: types.StringValue(awsKms.GetCustomerMasterKeyID()), + Region: types.StringValue(awsKms.GetRegion()), + AccessKeyID: conversion.StringNullIfEmpty(awsKms.GetAccessKeyID()), + SecretAccessKey: conversion.StringNullIfEmpty(awsKms.GetSecretAccessKey()), + RoleID: conversion.StringNullIfEmpty(awsKms.GetRoleId()), + Valid: types.BoolPointerValue(awsKms.Valid), + } +} + +func NewTFAzureKeyVaultConfigItem(az *admin.AzureKeyVault) *TFAzureKeyVaultConfigModel { + if az == nil { + return nil + } + + return &TFAzureKeyVaultConfigModel{ + Enabled: types.BoolPointerValue(az.Enabled), + ClientID: types.StringValue(az.GetClientID()), + AzureEnvironment: types.StringValue(az.GetAzureEnvironment()), + SubscriptionID: types.StringValue(az.GetSubscriptionID()), + ResourceGroupName: types.StringValue(az.GetResourceGroupName()), + KeyVaultName: types.StringValue(az.GetKeyVaultName()), + KeyIdentifier: types.StringValue(az.GetKeyIdentifier()), + TenantID: types.StringValue(az.GetTenantID()), + Secret: conversion.StringNullIfEmpty(az.GetSecret()), + RequirePrivateNetworking: types.BoolValue(az.GetRequirePrivateNetworking()), + Valid: types.BoolPointerValue(az.Valid), + } +} + +func NewTFGcpKmsConfigItem(gcpKms *admin.GoogleCloudKMS) *TFGcpKmsConfigModel { + if gcpKms == nil { + return nil + } + + return &TFGcpKmsConfigModel{ + Enabled: types.BoolPointerValue(gcpKms.Enabled), + KeyVersionResourceID: types.StringValue(gcpKms.GetKeyVersionResourceID()), + ServiceAccountKey: conversion.StringNullIfEmpty(gcpKms.GetServiceAccountKey()), + Valid: types.BoolPointerValue(gcpKms.Valid), + } +} + +func NewAtlasAwsKms(tfAwsKmsConfigSlice []TFAwsKmsConfigModel) *admin.AWSKMSConfiguration { + if len(tfAwsKmsConfigSlice) == 0 { + return &admin.AWSKMSConfiguration{} + } + v := tfAwsKmsConfigSlice[0] + + awsRegion, _ := conversion.ValRegion(v.Region.ValueString()) + + return &admin.AWSKMSConfiguration{ + Enabled: v.Enabled.ValueBoolPointer(), + AccessKeyID: v.AccessKeyID.ValueStringPointer(), + SecretAccessKey: v.SecretAccessKey.ValueStringPointer(), + CustomerMasterKeyID: v.CustomerMasterKeyID.ValueStringPointer(), + Region: conversion.StringPtr(awsRegion), + RoleId: v.RoleID.ValueStringPointer(), + } +} + +func NewAtlasGcpKms(tfGcpKmsConfigSlice 
[]TFGcpKmsConfigModel) *admin.GoogleCloudKMS { + if len(tfGcpKmsConfigSlice) == 0 { + return &admin.GoogleCloudKMS{} + } + v := tfGcpKmsConfigSlice[0] + + return &admin.GoogleCloudKMS{ + Enabled: v.Enabled.ValueBoolPointer(), + ServiceAccountKey: v.ServiceAccountKey.ValueStringPointer(), + KeyVersionResourceID: v.KeyVersionResourceID.ValueStringPointer(), + } +} + +func NewAtlasAzureKeyVault(tfAzKeyVaultConfigSlice []TFAzureKeyVaultConfigModel) *admin.AzureKeyVault { + if len(tfAzKeyVaultConfigSlice) == 0 { + return &admin.AzureKeyVault{} + } + v := tfAzKeyVaultConfigSlice[0] + + return &admin.AzureKeyVault{ + Enabled: v.Enabled.ValueBoolPointer(), + ClientID: v.ClientID.ValueStringPointer(), + AzureEnvironment: v.AzureEnvironment.ValueStringPointer(), + SubscriptionID: v.SubscriptionID.ValueStringPointer(), + ResourceGroupName: v.ResourceGroupName.ValueStringPointer(), + KeyVaultName: v.KeyVaultName.ValueStringPointer(), + KeyIdentifier: v.KeyIdentifier.ValueStringPointer(), + Secret: v.Secret.ValueStringPointer(), + TenantID: v.TenantID.ValueStringPointer(), + RequirePrivateNetworking: v.RequirePrivateNetworking.ValueBoolPointer(), + } +} diff --git a/internal/service/encryptionatrest/model_encryption_at_rest.go b/internal/service/encryptionatrest/model_encryption_at_rest.go deleted file mode 100644 index 3e1eed7375..0000000000 --- a/internal/service/encryptionatrest/model_encryption_at_rest.go +++ /dev/null @@ -1,120 +0,0 @@ -package encryptionatrest - -import ( - "context" - - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240805003/admin" -) - -func NewTfEncryptionAtRestRSModel(ctx context.Context, projectID string, encryptionResp *admin.EncryptionAtRest) *TfEncryptionAtRestRSModel { - return &TfEncryptionAtRestRSModel{ - ID: types.StringValue(projectID), - ProjectID: types.StringValue(projectID), - AwsKmsConfig: NewTFAwsKmsConfig(ctx, encryptionResp.AwsKms), - AzureKeyVaultConfig: NewTFAzureKeyVaultConfig(ctx, encryptionResp.AzureKeyVault), - GoogleCloudKmsConfig: NewTFGcpKmsConfig(ctx, encryptionResp.GoogleCloudKms), - } -} - -func NewTFAwsKmsConfig(ctx context.Context, awsKms *admin.AWSKMSConfiguration) []TfAwsKmsConfigModel { - if awsKms == nil { - return []TfAwsKmsConfigModel{} - } - - return []TfAwsKmsConfigModel{ - { - Enabled: types.BoolPointerValue(awsKms.Enabled), - CustomerMasterKeyID: types.StringValue(awsKms.GetCustomerMasterKeyID()), - Region: types.StringValue(awsKms.GetRegion()), - AccessKeyID: conversion.StringNullIfEmpty(awsKms.GetAccessKeyID()), - SecretAccessKey: conversion.StringNullIfEmpty(awsKms.GetSecretAccessKey()), - RoleID: conversion.StringNullIfEmpty(awsKms.GetRoleId()), - }, - } -} - -func NewTFAzureKeyVaultConfig(ctx context.Context, az *admin.AzureKeyVault) []TfAzureKeyVaultConfigModel { - if az == nil { - return []TfAzureKeyVaultConfigModel{} - } - - return []TfAzureKeyVaultConfigModel{ - { - Enabled: types.BoolPointerValue(az.Enabled), - ClientID: types.StringValue(az.GetClientID()), - AzureEnvironment: types.StringValue(az.GetAzureEnvironment()), - SubscriptionID: types.StringValue(az.GetSubscriptionID()), - ResourceGroupName: types.StringValue(az.GetResourceGroupName()), - KeyVaultName: types.StringValue(az.GetKeyVaultName()), - KeyIdentifier: types.StringValue(az.GetKeyIdentifier()), - TenantID: types.StringValue(az.GetTenantID()), - Secret: conversion.StringNullIfEmpty(az.GetSecret()), - }, - } -} - -func 
NewTFGcpKmsConfig(ctx context.Context, gcpKms *admin.GoogleCloudKMS) []TfGcpKmsConfigModel { - if gcpKms == nil { - return []TfGcpKmsConfigModel{} - } - - return []TfGcpKmsConfigModel{ - { - Enabled: types.BoolPointerValue(gcpKms.Enabled), - KeyVersionResourceID: types.StringValue(gcpKms.GetKeyVersionResourceID()), - ServiceAccountKey: conversion.StringNullIfEmpty(gcpKms.GetServiceAccountKey()), - }, - } -} - -func NewAtlasAwsKms(tfAwsKmsConfigSlice []TfAwsKmsConfigModel) *admin.AWSKMSConfiguration { - if len(tfAwsKmsConfigSlice) == 0 { - return &admin.AWSKMSConfiguration{} - } - v := tfAwsKmsConfigSlice[0] - - awsRegion, _ := conversion.ValRegion(v.Region.ValueString()) - - return &admin.AWSKMSConfiguration{ - Enabled: v.Enabled.ValueBoolPointer(), - AccessKeyID: v.AccessKeyID.ValueStringPointer(), - SecretAccessKey: v.SecretAccessKey.ValueStringPointer(), - CustomerMasterKeyID: v.CustomerMasterKeyID.ValueStringPointer(), - Region: conversion.StringPtr(awsRegion), - RoleId: v.RoleID.ValueStringPointer(), - } -} - -func NewAtlasGcpKms(tfGcpKmsConfigSlice []TfGcpKmsConfigModel) *admin.GoogleCloudKMS { - if len(tfGcpKmsConfigSlice) == 0 { - return &admin.GoogleCloudKMS{} - } - v := tfGcpKmsConfigSlice[0] - - return &admin.GoogleCloudKMS{ - Enabled: v.Enabled.ValueBoolPointer(), - ServiceAccountKey: v.ServiceAccountKey.ValueStringPointer(), - KeyVersionResourceID: v.KeyVersionResourceID.ValueStringPointer(), - } -} - -func NewAtlasAzureKeyVault(tfAzKeyVaultConfigSlice []TfAzureKeyVaultConfigModel) *admin.AzureKeyVault { - if len(tfAzKeyVaultConfigSlice) == 0 { - return &admin.AzureKeyVault{} - } - v := tfAzKeyVaultConfigSlice[0] - - return &admin.AzureKeyVault{ - Enabled: v.Enabled.ValueBoolPointer(), - ClientID: v.ClientID.ValueStringPointer(), - AzureEnvironment: v.AzureEnvironment.ValueStringPointer(), - SubscriptionID: v.SubscriptionID.ValueStringPointer(), - ResourceGroupName: v.ResourceGroupName.ValueStringPointer(), - KeyVaultName: v.KeyVaultName.ValueStringPointer(), - KeyIdentifier: v.KeyIdentifier.ValueStringPointer(), - Secret: v.Secret.ValueStringPointer(), - TenantID: v.TenantID.ValueStringPointer(), - } -} diff --git a/internal/service/encryptionatrest/model_encryption_at_rest_test.go b/internal/service/encryptionatrest/model_test.go similarity index 63% rename from internal/service/encryptionatrest/model_encryption_at_rest_test.go rename to internal/service/encryptionatrest/model_test.go index 9786cb0fa4..808f5b9d74 100644 --- a/internal/service/encryptionatrest/model_encryption_at_rest_test.go +++ b/internal/service/encryptionatrest/model_test.go @@ -4,31 +4,34 @@ import ( "context" "testing" + "go.mongodb.org/atlas-sdk/v20240805003/admin" + "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240805003/admin" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest" ) var ( - projectID = "projectID" - enabled = true - customerMasterKeyID = "CustomerMasterKeyID" - region = "Region" - accessKeyID = "AccessKeyID" - secretAccessKey = "SecretAccessKey" - roleID = "RoleID" - clientID = "clientID" - azureEnvironment = "AzureEnvironment" - subscriptionID = "SubscriptionID" - resourceGroupName = "ResourceGroupName" - keyVaultName = "KeyVaultName" - keyIdentifier = "KeyIdentifier" - tenantID = "TenantID" - secret = "Secret" - keyVersionResourceID = 
"KeyVersionResourceID" - serviceAccountKey = "ServiceAccountKey" - AWSKMSConfiguration = &admin.AWSKMSConfiguration{ + projectID = "projectID" + enabled = true + requirePrivateNetworking = true + customerMasterKeyID = "CustomerMasterKeyID" + region = "Region" + accessKeyID = "AccessKeyID" + secretAccessKey = "SecretAccessKey" + roleID = "RoleID" + clientID = "clientID" + azureEnvironment = "AzureEnvironment" + subscriptionID = "SubscriptionID" + resourceGroupName = "ResourceGroupName" + keyVaultName = "KeyVaultName" + keyIdentifier = "KeyIdentifier" + tenantID = "TenantID" + secret = "Secret" + keyVersionResourceID = "KeyVersionResourceID" + serviceAccountKey = "ServiceAccountKey" + AWSKMSConfiguration = &admin.AWSKMSConfiguration{ Enabled: &enabled, CustomerMasterKeyID: &customerMasterKeyID, Region: ®ion, @@ -36,7 +39,7 @@ var ( SecretAccessKey: &secretAccessKey, RoleId: &roleID, } - TfAwsKmsConfigModel = encryptionatrest.TfAwsKmsConfigModel{ + TfAwsKmsConfigModel = encryptionatrest.TFAwsKmsConfigModel{ Enabled: types.BoolValue(enabled), CustomerMasterKeyID: types.StringValue(customerMasterKeyID), Region: types.StringValue(region), @@ -45,33 +48,35 @@ var ( RoleID: types.StringValue(roleID), } AzureKeyVault = &admin.AzureKeyVault{ - Enabled: &enabled, - ClientID: &clientID, - AzureEnvironment: &azureEnvironment, - SubscriptionID: &subscriptionID, - ResourceGroupName: &resourceGroupName, - KeyVaultName: &keyVaultName, - KeyIdentifier: &keyIdentifier, - TenantID: &tenantID, - Secret: &secret, + Enabled: &enabled, + ClientID: &clientID, + AzureEnvironment: &azureEnvironment, + SubscriptionID: &subscriptionID, + ResourceGroupName: &resourceGroupName, + KeyVaultName: &keyVaultName, + KeyIdentifier: &keyIdentifier, + TenantID: &tenantID, + Secret: &secret, + RequirePrivateNetworking: &requirePrivateNetworking, } - TfAzureKeyVaultConfigModel = encryptionatrest.TfAzureKeyVaultConfigModel{ - Enabled: types.BoolValue(enabled), - ClientID: types.StringValue(clientID), - AzureEnvironment: types.StringValue(azureEnvironment), - SubscriptionID: types.StringValue(subscriptionID), - ResourceGroupName: types.StringValue(resourceGroupName), - KeyVaultName: types.StringValue(keyVaultName), - KeyIdentifier: types.StringValue(keyIdentifier), - TenantID: types.StringValue(tenantID), - Secret: types.StringValue(secret), + TfAzureKeyVaultConfigModel = encryptionatrest.TFAzureKeyVaultConfigModel{ + Enabled: types.BoolValue(enabled), + ClientID: types.StringValue(clientID), + AzureEnvironment: types.StringValue(azureEnvironment), + SubscriptionID: types.StringValue(subscriptionID), + ResourceGroupName: types.StringValue(resourceGroupName), + KeyVaultName: types.StringValue(keyVaultName), + KeyIdentifier: types.StringValue(keyIdentifier), + TenantID: types.StringValue(tenantID), + Secret: types.StringValue(secret), + RequirePrivateNetworking: types.BoolValue(requirePrivateNetworking), } GoogleCloudKMS = &admin.GoogleCloudKMS{ Enabled: &enabled, KeyVersionResourceID: &keyVersionResourceID, ServiceAccountKey: &serviceAccountKey, } - TfGcpKmsConfigModel = encryptionatrest.TfGcpKmsConfigModel{ + TfGcpKmsConfigModel = encryptionatrest.TFGcpKmsConfigModel{ Enabled: types.BoolValue(enabled), KeyVersionResourceID: types.StringValue(keyVersionResourceID), ServiceAccountKey: types.StringValue(serviceAccountKey), @@ -95,16 +100,16 @@ func TestNewTfEncryptionAtRestRSModel(t *testing.T) { expectedResult: &encryptionatrest.TfEncryptionAtRestRSModel{ ID: types.StringValue(projectID), ProjectID: types.StringValue(projectID), - 
AwsKmsConfig: []encryptionatrest.TfAwsKmsConfigModel{TfAwsKmsConfigModel}, - AzureKeyVaultConfig: []encryptionatrest.TfAzureKeyVaultConfigModel{TfAzureKeyVaultConfigModel}, - GoogleCloudKmsConfig: []encryptionatrest.TfGcpKmsConfigModel{TfGcpKmsConfigModel}, + AwsKmsConfig: []encryptionatrest.TFAwsKmsConfigModel{TfAwsKmsConfigModel}, + AzureKeyVaultConfig: []encryptionatrest.TFAzureKeyVaultConfigModel{TfAzureKeyVaultConfigModel}, + GoogleCloudKmsConfig: []encryptionatrest.TFGcpKmsConfigModel{TfGcpKmsConfigModel}, }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - resultModel := encryptionatrest.NewTfEncryptionAtRestRSModel(context.Background(), projectID, tc.sdkModel) + resultModel := encryptionatrest.NewTFEncryptionAtRestRSModel(context.Background(), projectID, tc.sdkModel) assert.Equal(t, tc.expectedResult, resultModel) }) } @@ -114,19 +119,19 @@ func TestNewTFAwsKmsConfig(t *testing.T) { testCases := []struct { name string sdkModel *admin.AWSKMSConfiguration - expectedResult []encryptionatrest.TfAwsKmsConfigModel + expectedResult []encryptionatrest.TFAwsKmsConfigModel }{ { name: "Success NewTFAwsKmsConfig", sdkModel: AWSKMSConfiguration, - expectedResult: []encryptionatrest.TfAwsKmsConfigModel{ + expectedResult: []encryptionatrest.TFAwsKmsConfigModel{ TfAwsKmsConfigModel, }, }, { name: "Empty sdkModel", sdkModel: nil, - expectedResult: []encryptionatrest.TfAwsKmsConfigModel{}, + expectedResult: []encryptionatrest.TFAwsKmsConfigModel{}, }, } @@ -142,19 +147,19 @@ func TestNewTFAzureKeyVaultConfig(t *testing.T) { testCases := []struct { name string sdkModel *admin.AzureKeyVault - expectedResult []encryptionatrest.TfAzureKeyVaultConfigModel + expectedResult []encryptionatrest.TFAzureKeyVaultConfigModel }{ { name: "Success NewTFAwsKmsConfig", sdkModel: AzureKeyVault, - expectedResult: []encryptionatrest.TfAzureKeyVaultConfigModel{ + expectedResult: []encryptionatrest.TFAzureKeyVaultConfigModel{ TfAzureKeyVaultConfigModel, }, }, { name: "Empty sdkModel", sdkModel: nil, - expectedResult: []encryptionatrest.TfAzureKeyVaultConfigModel{}, + expectedResult: []encryptionatrest.TFAzureKeyVaultConfigModel{}, }, } @@ -170,19 +175,19 @@ func TestNewTFGcpKmsConfig(t *testing.T) { testCases := []struct { name string sdkModel *admin.GoogleCloudKMS - expectedResult []encryptionatrest.TfGcpKmsConfigModel + expectedResult []encryptionatrest.TFGcpKmsConfigModel }{ { name: "Success NewTFGcpKmsConfig", sdkModel: GoogleCloudKMS, - expectedResult: []encryptionatrest.TfGcpKmsConfigModel{ + expectedResult: []encryptionatrest.TFGcpKmsConfigModel{ TfGcpKmsConfigModel, }, }, { name: "Empty sdkModel", sdkModel: nil, - expectedResult: []encryptionatrest.TfGcpKmsConfigModel{}, + expectedResult: []encryptionatrest.TFGcpKmsConfigModel{}, }, } @@ -198,11 +203,11 @@ func TestNewAtlasAwsKms(t *testing.T) { testCases := []struct { name string expectedResult *admin.AWSKMSConfiguration - tfModel []encryptionatrest.TfAwsKmsConfigModel + tfModel []encryptionatrest.TFAwsKmsConfigModel }{ { name: "Success NewAtlasAwsKms", - tfModel: []encryptionatrest.TfAwsKmsConfigModel{TfAwsKmsConfigModel}, + tfModel: []encryptionatrest.TFAwsKmsConfigModel{TfAwsKmsConfigModel}, expectedResult: AWSKMSConfiguration, }, { @@ -224,11 +229,11 @@ func TestNewAtlasGcpKms(t *testing.T) { testCases := []struct { name string expectedResult *admin.GoogleCloudKMS - tfModel []encryptionatrest.TfGcpKmsConfigModel + tfModel []encryptionatrest.TFGcpKmsConfigModel }{ { name: "Success NewAtlasAwsKms", - tfModel: 
[]encryptionatrest.TfGcpKmsConfigModel{TfGcpKmsConfigModel}, + tfModel: []encryptionatrest.TFGcpKmsConfigModel{TfGcpKmsConfigModel}, expectedResult: GoogleCloudKMS, }, { @@ -250,11 +255,11 @@ func TestNewAtlasAzureKeyVault(t *testing.T) { testCases := []struct { name string expectedResult *admin.AzureKeyVault - tfModel []encryptionatrest.TfAzureKeyVaultConfigModel + tfModel []encryptionatrest.TFAzureKeyVaultConfigModel }{ { name: "Success NewAtlasAwsKms", - tfModel: []encryptionatrest.TfAzureKeyVaultConfigModel{TfAzureKeyVaultConfigModel}, + tfModel: []encryptionatrest.TFAzureKeyVaultConfigModel{TfAzureKeyVaultConfigModel}, expectedResult: AzureKeyVault, }, { diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest.go b/internal/service/encryptionatrest/resource.go similarity index 59% rename from internal/service/encryptionatrest/resource_encryption_at_rest.go rename to internal/service/encryptionatrest/resource.go index fc17c1085a..7a82d2bc69 100644 --- a/internal/service/encryptionatrest/resource_encryption_at_rest.go +++ b/internal/service/encryptionatrest/resource.go @@ -9,6 +9,8 @@ import ( "reflect" "time" + "go.mongodb.org/atlas-sdk/v20240805003/admin" + "github.com/hashicorp/terraform-plugin-framework-validators/listvalidator" "github.com/hashicorp/terraform-plugin-framework/path" "github.com/hashicorp/terraform-plugin-framework/resource" @@ -19,12 +21,12 @@ import ( "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/project" - "go.mongodb.org/atlas-sdk/v20240805003/admin" ) const ( @@ -53,34 +55,38 @@ type encryptionAtRestRS struct { type TfEncryptionAtRestRSModel struct { ID types.String `tfsdk:"id"` ProjectID types.String `tfsdk:"project_id"` - AwsKmsConfig []TfAwsKmsConfigModel `tfsdk:"aws_kms_config"` - AzureKeyVaultConfig []TfAzureKeyVaultConfigModel `tfsdk:"azure_key_vault_config"` - GoogleCloudKmsConfig []TfGcpKmsConfigModel `tfsdk:"google_cloud_kms_config"` + AwsKmsConfig []TFAwsKmsConfigModel `tfsdk:"aws_kms_config"` + AzureKeyVaultConfig []TFAzureKeyVaultConfigModel `tfsdk:"azure_key_vault_config"` + GoogleCloudKmsConfig []TFGcpKmsConfigModel `tfsdk:"google_cloud_kms_config"` } -type TfAwsKmsConfigModel struct { +type TFAwsKmsConfigModel struct { AccessKeyID types.String `tfsdk:"access_key_id"` SecretAccessKey types.String `tfsdk:"secret_access_key"` CustomerMasterKeyID types.String `tfsdk:"customer_master_key_id"` Region types.String `tfsdk:"region"` RoleID types.String `tfsdk:"role_id"` Enabled types.Bool `tfsdk:"enabled"` + Valid types.Bool `tfsdk:"valid"` } -type TfAzureKeyVaultConfigModel struct { - ClientID types.String `tfsdk:"client_id"` - AzureEnvironment types.String `tfsdk:"azure_environment"` - SubscriptionID types.String `tfsdk:"subscription_id"` - ResourceGroupName types.String `tfsdk:"resource_group_name"` - KeyVaultName types.String `tfsdk:"key_vault_name"` - KeyIdentifier types.String `tfsdk:"key_identifier"` - Secret types.String 
`tfsdk:"secret"` - TenantID types.String `tfsdk:"tenant_id"` - Enabled types.Bool `tfsdk:"enabled"` +type TFAzureKeyVaultConfigModel struct { + ClientID types.String `tfsdk:"client_id"` + AzureEnvironment types.String `tfsdk:"azure_environment"` + SubscriptionID types.String `tfsdk:"subscription_id"` + ResourceGroupName types.String `tfsdk:"resource_group_name"` + KeyVaultName types.String `tfsdk:"key_vault_name"` + KeyIdentifier types.String `tfsdk:"key_identifier"` + Secret types.String `tfsdk:"secret"` + TenantID types.String `tfsdk:"tenant_id"` + Enabled types.Bool `tfsdk:"enabled"` + RequirePrivateNetworking types.Bool `tfsdk:"require_private_networking"` + Valid types.Bool `tfsdk:"valid"` } -type TfGcpKmsConfigModel struct { +type TFGcpKmsConfigModel struct { ServiceAccountKey types.String `tfsdk:"service_account_key"` KeyVersionResourceID types.String `tfsdk:"key_version_resource_id"` Enabled types.Bool `tfsdk:"enabled"` + Valid types.Bool `tfsdk:"valid"` } func (r *encryptionAtRestRS) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { @@ -97,11 +103,15 @@ func (r *encryptionAtRestRS) Schema(ctx context.Context, req resource.SchemaRequ PlanModifiers: []planmodifier.String{ stringplanmodifier.RequiresReplace(), }, + Description: "Unique 24-hexadecimal digit string that identifies your project.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project.", }, }, Blocks: map[string]schema.Block{ "aws_kms_config": schema.ListNestedBlock{ - Validators: []validator.List{listvalidator.SizeAtMost(1)}, + Description: "Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project.", + MarkdownDescription: "Amazon Web Services (AWS) KMS configuration details and encryption at rest configuration set for the specified project.", + Validators: []validator.List{listvalidator.SizeAtMost(1)}, NestedObject: schema.NestedBlockObject{ Attributes: map[string]schema.Attribute{ "enabled": schema.BoolAttribute{ @@ -110,31 +120,50 @@ func (r *encryptionAtRestRS) Schema(ctx context.Context, req resource.SchemaRequ PlanModifiers: []planmodifier.Bool{ boolplanmodifier.UseStateForUnknown(), }, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web Services (AWS) Key Management Service (KMS). To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project through Amazon Web Services (AWS) Key Management Service (KMS). 
To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", }, "access_key_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK).", + MarkdownDescription: "Unique alphanumeric string that identifies an Identity and Access Management (IAM) access key with permissions required to access your Amazon Web Services (AWS) Customer Master Key (CMK).", }, "secret_access_key": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key.", + MarkdownDescription: "Human-readable label of the Identity and Access Management (IAM) secret access key with permissions required to access your Amazon Web Services (AWS) customer master key.", }, "customer_master_key_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys.", + MarkdownDescription: "Unique alphanumeric string that identifies the Amazon Web Services (AWS) Customer Master Key (CMK) you used to encrypt and decrypt the MongoDB master keys.", }, "region": schema.StringAttribute{ - Optional: true, + Optional: true, + Description: "Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Cloud deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. To limit a new VPC peering connection to one CIDR block and region, create the connection first. Deploy the cluster after the connection starts.", //nolint:lll // reason: auto-generated from Open API spec. + MarkdownDescription: "Physical location where MongoDB Atlas deploys your AWS-hosted MongoDB cluster nodes. The region you choose can affect network latency for clients accessing your databases. When MongoDB Cloud deploys a dedicated cluster, it checks if a VPC or VPC connection exists for that provider and region. If not, MongoDB Atlas creates them as part of the deployment. MongoDB Atlas assigns the VPC a CIDR block. To limit a new VPC peering connection to one CIDR block and region, create the connection first. Deploy the cluster after the connection starts.", //nolint:lll // reason: auto-generated from Open API spec. }, "role_id": schema.StringAttribute{ - Optional: true, + Optional: true, + Description: "Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. This IAM role has the permissions required to manage your AWS customer master key.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies an Amazon Web Services (AWS) Identity and Access Management (IAM) role. 
This IAM role has the permissions required to manage your AWS customer master key.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Amazon Web Services (AWS) Key Management Service (KMS) encryption key can encrypt and decrypt data.", }, }, Validators: []validator.Object{validate.AwsKmsConfig()}, }, }, "azure_key_vault_config": schema.ListNestedBlock{ - Validators: []validator.List{listvalidator.SizeAtMost(1)}, + Description: "Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV).", + MarkdownDescription: "Details that define the configuration of Encryption at Rest using Azure Key Vault (AKV).", + Validators: []validator.List{listvalidator.SizeAtMost(1)}, NestedObject: schema.NestedBlockObject{ Attributes: map[string]schema.Attribute{ "enabled": schema.BoolAttribute{ @@ -143,41 +172,75 @@ func (r *encryptionAtRestRS) Schema(ctx context.Context, req resource.SchemaRequ PlanModifiers: []planmodifier.Bool{ boolplanmodifier.UseStateForUnknown(), }, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", }, "client_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies an Azure application associated with your Azure Active Directory tenant.", }, "azure_environment": schema.StringAttribute{ - Optional: true, + Optional: true, + Description: "Azure environment in which your account credentials reside.", + MarkdownDescription: "Azure environment in which your account credentials reside.", }, "subscription_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies your Azure subscription.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies your Azure subscription.", }, "resource_group_name": schema.StringAttribute{ - Optional: true, + Optional: true, + Description: "Name of the Azure resource group that contains your Azure Key Vault.", + MarkdownDescription: "Name of the Azure resource group that contains your Azure Key Vault.", }, "key_vault_name": schema.StringAttribute{ - Optional: true, + Optional: true, + Description: "Unique string that identifies the Azure Key Vault that contains your key.", + MarkdownDescription: "Unique string that identifies the Azure Key Vault that contains your key.", }, "key_identifier": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Web address with a unique key that identifies for your Azure Key Vault.", + MarkdownDescription: "Web address with a unique key that identifies for your Azure Key 
Vault.", }, "secret": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data.", + MarkdownDescription: "Private data that you need secured and that belongs to the specified Azure Key Vault (AKV) tenant (**azureKeyVault.tenantID**). This data can include any type of sensitive data such as passwords, database connection strings, API keys, and the like. AKV stores this information as encrypted binary data.", }, "tenant_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription.", + MarkdownDescription: "Unique 36-hexadecimal character string that identifies the Azure Active Directory tenant within your Azure subscription.", + }, + "require_private_networking": schema.BoolAttribute{ + Optional: true, + Computed: true, + PlanModifiers: []planmodifier.Bool{ + boolplanmodifier.UseStateForUnknown(), + }, + Description: "Enable connection to your Azure Key Vault over private networking.", + MarkdownDescription: "Enable connection to your Azure Key Vault over private networking.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Azure encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Azure encryption key can encrypt and decrypt data.", }, }, }, }, "google_cloud_kms_config": schema.ListNestedBlock{ - Validators: []validator.List{listvalidator.SizeAtMost(1)}, + Description: "Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS).", + MarkdownDescription: "Details that define the configuration of Encryption at Rest using Google Cloud Key Management Service (KMS).", + Validators: []validator.List{listvalidator.SizeAtMost(1)}, NestedObject: schema.NestedBlockObject{ Attributes: map[string]schema.Attribute{ "enabled": schema.BoolAttribute{ @@ -186,14 +249,25 @@ func (r *encryptionAtRestRS) Schema(ctx context.Context, req resource.SchemaRequ PlanModifiers: []planmodifier.Bool{ boolplanmodifier.UseStateForUnknown(), }, + Description: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", + MarkdownDescription: "Flag that indicates whether someone enabled encryption at rest for the specified project. To disable encryption at rest using customer key management and remove the configuration details, pass only this parameter with a value of `false`.", }, "service_account_key": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). Format the JSON as a string and not as an object.", + MarkdownDescription: "JavaScript Object Notation (JSON) object that contains the Google Cloud Key Management Service (KMS). 
Format the JSON as a string and not as an object.", }, "key_version_resource_id": schema.StringAttribute{ - Optional: true, - Sensitive: true, + Optional: true, + Sensitive: true, + Description: "Resource path that displays the key version resource ID for your Google Cloud KMS.", + MarkdownDescription: "Resource path that displays the key version resource ID for your Google Cloud KMS.", + }, + "valid": schema.BoolAttribute{ + Computed: true, + Description: "Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data.", + MarkdownDescription: "Flag that indicates whether the Google Cloud Key Management Service (KMS) encryption key can encrypt and decrypt data.", }, }, }, @@ -241,7 +315,7 @@ func (r *encryptionAtRestRS) Create(ctx context.Context, req resource.CreateRequ return } - encryptionAtRestPlanNew := NewTfEncryptionAtRestRSModel(ctx, projectID, encryptionResp.(*admin.EncryptionAtRest)) + encryptionAtRestPlanNew := NewTFEncryptionAtRestRSModel(ctx, projectID, encryptionResp.(*admin.EncryptionAtRest)) resetDefaultsFromConfigOrState(ctx, encryptionAtRestPlan, encryptionAtRestPlanNew, encryptionAtRestConfig) // set state to fully populated data @@ -299,7 +373,7 @@ func (r *encryptionAtRestRS) Read(ctx context.Context, req resource.ReadRequest, return } - encryptionAtRestStateNew := NewTfEncryptionAtRestRSModel(ctx, projectID, encryptionResp) + encryptionAtRestStateNew := NewTFEncryptionAtRestRSModel(ctx, projectID, encryptionResp) if isImport { setEmptyArrayForEmptyBlocksReturnedFromImport(encryptionAtRestStateNew) } else { @@ -361,7 +435,7 @@ func (r *encryptionAtRestRS) Update(ctx context.Context, req resource.UpdateRequ return } - encryptionAtRestStateNew := NewTfEncryptionAtRestRSModel(ctx, projectID, encryptionResp) + encryptionAtRestStateNew := NewTFEncryptionAtRestRSModel(ctx, projectID, encryptionResp) resetDefaultsFromConfigOrState(ctx, encryptionAtRestState, encryptionAtRestStateNew, encryptionAtRestConfig) // save updated data into Terraform state @@ -404,15 +478,15 @@ func (r *encryptionAtRestRS) ImportState(ctx context.Context, req resource.Impor resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp) } -func hasGcpKmsConfigChanged(gcpKmsConfigsPlan, gcpKmsConfigsState []TfGcpKmsConfigModel) bool { +func hasGcpKmsConfigChanged(gcpKmsConfigsPlan, gcpKmsConfigsState []TFGcpKmsConfigModel) bool { return !reflect.DeepEqual(gcpKmsConfigsPlan, gcpKmsConfigsState) } -func hasAzureKeyVaultConfigChanged(azureKeyVaultConfigPlan, azureKeyVaultConfigState []TfAzureKeyVaultConfigModel) bool { +func hasAzureKeyVaultConfigChanged(azureKeyVaultConfigPlan, azureKeyVaultConfigState []TFAzureKeyVaultConfigModel) bool { return !reflect.DeepEqual(azureKeyVaultConfigPlan, azureKeyVaultConfigState) } -func hasAwsKmsConfigChanged(awsKmsConfigPlan, awsKmsConfigState []TfAwsKmsConfigModel) bool { +func hasAwsKmsConfigChanged(awsKmsConfigPlan, awsKmsConfigState []TFAwsKmsConfigModel) bool { return !reflect.DeepEqual(awsKmsConfigPlan, awsKmsConfigState) } @@ -432,7 +506,7 @@ func resetDefaultsFromConfigOrState(ctx context.Context, encryptionAtRestRSCurre func HandleGcpKmsConfig(ctx context.Context, earRSCurrent, earRSNew, earRSConfig *TfEncryptionAtRestRSModel) { // this is required to avoid unnecessary change detection during plan after migration to Plugin Framework if user didn't set this block if earRSCurrent.GoogleCloudKmsConfig == nil { - earRSNew.GoogleCloudKmsConfig = []TfGcpKmsConfigModel{} + earRSNew.GoogleCloudKmsConfig = 
[]TFGcpKmsConfigModel{} return } @@ -448,7 +522,7 @@ func HandleGcpKmsConfig(ctx context.Context, earRSCurrent, earRSNew, earRSConfig func HandleAwsKmsConfigDefaults(ctx context.Context, currentStateFile, newStateFile, earRSConfig *TfEncryptionAtRestRSModel) { // this is required to avoid unnecessary change detection during plan after migration to Plugin Framework if user didn't set this block if currentStateFile.AwsKmsConfig == nil { - newStateFile.AwsKmsConfig = []TfAwsKmsConfigModel{} + newStateFile.AwsKmsConfig = []TFAwsKmsConfigModel{} return } @@ -469,7 +543,7 @@ func HandleAwsKmsConfigDefaults(ctx context.Context, currentStateFile, newStateF func HandleAzureKeyVaultConfigDefaults(ctx context.Context, earRSCurrent, earRSNew, earRSConfig *TfEncryptionAtRestRSModel) { // this is required to avoid unnecessary change detection during plan after migration to Plugin Framework if user didn't set this block if earRSCurrent.AzureKeyVaultConfig == nil { - earRSNew.AzureKeyVaultConfig = []TfAzureKeyVaultConfigModel{} + earRSNew.AzureKeyVaultConfig = []TFAzureKeyVaultConfigModel{} return } @@ -490,14 +564,14 @@ func HandleAzureKeyVaultConfigDefaults(ctx context.Context, earRSCurrent, earRSN // - the API returns the block TfAzureKeyVaultConfigModel{enable=false} if the user does not provider AZURE KMS func setEmptyArrayForEmptyBlocksReturnedFromImport(newStateFromImport *TfEncryptionAtRestRSModel) { if len(newStateFromImport.AwsKmsConfig) == 1 && !newStateFromImport.AwsKmsConfig[0].Enabled.ValueBool() { - newStateFromImport.AwsKmsConfig = []TfAwsKmsConfigModel{} + newStateFromImport.AwsKmsConfig = []TFAwsKmsConfigModel{} } if len(newStateFromImport.GoogleCloudKmsConfig) == 1 && !newStateFromImport.GoogleCloudKmsConfig[0].Enabled.ValueBool() { - newStateFromImport.GoogleCloudKmsConfig = []TfGcpKmsConfigModel{} + newStateFromImport.GoogleCloudKmsConfig = []TFGcpKmsConfigModel{} } if len(newStateFromImport.AzureKeyVaultConfig) == 1 && !newStateFromImport.AzureKeyVaultConfig[0].Enabled.ValueBool() { - newStateFromImport.AzureKeyVaultConfig = []TfAzureKeyVaultConfigModel{} + newStateFromImport.AzureKeyVaultConfig = []TFAzureKeyVaultConfigModel{} } } diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go b/internal/service/encryptionatrest/resource_migration_test.go similarity index 61% rename from internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go rename to internal/service/encryptionatrest/resource_migration_test.go index cf9ed9d228..095dd099f7 100644 --- a/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go +++ b/internal/service/encryptionatrest/resource_migration_test.go @@ -1,15 +1,19 @@ package encryptionatrest_test import ( + "fmt" "os" + "strconv" "testing" + "go.mongodb.org/atlas-sdk/v20240805003/admin" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/plancheck" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig" - "go.mongodb.org/atlas-sdk/v20240805003/admin" ) func TestMigEncryptionAtRest_basicAWS(t *testing.T) { @@ -22,36 +26,28 @@ func TestMigEncryptionAtRest_basicAWS(t *testing.T) { awsKms = admin.AWSKMSConfiguration{ Enabled: conversion.Pointer(true), CustomerMasterKeyID: 
conversion.StringPtr(os.Getenv("AWS_CUSTOMER_MASTER_KEY_ID")), - Region: conversion.StringPtr(os.Getenv("AWS_REGION")), + Region: conversion.StringPtr(conversion.AWSRegionToMongoDBRegion(os.Getenv("AWS_REGION"))), RoleId: conversion.StringPtr(os.Getenv("AWS_ROLE_ID")), } + useDatasource = mig.IsProviderVersionAtLeast("1.19.0") // data source introduced in this version ) resource.Test(t, resource.TestCase{ PreCheck: func() { mig.PreCheck(t); acc.PreCheckAwsEnv(t) }, - CheckDestroy: testAccCheckMongoDBAtlasEncryptionAtRestDestroy, + CheckDestroy: acc.EARDestroy, Steps: []resource.TestStep{ { ExternalProviders: mig.ExternalProviders(), - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKms(projectID, &awsKms), + Config: configAwsKms(projectID, &awsKms, useDatasource), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasEncryptionAtRestExists(resourceName), + acc.CheckEARExists(resourceName), resource.TestCheckResourceAttr(resourceName, "project_id", projectID), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.enabled", "true"), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.region", awsKms.GetRegion()), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.role_id", awsKms.GetRoleId()), ), }, - { - ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKms(projectID, &awsKms), - ConfigPlanChecks: resource.ConfigPlanChecks{ - PreApply: []plancheck.PlanCheck{ - acc.DebugPlan(), - plancheck.ExpectEmptyPlan(), - }, - }, - }, + mig.TestStepCheckEmptyPlan(configAwsKms(projectID, &awsKms, useDatasource)), }, }) } @@ -62,36 +58,35 @@ func TestMigEncryptionAtRest_withRole_basicAWS(t *testing.T) { var ( resourceName = "mongodbatlas_encryption_at_rest.test" projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") - accessKeyID = os.Getenv("AWS_ACCESS_KEY_ID") - secretKey = os.Getenv("AWS_SECRET_ACCESS_KEY") - policyName = acc.RandomName() - roleName = acc.RandomName() + + awsIAMRoleName = acc.RandomIAMRole() + awsIAMRolePolicyName = fmt.Sprintf("%s-policy", awsIAMRoleName) + awsKeyName = acc.RandomName() awsKms = admin.AWSKMSConfiguration{ Enabled: conversion.Pointer(true), + Region: conversion.StringPtr(conversion.AWSRegionToMongoDBRegion(os.Getenv("AWS_REGION"))), CustomerMasterKeyID: conversion.StringPtr(os.Getenv("AWS_CUSTOMER_MASTER_KEY_ID")), - Region: conversion.StringPtr(os.Getenv("AWS_REGION")), } ) resource.Test(t, resource.TestCase{ PreCheck: func() { mig.PreCheck(t); acc.PreCheckAwsEnv(t) }, - CheckDestroy: testAccCheckMongoDBAtlasEncryptionAtRestDestroy, + CheckDestroy: acc.EARDestroy, Steps: []resource.TestStep{ { ExternalProviders: mig.ExternalProvidersWithAWS(), - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKmsWithRole(awsKms.GetRegion(), accessKeyID, secretKey, projectID, policyName, roleName, false, &awsKms), + Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKmsWithRole(projectID, awsIAMRoleName, awsIAMRolePolicyName, awsKeyName, &awsKms), }, { ExternalProviders: acc.ExternalProvidersOnlyAWS(), ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKmsWithRole(awsKms.GetRegion(), accessKeyID, secretKey, projectID, policyName, roleName, false, &awsKms), + Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKmsWithRole(projectID, awsIAMRoleName, awsIAMRolePolicyName, awsKeyName, &awsKms), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasEncryptionAtRestExists(resourceName), + 
acc.CheckEARExists(resourceName), resource.TestCheckResourceAttr(resourceName, "project_id", projectID), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.enabled", "true"), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.region", awsKms.GetRegion()), - resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.role_id", awsKms.GetRoleId()), ), ConfigPlanChecks: resource.ConfigPlanChecks{ PreApply: []plancheck.PlanCheck{ @@ -105,11 +100,9 @@ func TestMigEncryptionAtRest_withRole_basicAWS(t *testing.T) { } func TestMigEncryptionAtRest_basicAzure(t *testing.T) { - acc.SkipTestForCI(t) // needs Azure configuration - var ( resourceName = "mongodbatlas_encryption_at_rest.test" - projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") + projectID = acc.ProjectIDExecution(t) azureKeyVault = admin.AzureKeyVault{ Enabled: conversion.Pointer(true), @@ -119,37 +112,38 @@ func TestMigEncryptionAtRest_basicAzure(t *testing.T) { ResourceGroupName: conversion.StringPtr(os.Getenv("AZURE_RESOURCE_GROUP_NAME")), KeyVaultName: conversion.StringPtr(os.Getenv("AZURE_KEY_VAULT_NAME")), KeyIdentifier: conversion.StringPtr(os.Getenv("AZURE_KEY_IDENTIFIER")), - Secret: conversion.StringPtr(os.Getenv("AZURE_SECRET")), + Secret: conversion.StringPtr(os.Getenv("AZURE_APP_SECRET")), TenantID: conversion.StringPtr(os.Getenv("AZURE_TENANT_ID")), } + + attrMap = map[string]string{ + "enabled": strconv.FormatBool(azureKeyVault.GetEnabled()), + "azure_environment": azureKeyVault.GetAzureEnvironment(), + "resource_group_name": azureKeyVault.GetResourceGroupName(), + "key_vault_name": azureKeyVault.GetKeyVaultName(), + "client_id": azureKeyVault.GetClientID(), + "key_identifier": azureKeyVault.GetKeyIdentifier(), + "subscription_id": azureKeyVault.GetSubscriptionID(), + "tenant_id": azureKeyVault.GetTenantID(), + } + + useDatasource = mig.IsProviderVersionAtLeast("1.19.0") // data source introduced in this version ) resource.Test(t, resource.TestCase{ - PreCheck: func() { mig.PreCheck(t); acc.PreCheckEncryptionAtRestEnvAzure(t) }, - CheckDestroy: testAccCheckMongoDBAtlasEncryptionAtRestDestroy, + PreCheck: func() { mig.PreCheckBasic(t); acc.PreCheckEncryptionAtRestEnvAzure(t) }, + CheckDestroy: acc.EARDestroy, Steps: []resource.TestStep{ { ExternalProviders: mig.ExternalProviders(), - Config: testAccMongoDBAtlasEncryptionAtRestConfigAzureKeyVault(projectID, &azureKeyVault), + Config: acc.ConfigEARAzureKeyVault(projectID, &azureKeyVault, false, useDatasource), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasEncryptionAtRestExists(resourceName), + acc.CheckEARExists(resourceName), resource.TestCheckResourceAttr(resourceName, "project_id", projectID), - resource.TestCheckResourceAttr(resourceName, "azure_key_vault_config.0.enabled", "true"), - resource.TestCheckResourceAttr(resourceName, "azure_key_vault_config.0.azure_environment", azureKeyVault.GetAzureEnvironment()), - resource.TestCheckResourceAttr(resourceName, "azure_key_vault_config.0.resource_group_name", azureKeyVault.GetResourceGroupName()), - resource.TestCheckResourceAttr(resourceName, "azure_key_vault_config.0.key_vault_name", azureKeyVault.GetKeyVaultName()), + acc.EARCheckResourceAttr(resourceName, "azure_key_vault_config.0", attrMap), ), }, - { - ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasEncryptionAtRestConfigAzureKeyVault(projectID, &azureKeyVault), - ConfigPlanChecks: resource.ConfigPlanChecks{ - PreApply: []plancheck.PlanCheck{ - acc.DebugPlan(), - 
plancheck.ExpectEmptyPlan(), - }, - }, - }, + mig.TestStepCheckEmptyPlan(acc.ConfigEARAzureKeyVault(projectID, &azureKeyVault, false, useDatasource)), }, }) } @@ -166,31 +160,24 @@ func TestMigEncryptionAtRest_basicGCP(t *testing.T) { ServiceAccountKey: conversion.StringPtr(os.Getenv("GCP_SERVICE_ACCOUNT_KEY")), KeyVersionResourceID: conversion.StringPtr(os.Getenv("GCP_KEY_VERSION_RESOURCE_ID")), } + useDatasource = mig.IsProviderVersionAtLeast("1.19.0") // data source introduced in this version ) resource.Test(t, resource.TestCase{ PreCheck: func() { mig.PreCheck(t); acc.PreCheckGPCEnv(t) }, - CheckDestroy: testAccCheckMongoDBAtlasEncryptionAtRestDestroy, + CheckDestroy: acc.EARDestroy, Steps: []resource.TestStep{ { ExternalProviders: mig.ExternalProviders(), - Config: testAccMongoDBAtlasEncryptionAtRestConfigGoogleCloudKms(projectID, &googleCloudKms), + Config: configGoogleCloudKms(projectID, &googleCloudKms, useDatasource), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasEncryptionAtRestExists(resourceName), + acc.CheckEARExists(resourceName), resource.TestCheckResourceAttr(resourceName, "project_id", projectID), resource.TestCheckResourceAttr(resourceName, "google_cloud_kms_config.0.enabled", "true"), + resource.TestCheckResourceAttrSet(resourceName, "google_cloud_kms_config.0.key_version_resource_id"), ), }, - { - ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasEncryptionAtRestConfigGoogleCloudKms(projectID, &googleCloudKms), - ConfigPlanChecks: resource.ConfigPlanChecks{ - PreApply: []plancheck.PlanCheck{ - acc.DebugPlan(), - plancheck.ExpectEmptyPlan(), - }, - }, - }, + mig.TestStepCheckEmptyPlan(configGoogleCloudKms(projectID, &googleCloudKms, useDatasource)), }, }) } @@ -207,36 +194,28 @@ func TestMigEncryptionAtRest_basicAWS_from_v1_11_0(t *testing.T) { AccessKeyID: conversion.StringPtr(os.Getenv("AWS_ACCESS_KEY_ID")), SecretAccessKey: conversion.StringPtr(os.Getenv("AWS_SECRET_ACCESS_KEY")), CustomerMasterKeyID: conversion.StringPtr(os.Getenv("AWS_CUSTOMER_MASTER_KEY_ID")), - Region: conversion.StringPtr(os.Getenv("AWS_REGION")), + Region: conversion.StringPtr(conversion.AWSRegionToMongoDBRegion(os.Getenv("AWS_REGION"))), RoleId: conversion.StringPtr(os.Getenv("AWS_ROLE_ID")), } + useDatasource = mig.IsProviderVersionAtLeast("1.19.0") // data source introduced in this version ) resource.Test(t, resource.TestCase{ PreCheck: func() { acc.PreCheck(t); acc.PreCheckAwsEnv(t) }, - CheckDestroy: testAccCheckMongoDBAtlasEncryptionAtRestDestroy, + CheckDestroy: acc.EARDestroy, Steps: []resource.TestStep{ { ExternalProviders: acc.ExternalProvidersWithAWS("1.11.0"), - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKms(projectID, &awsKms), + Config: configAwsKms(projectID, &awsKms, useDatasource), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasEncryptionAtRestExists(resourceName), + acc.CheckEARExists(resourceName), resource.TestCheckResourceAttr(resourceName, "project_id", projectID), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.enabled", "true"), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.region", awsKms.GetRegion()), resource.TestCheckResourceAttr(resourceName, "aws_kms_config.0.role_id", awsKms.GetRoleId()), ), }, - { - ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasEncryptionAtRestConfigAwsKms(projectID, &awsKms), - ConfigPlanChecks: resource.ConfigPlanChecks{ - PreApply: []plancheck.PlanCheck{ - acc.DebugPlan(), - 
plancheck.ExpectEmptyPlan(), - }, - }, - }, + mig.TestStepCheckEmptyPlan(configAwsKms(projectID, &awsKms, useDatasource)), }, }) } diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest_test.go b/internal/service/encryptionatrest/resource_test.go similarity index 52% rename from internal/service/encryptionatrest/resource_encryption_at_rest_test.go rename to internal/service/encryptionatrest/resource_test.go index 91786ca273..4e306227e0 100644 --- a/internal/service/encryptionatrest/resource_encryption_at_rest_test.go +++ b/internal/service/encryptionatrest/resource_test.go @@ -7,157 +7,96 @@ import ( "os" "testing" + "go.mongodb.org/atlas-sdk/v20240805003/admin" + "go.mongodb.org/atlas-sdk/v20240805003/mockadmin" + "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/mock" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/mock" - "go.mongodb.org/atlas-sdk/v20240805003/admin" - "go.mongodb.org/atlas-sdk/v20240805003/mockadmin" ) const ( - initialConfigEncryptionRestRoleAWS = ` -provider "aws" { - region = lower(replace("%[1]s", "_", "-")) - access_key = "%[2]s" - secret_key = "%[3]s" -} - -%[7]s - -resource "mongodbatlas_cloud_provider_access" "test" { - project_id = "%[4]s" - provider_name = "AWS" - %[8]s - -} - -resource "aws_iam_role_policy" "test_policy" { - name = "%[5]s" - role = aws_iam_role.test_role.id - - policy = <<-EOF - { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Deny", - "Action": "*", - "Resource": "*" - } - ] - } - EOF -} - -resource "aws_iam_role" "test_role" { - name = "%[6]s" - - assume_role_policy = < **IMPORTANT** By default, Atlas enables encryption at rest for all cluster storage and snapshot volumes. + +~> **IMPORTANT** Atlas limits this feature to dedicated cluster tiers of M10 and greater. For more information see: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. + + +## Example Usages + +### Configuring encryption at rest using customer key management in AWS +{{ tffile (printf "examples/%s/aws/atlas-cluster/main.tf" .Name )}} + +### Configuring encryption at rest using customer key management in Azure +{{ tffile (printf "examples/%s/azure/main.tf" .Name )}} + +-> **NOTE:** It is possible to configure Atlas Encryption at Rest to communicate with Azure Key Vault using Azure Private Link, ensuring that all traffic between Atlas and Key Vault takes place over Azure’s private network interfaces. Please review `mongodbatlas_encryption_at_rest_private_endpoint` resource for details. 
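+
+For illustration, a minimal sketch of reading this data source to confirm that an existing Azure Key Vault configuration can encrypt and decrypt data, mirroring the GCP example below (the resource and output names here are placeholders):
+
+```terraform
+data "mongodbatlas_encryption_at_rest" "current" {
+  project_id = var.atlas_project_id
+}
+
+# `valid` indicates whether the configured Azure encryption key can encrypt and decrypt data.
+output "is_azure_encryption_at_rest_valid" {
+  value = data.mongodbatlas_encryption_at_rest.current.azure_key_vault_config.valid
+}
+```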
+ +### Configuring encryption at rest using customer key management in GCP +```terraform +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + google_cloud_kms_config { + enabled = true + service_account_key = "{\"type\": \"service_account\",\"project_id\": \"my-project-common-0\",\"private_key_id\": \"e120598ea4f88249469fcdd75a9a785c1bb3\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\nMIIEuwIBA(truncated)SfecnS0mT94D9\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"my-email-kms-0@my-project-common-0.iam.gserviceaccount.com\",\"client_id\": \"10180967717292066\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/my-email-kms-0%40my-project-common-0.iam.gserviceaccount.com\"}" + key_version_resource_id = "projects/my-project-common-0/locations/us-east4/keyRings/my-key-ring-0/cryptoKeys/my-key-0/cryptoKeyVersions/1" + } +} + +data "mongodbatlas_encryption_at_rest" "test" { + project_id = mongodbatlas_encryption_at_rest.test.project_id +} + +output "is_gcp_encryption_at_rest_valid" { + value = data.mongodbatlas_encryption_at_rest.test.google_cloud_kms_config.valid +} +``` + +{{ .SchemaMarkdown | trimspace }} + +# Import +Encryption at Rest Settings can be imported using project ID, in the format `project_id`, e.g. + +``` +$ terraform import mongodbatlas_encryption_at_rest.example 1112222b3bf99403840e8934 +``` + +For more information see: [MongoDB Atlas API Reference for Encryption at Rest using Customer Key Management.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management) \ No newline at end of file diff --git a/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl new file mode 100644 index 0000000000..74675e1338 --- /dev/null +++ b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl @@ -0,0 +1,18 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` describes a private endpoint used for encryption at rest using customer-managed keys. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. + +{{ tffile (printf "examples/%s/azure/singular-data-source.tf" .Name )}} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpoint) Documentation. 
+- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). diff --git a/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl new file mode 100644 index 0000000000..701736d56a --- /dev/null +++ b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl @@ -0,0 +1,18 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. + +{{ tffile ("examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/plural-data-source.tf") }} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpointsForCloudProvider) Documentation. +- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). diff --git a/templates/resources/encryption_at_rest.md.tmpl b/templates/resources/encryption_at_rest.md.tmpl new file mode 100644 index 0000000000..4a3d08c67e --- /dev/null +++ b/templates/resources/encryption_at_rest.md.tmpl @@ -0,0 +1,77 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` allows management of Encryption at Rest for an Atlas project using Customer Key Management configuration. The following providers are supported: +- [Amazon Web Services Key Management Service](https://docs.atlas.mongodb.com/security-aws-kms/#security-aws-kms) +- [Azure Key Vault](https://docs.atlas.mongodb.com/security-azure-kms/#security-azure-kms) +- [Google Cloud KMS](https://docs.atlas.mongodb.com/security-gcp-kms/#security-gcp-kms) + +The [encryption at rest Terraform module](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/encryption-at-rest/mongodbatlas/latest) makes use of this resource and simplifies its use. It is currently limited to AWS KMS. + +Atlas does not automatically rotate user-managed encryption keys. Defer to your preferred Encryption at Rest provider’s documentation and guidance for best practices on key rotation. Atlas automatically creates a 90-day key rotation alert when you configure Encryption at Rest using your Key Management in an Atlas project. + +See [Encryption at Rest](https://docs.atlas.mongodb.com/security-kms-encryption/index.html) for more information, including prerequisites and restrictions. + +~> **IMPORTANT** By default, Atlas enables encryption at rest for all cluster storage and snapshot volumes. 
+ +~> **IMPORTANT** Atlas limits this feature to dedicated cluster tiers of M10 and greater. For more information see: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. + + +-> **IMPORTANT NOTE** To disable encryption at rest with customer key management for a project, all existing clusters in the project must first either have encryption at rest for the provider set to none, e.g. `encryption_at_rest_provider = "NONE"`, or be deleted. + +## Enabling Encryption at Rest for an existing Atlas cluster + +After configuring at least one key management provider for an Atlas project, Project Owners can enable customer key management for each Atlas cluster for which they require encryption. For clusters defined in Terraform, the [`encryption_at_rest_provider` attribute](advanced_cluster#encryption_at_rest_provider) can be used in both `mongodbatlas_advanced_cluster` and `mongodbatlas_cluster` resources. The key management provider does not have to match the cluster cloud service provider. + +Please reference the [Enable Customer Key Management for an Atlas Cluster](https://www.mongodb.com/docs/atlas/security-kms-encryption/#enable-customer-key-management-for-an-service-cluster) documentation for additional considerations. + + +## Example Usages + +### Configuring encryption at rest using customer key management in AWS +The configuration of encryption at rest with customer key management, `mongodbatlas_encryption_at_rest`, needs to be completed before a cluster is created in the project. Force this wait by using an implicit dependency via `project_id`, as shown in the example below. + +{{ tffile (printf "examples/%s/aws/atlas-cluster/main.tf" .Name )}} + +**NOTE** If using the two-resource path for cloud provider access, `cloud_provider_access_setup` and `cloud_provider_access_authorization`, you may need to define a `depends_on` statement for these two resources, because Terraform is not able to infer the dependency. + +```terraform +resource "mongodbatlas_encryption_at_rest" "default" { + (...) + depends_on = [mongodbatlas_cloud_provider_access_setup., mongodbatlas_cloud_provider_access_authorization.] +} +``` + +### Configuring encryption at rest using customer key management in Azure +{{ tffile (printf "examples/%s/azure/main.tf" .Name )}} + +#### Manage Customer Keys with Azure Key Vault Over Private Endpoints +It is possible to configure Atlas Encryption at Rest to communicate with Azure Key Vault using Azure Private Link, ensuring that all traffic between Atlas and Key Vault takes place over Azure’s private network interfaces. This requires enabling the `azure_key_vault_config.require_private_networking` attribute, together with the configuration of the `mongodbatlas_encryption_at_rest_private_endpoint` resource. + +Please review the [`mongodbatlas_encryption_at_rest_private_endpoint` resource documentation](encryption_at_rest_private_endpoint) and the [complete example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure) for details on this functionality.
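+
+The following condensed sketch shows the shape of such a configuration (variable names are placeholders; the linked example covers the full Azure-side setup):
+
+```terraform
+resource "mongodbatlas_encryption_at_rest" "this" {
+  project_id = var.atlas_project_id
+
+  azure_key_vault_config {
+    enabled                    = true
+    azure_environment          = "AZURE"
+    tenant_id                  = var.azure_tenant_id
+    subscription_id            = var.azure_subscription_id
+    client_id                  = var.azure_client_id
+    secret                     = var.azure_client_secret
+    resource_group_name        = var.azure_resource_group_name
+    key_vault_name             = var.azure_key_vault_name
+    key_identifier             = var.azure_key_identifier
+    require_private_networking = true # route traffic to Key Vault over Azure Private Link
+  }
+}
+
+# Private endpoint that Atlas uses to reach the Key Vault over Azure Private Link.
+resource "mongodbatlas_encryption_at_rest_private_endpoint" "this" {
+  project_id     = mongodbatlas_encryption_at_rest.this.project_id
+  cloud_provider = "AZURE"
+  region_name    = var.azure_region_name
+}
+```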
+ + +### Configuring encryption at rest using customer key management in GCP +```terraform +resource "mongodbatlas_encryption_at_rest" "test" { + project_id = var.atlas_project_id + + google_cloud_kms_config { + enabled = true + service_account_key = "{\"type\": \"service_account\",\"project_id\": \"my-project-common-0\",\"private_key_id\": \"e120598ea4f88249469fcdd75a9a785c1bb3\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\nMIIEuwIBA(truncated)SfecnS0mT94D9\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"my-email-kms-0@my-project-common-0.iam.gserviceaccount.com\",\"client_id\": \"10180967717292066\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/my-email-kms-0%40my-project-common-0.iam.gserviceaccount.com\"}" + key_version_resource_id = "projects/my-project-common-0/locations/us-east4/keyRings/my-key-ring-0/cryptoKeys/my-key-0/cryptoKeyVersions/1" + } +} +``` + +{{ .SchemaMarkdown | trimspace }} + +# Import +Encryption at Rest Settings can be imported using the project ID, in the format `project_id`, e.g. + +``` +$ terraform import mongodbatlas_encryption_at_rest.example 1112222b3bf99403840e8934 +``` + +For more information see: [MongoDB Atlas API Reference for Encryption at Rest using Customer Key Management.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management) diff --git a/templates/resources/encryption_at_rest_private_endpoint.md.tmpl b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl new file mode 100644 index 0000000000..4867ee2014 --- /dev/null +++ b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl @@ -0,0 +1,33 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures that all traffic between Atlas and customer key management systems takes place over private network interfaces. + +~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. +Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this resource. To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). + +-> **NOTE:** As a prerequisite to configuring a private endpoint for Azure Key Vault, the corresponding [`mongodbatlas_encryption_at_rest`](encryption_at_rest) resource has to be adjusted by configuring [`azure_key_vault_config.require_private_networking`](encryption_at_rest#require_private_networking) to true. This attribute should be updated in place, ensuring that customer-managed key encryption is never disabled. + +-> **NOTE:** This resource does not support update operations. To modify the values of a private endpoint, the existing resource must be deleted and a new one created with the modified values. + +## Example Usages + +-> **NOTE:** Only Azure Key Vault with Azure Private Link is supported at this time. 
+ +### Configuring Atlas Encryption at Rest using Azure Key Vault with Azure Private Link + +Make sure to reference the [complete example section](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure) for detailed steps and considerations. + +{{ tffile (printf "examples/%s/azure/main.tf" .Name )}} + +{{ .SchemaMarkdown | trimspace }} + +# Import +Encryption At Rest Private Endpoint resource can be imported using the project ID, cloud provider, and private endpoint ID. The format must be `{project_id}-{cloud_provider}-{private_endpoint_id}` e.g. + +``` +$ terraform import mongodbatlas_encryption_at_rest_private_endpoint.test 650972848269185c55f40ca1-AZURE-650972848269185c55f40ca2 +``` + +For more information see: +- [MongoDB Atlas API - Private Endpoint for Encryption at Rest Using Customer Key Management](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRestPrivateEndpoint) Documentation. +- [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/). From f38faafb79da595343caec55345ff3d62bb71d4b Mon Sep 17 00:00:00 2001 From: svc-apix-bot Date: Mon, 9 Sep 2024 17:21:15 +0000 Subject: [PATCH 08/16] chore: Updates CHANGELOG.md for #2569 --- CHANGELOG.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index c186fb5d7a..e8b5ec2f06 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,13 +8,19 @@ NOTES: FEATURES: +* **New Data Source:** `data-source/mongodbatlas_encryption_at_rest` ([#2538](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2538)) +* **New Data Source:** `data-source/mongodbatlas_encryption_at_rest_private_endpoint` ([#2527](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2527)) +* **New Data Source:** `data-source/mongodbatlas_encryption_at_rest_private_endpoints` ([#2536](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2536)) * **New Data Source:** `data-source/mongodbatlas_project_ip_addresses` ([#2533](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2533)) +* **New Resource:** `resource/mongodbatlas_encryption_at_rest_private_endpoint` ([#2512](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2512)) ENHANCEMENTS: * data-source/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) * data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) * resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +* resource/mongodbatlas_encryption_at_rest: Adds `aws_kms_config.0.valid`, `azure_key_vault_config.0.valid` and `google_cloud_kms_config.0.valid` attribute ([#2538](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2538)) +* resource/mongodbatlas_encryption_at_rest: Adds new `azure_key_vault_config.#.require_private_networking` field to enable connection to Azure Key Vault over private networking ([#2509](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2509)) BUG FIXES: From 
8074ba6f6c3909f8b3becbda2a1ba314f652e218 Mon Sep 17 00:00:00 2001 From: Oriol Date: Tue, 10 Sep 2024 09:26:47 +0200 Subject: [PATCH 09/16] feat: Adds `mongodbatlas_stream_processor` resource and data sources (#2566) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * chore: dev-branch start * feat: initial implementation of the StreamProcessor schema (#2489) * feat: Implements the `mongodbatlas_stream_processor` data source (#2497) * refactor: move diffSuppressJSON logic to common.schemafunc package * chore: implement the `streamprocessor` data source schema * refactor: use string for pipeline in resource_schema * test: fix broken search index tests * feat: implement StreamProcessorDS and add acceptance test * chore: add changelog entry * feat: Implements `mongodbatlas_stream_processor` resource (#2501) * feat: initial implementation of the StreamProcessor schema * implement resource stream processor * state transition tests * temp: comment out acc tests * fix import and add resource * refactors * doc check * start stream after creation if in the plan the state is STARTED * fixes after merge * model tests and methods for sdk to tf * final refactors * enable tests in CI * use diff instance name in tests * use random instance name * pr comments * valuestring for pipeline * add cluster test * add test for dropped state transition * adapt test into new format * remove change_stream_token and add stats to resource * fix stats and options * make options optional only and kafka test with options * pr comments * refactor test config names and remove unnecessary method --------- Co-authored-by: EspenAlbert * feat: Implements `mongodbatlas_stream_processors` plural data source (#2505) * feat: Implements `streamprocessor` plural data source * refactor: support not using `id` in PaginatedDSSchema * test: add unit tests for plural data source * docs: add changelog entry * fix: unsafe totalCount * fix: rename data-sources * generic pagination 1 * generic pagination 2 * refactor: move AllPages to common package * fix: don't use PaginatedDSSchema * refactor: revert a6aa9b55a45543a36312abe04386d4870c20382c * refactor: improve naming * refactor: remove pagination parameters * fix typo * fix: add descriptions to schema * refactor: rename i to currentPage * test: reuse state names from package * test: Refactors data_source_test.go to resource_test.go and other test improvements (#2517) * test: Refactors test configs * test: refactor checks * test: refactor data source tests to resource_test.go * test: suppor testing options * refactor: shorten names * test: minor fix and avoid printing * doc: Adds examples and documentation for `mongodbatlas_stream_processor` resource and data sources (#2516) * add examples and autogen of docs * improve state attribute documentation * add stream processor to generate-doc-check * fix plural data source and regenerate docs * exclude stream processor resource doc from generate-doc-check * change schema description and remove exlude from git diff * use jsonencode for pipeline * PR comments and extend example * specify autostart on creation is possible * wording * chore: Improves state transition error handling of 404 case in mongodbatlas_stream_processor (#2518) * refactor: increases timeout to support larger pipeline to stop (#2522) * doc: Adds guidance on how to “update” a `mongodbatlas_stream_processor` resource (#2521) * update when only state has changed * add test that fails on update * add guidance on how to update * adjust doc * 
remove changes from other PR * mention processor has to be running for stats to be available * Update templates/resources/stream_processor.md.tmpl Co-authored-by: Espen Albert * generate doc --------- Co-authored-by: Espen Albert * test: Adds error and migration tests (#2519) * test: add migration test * test: TestAccStreamProcessor_JSONWhiteSpaceFormat * refactor: remove DiffSuppressJSON (didn't work, still showed diff) * fix: support nonStandard formatted JSON * test: fix broken unit test * test: fix invalid transition from CREATED -> STOPPED * update when only state has changed * test: ensure invalid JSON is thrown * fix: exit when invalid state is set on create * chore: `checkDestroyStreamProcessor` still useful to verify there is no stream_processor in state and API * feat: Add support for JsonString * fix: use CustomType: JSONStringType to support different JSON formats --------- Co-authored-by: Oriol Arbusi * chore: Merges master to use the new SDK without dev-preview (#2554) * TEMPORARY: adding SDK preview version in client (#2362) * chore: Update with latest changes in master (including test adjustments in advanced cluster) (#2369) * test: Refactors `mongodb_advanced_cluster` tests (#2361) * refactor common logic in checks * fix copyChecks * checkTenant * use ComposeAggregateTestCheckFunc * checkSingleProvider * checkAggr * leftovers * fix checkSingleProvider * checkTags * checkMultiCloud * checkMultiCloudSharded and checkSingleProviderPaused * checkAdvanced * checkAdvancedDefaultWrite * checkMultiZoneWithShards * regionConfigs in checkMultiCloud * fix tests * fix TestAccClusterAdvancedCluster_multicloudSharded * chore: Bump github.com/hashicorp/hcl/v2 from 2.20.1 to 2.21.0 (#2368) Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.20.1 to 2.21.0. - [Release notes](https://github.com/hashicorp/hcl/releases) - [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/hcl/compare/v2.20.1...v2.21.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/hcl/v2 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.4 to 1.54.8 (#2367) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.4 to 1.54.8. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.4...v1.54.8) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/go-getter from 1.7.4 to 1.7.5 (#2364) Bumps [github.com/hashicorp/go-getter](https://github.com/hashicorp/go-getter) from 1.7.4 to 1.7.5. - [Release notes](https://github.com/hashicorp/go-getter/releases) - [Changelog](https://github.com/hashicorp/go-getter/blob/main/.goreleaser.yml) - [Commits](https://github.com/hashicorp/go-getter/compare/v1.7.4...v1.7.5) --- updated-dependencies: - dependency-name: github.com/hashicorp/go-getter dependency-type: indirect ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump tj-actions/verify-changed-files (#2365) Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 3db0da1f9e3afd57302597a8a2777b1e673de1fa to 11ea2b36f98609331b8dc9c5ad9071ee317c6d28. - [Release notes](https://github.com/tj-actions/verify-changed-files/releases) - [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md) - [Commits](https://github.com/tj-actions/verify-changed-files/compare/3db0da1f9e3afd57302597a8a2777b1e673de1fa...11ea2b36f98609331b8dc9c5ad9071ee317c6d28) --- updated-dependencies: - dependency-name: tj-actions/verify-changed-files dependency-type: direct:production ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/go-test/deep from 1.1.0 to 1.1.1 (#2366) Bumps [github.com/go-test/deep](https://github.com/go-test/deep) from 1.1.0 to 1.1.1. - [Release notes](https://github.com/go-test/deep/releases) - [Changelog](https://github.com/go-test/deep/blob/master/CHANGES.md) - [Commits](https://github.com/go-test/deep/compare/v1.1.0...v1.1.1) --- updated-dependencies: - dependency-name: github.com/go-test/deep dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --------- Signed-off-by: dependabot[bot] Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat: Calling new API version when creating clusters (#2363) * wip - calling new API for create operation * update sdk preview version and support global cluster config * propagate disk size gb to inner level when making request * add logic for defining multiple replication specs * add small comment * fix typo in comment * explicit TODO prefix for actionable comments * small adjustment to avoid pointer to empty slice * feat: Updates singular `mongodbatlas_advanced_cluster` data source to support independent shard scaling & updates relevant flattener methods (#2373) * upgrade atlas SDK dev (#2374) * feat: Support new API version for read operation in advanced cluster resource (#2381) * initial changes still making use of ds function for replication specs * wip * small comments * adjust symmetric shard cluster test * adjusting tests * remove comment * add disk size gb at root level if calling old api * include generic function * adjust test check * refactor function for making checks and adding data source * remove comments * use latest api for retry function that verifies state of cluster * use preview of get operations that check transition of state of a cluster * extract setResourceRootFields into common code * add associated ticket to existing TODO comments * chore: Merge with latest master changes updating SDK to v20240530002 (#2390) * doc: Updates `mongodbatlas_global_cluster_config` doc about self-managed sharding clusters (#2372) * update doc * add link * test: Unifies Azure and GCP networking tests (#2371) * unify Azure and GCP tests * TEMPORARY no update * Revert "TEMPORARY no update" This reverts commit ab60d67dece8f53272b2fad4a68b60b890e7636c. 
* run in parallel * chore: Updates examples link in index.html.markdown for v1.17.3 release * chore: Updates CHANGELOG.md header for v1.17.3 release * doc: Updates Terraform Compatibility Matrix documentation (#2370) Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * use ComposeAggregateTestCheckFunc (#2375) * chore: Updates asdf to TF 1.9.0 and compatibility matrix body (#2376) * update asdf to TF 1.9.0 * update compatibility message * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Fix actionlint --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: stale.yaml gh action (#2379) * doc: Updates alert-config examples (#2378) * doc: Update alert-config examples * doc: Removes other references to GROUP_CHARTS_ADMIN * chore: align table * chore: Updates Atlas Go SDK (#2380) * build(deps): bump go.mongodb.org/atlas-sdk * rename DiskBackupSnapshotAWSExportBucket to DiskBackupSnapshotExportBucket * add param to DeleteAtlasSearchDeployment * add LatestDefinition * more LatestDefinition and start using SearchIndexCreateRequest * HasElementsSliceOrMap * update * ToAnySlicePointer * fix update --------- Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.8 to 1.54.13 (#2383) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.8 to 1.54.13. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.8...v1.54.13) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump amannn/action-semantic-pull-request from 5.5.2 to 5.5.3 (#2382) Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.5.2 to 5.5.3. - [Release notes](https://github.com/amannn/action-semantic-pull-request/releases) - [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md) - [Commits](https://github.com/amannn/action-semantic-pull-request/compare/cfb60706e18bc85e8aec535e3c577abe8f70378e...0723387faaf9b38adef4775cd42cfd5155ed6017) --- updated-dependencies: - dependency-name: amannn/action-semantic-pull-request dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * test: Improves tests for mongodbatlas_search_index (#2384) * checkVector * checkBasic * checkWithMapping * checkWithSynonyms * checkAdditional * checkAdditionalAnalyzers and checkAdditionalMappingsFields * remove addAttrChecks and addAttrSetChecks * use commonChecks in all checks * test checks cleanup * chore: Updates nightly tests to TF 1.9.x (#2386) * update nightly tests to TF 1.9.x * use TF var * keep until 1.3.x * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: Emptying cloud_back_schedule "copy_settings" (#2387) * test: add test to reproduce Github Issue * fix: update copy_settings on changes (even when empty) * docs: Add changelog entry * chore: fix changelog entry * apply review comments * chore: Updates CHANGELOG.md for #2387 * fixing merge conflicts and adopting preview version * chore: Updates delete logic for `mongodbatlas_search_deployment` (#2389) * update delete logic * update unit test * add test for symmetric sharded cluster using old schema and skip related tests referencing ticket --------- Signed-off-by: dependabot[bot] Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Andrea Angiolillo Co-authored-by: Espen Albert Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Update dev preview to take latest version of cluster API (#2406) * chore: Adds validation in update operation when schema compatible with new API transitions to old schema (#2407) * feat: Support `disk_size_gb` at region config level while also preserving backwards compatibility with root level value (#2405) * wip * define function for setting root disk_size_gb when calling new api in read * add test with old shard schema and disk size gb at inner level * adjust check which was searching for readOnly spec * define root size gb at root level of data source when using new API * add unit testing and adjust acceptance test * fix float comparison * fix merge conflicts * include structs defined in test case * chore: Fix compilation and refactor test names to be more explicit in advanced_cluster (#2416) * fix test name * rename specific test names for clarity * add migration test for check correct adoption of external_id (#2420) * chore: Groups common code in data source and resource for populating root fields (#2419) * chore: Deprecates attributes associated to old `mongodbatlas_advanced_cluster` resource & data source schemas (#2421) * doc: Defining migration guide for Independent Shard Scaling feature (#2434) * define 1.18.0 migration guide * small fixes * Update website/docs/guides/1.18.0-upgrade-guide.html.markdown Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Update website/docs/guides/1.18.0-upgrade-guide.html.markdown Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * addressing comments and suggestions * rename and move files anticipating to new structure --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * chore: Merges master into dev (#2443) * doc: Updates `mongodbatlas_global_cluster_config` doc about 
self-managed sharding clusters (#2372) * update doc * add link * test: Unifies Azure and GCP networking tests (#2371) * unify Azure and GCP tests * TEMPORARY no update * Revert "TEMPORARY no update" This reverts commit ab60d67dece8f53272b2fad4a68b60b890e7636c. * run in parallel * chore: Updates examples link in index.html.markdown for v1.17.3 release * chore: Updates CHANGELOG.md header for v1.17.3 release * doc: Updates Terraform Compatibility Matrix documentation (#2370) Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * use ComposeAggregateTestCheckFunc (#2375) * chore: Updates asdf to TF 1.9.0 and compatibility matrix body (#2376) * update asdf to TF 1.9.0 * update compatibility message * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Fix actionlint --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: stale.yaml gh action (#2379) * doc: Updates alert-config examples (#2378) * doc: Update alert-config examples * doc: Removes other references to GROUP_CHARTS_ADMIN * chore: align table * chore: Updates Atlas Go SDK (#2380) * build(deps): bump go.mongodb.org/atlas-sdk * rename DiskBackupSnapshotAWSExportBucket to DiskBackupSnapshotExportBucket * add param to DeleteAtlasSearchDeployment * add LatestDefinition * more LatestDefinition and start using SearchIndexCreateRequest * HasElementsSliceOrMap * update * ToAnySlicePointer * fix update --------- Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.8 to 1.54.13 (#2383) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.8 to 1.54.13. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.8...v1.54.13) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump amannn/action-semantic-pull-request from 5.5.2 to 5.5.3 (#2382) Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.5.2 to 5.5.3. - [Release notes](https://github.com/amannn/action-semantic-pull-request/releases) - [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md) - [Commits](https://github.com/amannn/action-semantic-pull-request/compare/cfb60706e18bc85e8aec535e3c577abe8f70378e...0723387faaf9b38adef4775cd42cfd5155ed6017) --- updated-dependencies: - dependency-name: amannn/action-semantic-pull-request dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * test: Improves tests for mongodbatlas_search_index (#2384) * checkVector * checkBasic * checkWithMapping * checkWithSynonyms * checkAdditional * checkAdditionalAnalyzers and checkAdditionalMappingsFields * remove addAttrChecks and addAttrSetChecks * use commonChecks in all checks * test checks cleanup * chore: Updates nightly tests to TF 1.9.x (#2386) * update nightly tests to TF 1.9.x * use TF var * keep until 1.3.x * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: Emptying cloud_back_schedule "copy_settings" (#2387) * test: add test to reproduce Github Issue * fix: update copy_settings on changes (even when empty) * docs: Add changelog entry * chore: fix changelog entry * apply review comments * chore: Updates CHANGELOG.md for #2387 * chore: Updates delete logic for `mongodbatlas_search_deployment` (#2389) * update delete logic * update unit test * refactor: use advanced_cluster instead of cluster (#2392) * fix: Returns error if the analyzers attribute contains unknown fields. (#2394) * fix: Returns error if the analyzers attribute contains unknown fields. * adds changelog file. * Update .changelog/2394.txt Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2394 * chore: Bump github.com/aws/aws-sdk-go from 1.54.13 to 1.54.17 (#2401) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.13 to 1.54.17. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.13...v1.54.17) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-testing (#2400) Bumps [github.com/hashicorp/terraform-plugin-testing](https://github.com/hashicorp/terraform-plugin-testing) from 1.8.0 to 1.9.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-testing/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-testing/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-testing/compare/v1.8.0...v1.9.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-testing dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-framework (#2398) Bumps [github.com/hashicorp/terraform-plugin-framework](https://github.com/hashicorp/terraform-plugin-framework) from 1.9.0 to 1.10.0. 
- [Release notes](https://github.com/hashicorp/terraform-plugin-framework/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-framework/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-framework/compare/v1.9.0...v1.10.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-framework dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-framework-validators (#2399) Bumps [github.com/hashicorp/terraform-plugin-framework-validators](https://github.com/hashicorp/terraform-plugin-framework-validators) from 0.12.0 to 0.13.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-framework-validators/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-framework-validators/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-framework-validators/compare/v0.12.0...v0.13.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-framework-validators dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * test: Uses hclwrite to generate the cluster for GetClusterInfo (#2404) * test: Use hclwrite to generate the cluster for GetClusterInfo * test: fix unit test * refactor: minor improvements * refactor: use Zone 1 as the default ZoneName to make tests pass * refactor: remove num_shards in request and add more tests * fix: use same default region as before * test: Support disk_size_gb for ClusterInfo and add test case for multiple dependencies * refactor: move replication specs to ClusterRequest * test: add support for CloudRegionConfig * add: suggestions from PR comments * refactor: use acc.ReplicationSpecRequest instead of admin.ReplicationSpec * fix: Fixes `disk_iops` attribute for Azure cloud provider in `mongodbatlas_advanced_cluster` resource (#2396) * fix disk_iops in Azure * expand * tests for disk_iops * chore: Updates CHANGELOG.md for #2396 * test: Refactors `mongodbatlas_private_endpoint_regional_mode` to use cluster info (#2403) * test: refactor to use cluster info * test: enable test in CI and fix duplicate zone name * test: use AWS_REGION_UPPERCASE and add pre-checks * fix: use clusterResourceName * test: fix GetClusterInfo call * fix: pre check call * fix: add UPPERCASE/LOWERCASE to network test suite * test: Skip in ci since it is slow and use new GetClusterInfo api * test: Fix the broken test and simpify assert statements * test: enable in CI, after refactorings ~1230s * test: Refactors resource tests to use GetClusterInfo `online_archive` (#2409) * feat: adds support for Tags & AutoScalingDiskGbEnabled * feat: refactor tests to use GetClusterInfo & new SDK * chore: fomatting fix * test: make unit test deterministic * test: onlinearchive force us_east_1 * spelling in comment * test: fix migration test to use package clusterRequest (with correct region) * update .tool-versions (#2417) * feat: Adds `stored_source` attribute to `mongodbatlas_search_index` resource and corresponding data sources (#2388) * fix ds schemas * add changelog * add storedSource to configBasic and checkBasic * update doc about index_id * 
update boolean test * first implementation of stored_source as string * create model file * marshal * don't allow update * test for objects in stored_source * TestAccSearchIndex_withStoredSourceUpdate * update StoredSource * fix merge * tests for storedSource updates * swap test names * doc * chore: Updates CHANGELOG.md for #2388 * doc: Improves Guides menu (#2408) * add 0.8.2 metadata * update old category and remove unneeded headers * update page_title * fix titles * remove old guide * test: Refactors resource tests to use GetClusterInfo `ldap_configuration` (#2411) * test: Refactors resource tests to use GetClusterInfo ldap_configuration * test: Fix depends_on clause * test: remove unused clusterName and align fields * test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` (#2413) * test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` * test: fix reference to clusterResourceName * doc: Clarify usage of maintenance window resource (#2418) * test: Refactors resource tests to use GetClusterInfo `cloud_backup_schedule` (#2414) * test: Cluster support PitEnabled * test: Refactors resource tests to use GetClusterInfo `mongodbatlas_cloud_backup_schedule` * apply PR suggestions * test: fix broken test after merging * test: Refactors resource tests to use GetClusterInfo `federated_database_instance` (#2412) * test: Support getting cluster info with project * test: Refactors resource tests to use GetClusterInfo `federated_database_instance` * test: refactor, use a single GetClusterInfo and support AddDefaults * test: use renamed argument in test * doc: Removes docs headers as they are not needed (#2422) * remove unneeded YAML frontmatter headers * small adjustements * change root files * remove from templates * use Deprecated category * apply feedback * test: Refactors resource tests to use GetClusterInfo `backup_compliance_policy` (#2415) * test: Support AdvancedConfiguration, MongoDBMajorVersion, RetainBackupsEnabled, EbsVolumeType in cluster * test: refactor test to use GetClusterInfo * test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` (#2423) * test: support Priority and NodeCountReadOnly * test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` * test: reuse test case in migration test * chore: increase timeout to ensure test is passing * test: avoid global variables to ensure no duplicate cluster names * revert delete timeout change * test: Fixes DUPLICATE_CLUSTER_NAME failures (#2424) * test: fix DUPLICATE_CLUSTER_NAME online_archive * test: fix DUPLICATE_CLUSTER_NAME backup_snapshot_restore_job * test: Refactors GetClusterInfo (#2426) * test: support creating a datasource when using GetClusterInfo * test: Add documentation for cluster methods * refactor: move out config_cluster to its own file * refactor: move configClusterGlobal to the only usage file * refactor: remove ProjectIDStr field * test: update references for cluster_info fields * chore: missing whitespace * test: fix missing quotes around projectID * Update internal/testutil/acc/cluster.go Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * doc: Updates to new Terraform doc structure (#2425) * move to root doc folder * rename ds and resource folders * change file extension to .md * update doc links * gitignore * releasing instructions * git hook * codeowners * workflow template * gha workflows * scripts * 
remove website-lint * update references to html.markdown * fix compatibility script matrix * rename rest of files * fix generate doc script using docs-out folder to temporary generate all files and copying only to docs folder the specified resource files * fix typo * chore: Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 (#2433) Bumps [github.com/zclconf/go-cty](https://github.com/zclconf/go-cty) from 1.14.4 to 1.15.0. - [Release notes](https://github.com/zclconf/go-cty/releases) - [Changelog](https://github.com/zclconf/go-cty/blob/main/CHANGELOG.md) - [Commits](https://github.com/zclconf/go-cty/compare/v1.14.4...v1.15.0) --- updated-dependencies: - dependency-name: github.com/zclconf/go-cty dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.17 to 1.54.19 (#2432) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.17 to 1.54.19. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.17...v1.54.19) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump actions/setup-go from 5.0.1 to 5.0.2 (#2431) Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.1 to 5.0.2. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](https://github.com/actions/setup-go/compare/cdcb36043654635271a94b9a6d1392de5bb323a7...0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32) --- updated-dependencies: - dependency-name: actions/setup-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump tj-actions/verify-changed-files (#2430) Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 11ea2b36f98609331b8dc9c5ad9071ee317c6d28 to 79f398ac63ab46f7f820470c821d830e5c340ef9. - [Release notes](https://github.com/tj-actions/verify-changed-files/releases) - [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md) - [Commits](https://github.com/tj-actions/verify-changed-files/compare/11ea2b36f98609331b8dc9c5ad9071ee317c6d28...79f398ac63ab46f7f820470c821d830e5c340ef9) --- updated-dependencies: - dependency-name: tj-actions/verify-changed-files dependency-type: direct:production ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * refactor: avoid usage of github.com/go-test/deep (use `reflect.DeepEqual instead`) (#2427) * chore: Deletes modules folder (#2435) * remove modules folder * gitignore * chore: Makes sure doc generation is up-to-date (#2441) * generate doc * split in runs * detect changes * TEMPORARY: change 3 files to trigger doc failures * rename * Revert "TEMPORARY: change 3 files to trigger doc failures" This reverts commit cc36481d9682f46792203662db610806d6593d89. 
* chore: Enables GitHub Action linter errors in GitHub (#2440) * TEMPORARY: make action linter fail * problem matcher * Revert "TEMPORARY: make action linter fail" This reverts commit 2ea3cd5fee4836f9275f59d5daaf72213e78aabe. * update version (#2439) * doc: Updates examples & docs that use replicaSet clusters (#2428) * update basic examples * fix linter * fix tf-validate * update tflint version * fix validate * remove tf linter exceptions * make linter fail * simplify and show linter errors in GH * tlint problem matcher * problem matcher * minimum severity warning * fix linter * make tf-validate logic easier to be run in local * less verbose tf init * fix /mongodbatlas_network_peering/aws * doc for backup_compliance_policy * fix container_id reference * fix mongodbatlas_network_peering/azure * use temp fodler * fix examples/mongodbatlas_network_peering/gcp * remaining examples * fix mongodbatlas_clusters * fix adv_cluster doc * remaining doc changes * fix typo * fix examples with deprecated arguments * get the first value for containter_id * container_id in doc * address feedback * test: fix cluster config generation without num_shards * test: fix usage of replication_spec.id -> replication_spec.external_id * test: attempt fixing TestAccClusterAdvancedCluster_singleShardedMultiCloud * Revert "test: attempt fixing TestAccClusterAdvancedCluster_singleShardedMultiCloud" This reverts commit 7006935409521c6ed4bac80750331921f91f7943. * Revert "test: fix usage of replication_spec.id -> replication_spec.external_id" .id and .external_id are actually different and won't work, more context in: CLOUDP-262014 This reverts commit 2b730dbf667d5e52484c3ca3a8798d8d9a2b80c8. * test: add extra checks missed by merge conflict for checkSingleShardedMultiCloud * test: skip failing tests with a reference to the ticket * test: avoid deprecation warning to fail the test --------- Signed-off-by: dependabot[bot] Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Andrea Angiolillo Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Agustin Bettati * doc: Adds new example defining an asymmetric sharded cluster (#2444) * chore: Updates examples and documentation in resource and data source with new ISS attributes and structure (#2438) * feat: Updates plural `mongodbatlas_advanced_clusters` data source to support independent shard scaling (#2447) * doc: Adds deprecation banner to `mongodbatlas_cluster` resource and data sources (#2450) * update docs * update guide * update examples * link to migration guide * changelog * apply feedback * feedback * doc: Updates examples and docs to use `mongodbatlas_advanced_cluster` resource (#2453) * examples * doc * fix identation * apply feedback * change cluster-atlas to cluster_atlas in TF name * apply feedback * chore: Continue populating `replications_specs.*.id` value even when calling new API (#2448) * uncomment backup tests that are relying on replicationSpecs.id * initial implementation * small change in comment * slight adjust to tenant test * remove info log * fix boolean value of asymmetric cluster which is now inverted * adjust check to handle case where id is empty string * test: Moves disk_size_gb to replication spec for GetClusterInfo (#2452) * doc: Cluster to advanced cluster migration 
guide (#2451) * doc: Early draft of cluster to advanced cluster migration guide * chore: outline steps of the how-to-changes * first draft of how-to-change section * address PR comments and suggestions * address PR comments * docs: Address PR suggestions * docs: Address PR suggestions 2 * docs: Address PR suggestions 3 * update links * How-To Guide was old and doesn't exist anymore * terraform formatting and more details in the step-by-step guide + explanation section * docs: Update the advanced-cluster-new-sharding-schema * docs: use consistent page_title and top level header * docs: use consistent page_title and top level header * Revert "docs: use consistent page_title and top level header" This reverts commit 4505eb890611b47f0b2ae045b4bf70c6e6141adc. * fix: typos * Update docs/guides/cluster-to-advanced-cluster-migration-guide.md Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: update heading levels * address docs review * addressing comments on sharding schema migration guide * address docs review part 2 * address docs review part 3 * address docs review part 4 * address docs review part 5 --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: Agustin Bettati * feat: Supporting new API in update operation of advanced cluster (#2460) * wip separating logic for handling update with new API * move upgrade logic to use new API * modify tests for verifying disk_size_gb udpates * add update for new schema * adjustment to is asymmetric in checks * add third replication spec in new schema * change docs and fix updating root electable when new API is called * add test for supporting disk_size_gb change in inner spec with new schema * support update of disk_size_gb at electable level when using old schema structure * minor docs update * adjust value of disk size gb in acceptance test * add check for change in analytics specs as well * adjusting hardcoded value in check * address docs comments * feat: Updates `mongodbatlas_cloud_backup_schedule` resource to support independent shard scaling API changes (#2459) * feat: Updates `mongodbatlas_cloud_backup_schedule` data source to support independent shard scaling API changes (#2464) * test: Adjust asymmetric sharded cluster test to change disk_iops value as well (#2467) * adjust test of asymmetric shard cluster to have different disk_iops * adjust values to not pass maximum * fix tests * doc: Updates `mongodbatlas_cloud_backup_schedule` resource & data source documentation, examples, and migration guide for ISS changes (#2470) * feat: Support update transitioning from old to new sharding schema (#2466) * include acceptance and migration tests * wip * add unit tests for populating ids * fix test config definition * add data source flag when using new schema * add retry logic in exists test function, fix check of zone * renaming fix * small comment change * feat: Populate replication specs `zone_id` for all cases, even when old API is called due to old schema (#2475) * populate zone id in cases where old API is being called * adjust check in acceptance test to verify zone id is always being populated * fix test check for case of migration test * test: Supporting old schema structure for defining symmetric sharded/geo-sharded clusters (#2473) * uncomment old schema structure tests now that they are supported * add test for sharded * fix check of instance size and verifying presence of external_id * add additional retry in backup_snapshot migration test * doc: Align docs of 
advanced_cluster attributes with latest clarifications in API docs (#2477) * adjust disk iops * adjust instance size * adjusting description of replication specs * doc: Define changelog entries for ISS feature and new attributes (#2478) * include changelog entries * address review comments * including test for adding/removing replication specs in the middle for sharded and geosharded cluster (#2479) * test: Refactor migration tests to use utility function and cover migration scenarios coming from 1.17.5 (#2481) * refactor migration test and define scenarios for updating and transitioning after upgrade * fix check of external id in migration tests * doc: Include mention of Atlas UI limitation in 1.18.0 migration guide (#2485) * Include mention of UI limitation in migration guide * Update docs/guides/1.18.0-upgrade-guide.md Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> --------- Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> * fix: Adjusts auto scaling configurations during `mongodbatlas_advanced_cluster` updates (#2476) * add logic to adjust autoScaling during updates * add acceptance test * minor * adjust documentation of resource, include comment in sync function, add unit testing * migration test followup adjustments for previous PR * reduce instance sizes --------- Co-authored-by: Agustin Bettati * chore: Bring latest changes from master into dev branch (includes adopting latest stable SDK version) (#2491) * doc: Updates `mongodbatlas_global_cluster_config` doc about self-managed sharding clusters (#2372) * update doc * add link * test: Unifies Azure and GCP networking tests (#2371) * unify Azure and GCP tests * TEMPORARY no update * Revert "TEMPORARY no update" This reverts commit ab60d67dece8f53272b2fad4a68b60b890e7636c. * run in parallel * chore: Updates examples link in index.html.markdown for v1.17.3 release * chore: Updates CHANGELOG.md header for v1.17.3 release * doc: Updates Terraform Compatibility Matrix documentation (#2370) Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * use ComposeAggregateTestCheckFunc (#2375) * chore: Updates asdf to TF 1.9.0 and compatibility matrix body (#2376) * update asdf to TF 1.9.0 * update compatibility message * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Fix actionlint --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: stale.yaml gh action (#2379) * doc: Updates alert-config examples (#2378) * doc: Update alert-config examples * doc: Removes other references to GROUP_CHARTS_ADMIN * chore: align table * chore: Updates Atlas Go SDK (#2380) * build(deps): bump go.mongodb.org/atlas-sdk * rename DiskBackupSnapshotAWSExportBucket to DiskBackupSnapshotExportBucket * add param to DeleteAtlasSearchDeployment * add LatestDefinition * more LatestDefinition and start using SearchIndexCreateRequest * HasElementsSliceOrMap * update * ToAnySlicePointer * fix update --------- Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.8 to 1.54.13 (#2383) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.8 to 1.54.13. 
- [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.8...v1.54.13) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump amannn/action-semantic-pull-request from 5.5.2 to 5.5.3 (#2382) Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.5.2 to 5.5.3. - [Release notes](https://github.com/amannn/action-semantic-pull-request/releases) - [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md) - [Commits](https://github.com/amannn/action-semantic-pull-request/compare/cfb60706e18bc85e8aec535e3c577abe8f70378e...0723387faaf9b38adef4775cd42cfd5155ed6017) --- updated-dependencies: - dependency-name: amannn/action-semantic-pull-request dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * test: Improves tests for mongodbatlas_search_index (#2384) * checkVector * checkBasic * checkWithMapping * checkWithSynonyms * checkAdditional * checkAdditionalAnalyzers and checkAdditionalMappingsFields * remove addAttrChecks and addAttrSetChecks * use commonChecks in all checks * test checks cleanup * chore: Updates nightly tests to TF 1.9.x (#2386) * update nightly tests to TF 1.9.x * use TF var * keep until 1.3.x * Update .github/workflows/update_tf_compatibility_matrix.yml Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * fix: Emptying cloud_back_schedule "copy_settings" (#2387) * test: add test to reproduce Github Issue * fix: update copy_settings on changes (even when empty) * docs: Add changelog entry * chore: fix changelog entry * apply review comments * chore: Updates CHANGELOG.md for #2387 * chore: Updates delete logic for `mongodbatlas_search_deployment` (#2389) * update delete logic * update unit test * refactor: use advanced_cluster instead of cluster (#2392) * fix: Returns error if the analyzers attribute contains unknown fields. (#2394) * fix: Returns error if the analyzers attribute contains unknown fields. * adds changelog file. * Update .changelog/2394.txt Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2394 * chore: Bump github.com/aws/aws-sdk-go from 1.54.13 to 1.54.17 (#2401) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.13 to 1.54.17. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.13...v1.54.17) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-testing (#2400) Bumps [github.com/hashicorp/terraform-plugin-testing](https://github.com/hashicorp/terraform-plugin-testing) from 1.8.0 to 1.9.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-testing/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-testing/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-testing/compare/v1.8.0...v1.9.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-testing dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-framework (#2398) Bumps [github.com/hashicorp/terraform-plugin-framework](https://github.com/hashicorp/terraform-plugin-framework) from 1.9.0 to 1.10.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-framework/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-framework/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-framework/compare/v1.9.0...v1.10.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-framework dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/hashicorp/terraform-plugin-framework-validators (#2399) Bumps [github.com/hashicorp/terraform-plugin-framework-validators](https://github.com/hashicorp/terraform-plugin-framework-validators) from 0.12.0 to 0.13.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-framework-validators/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-framework-validators/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-framework-validators/compare/v0.12.0...v0.13.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-framework-validators dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * test: Uses hclwrite to generate the cluster for GetClusterInfo (#2404) * test: Use hclwrite to generate the cluster for GetClusterInfo * test: fix unit test * refactor: minor improvements * refactor: use Zone 1 as the default ZoneName to make tests pass * refactor: remove num_shards in request and add more tests * fix: use same default region as before * test: Support disk_size_gb for ClusterInfo and add test case for multiple dependencies * refactor: move replication specs to ClusterRequest * test: add support for CloudRegionConfig * add: suggestions from PR comments * refactor: use acc.ReplicationSpecRequest instead of admin.ReplicationSpec * fix: Fixes `disk_iops` attribute for Azure cloud provider in `mongodbatlas_advanced_cluster` resource (#2396) * fix disk_iops in Azure * expand * tests for disk_iops * chore: Updates CHANGELOG.md for #2396 * test: Refactors `mongodbatlas_private_endpoint_regional_mode` to use cluster info (#2403) * test: refactor to use cluster info * test: enable test in CI and fix duplicate zone name * test: use AWS_REGION_UPPERCASE and add pre-checks * fix: use clusterResourceName * test: fix GetClusterInfo call * fix: pre check call * fix: add UPPERCASE/LOWERCASE to network test suite * test: Skip in ci since it is slow and use new GetClusterInfo api * test: Fix the broken test and simpify assert statements * test: enable in CI, after refactorings ~1230s * test: Refactors resource tests to use GetClusterInfo `online_archive` (#2409) * feat: adds support for Tags & AutoScalingDiskGbEnabled * feat: refactor tests to use GetClusterInfo & new SDK * chore: fomatting fix * test: make unit test deterministic * test: onlinearchive force us_east_1 * spelling in comment * test: fix migration test to use package clusterRequest (with correct region) * update .tool-versions (#2417) * feat: Adds `stored_source` attribute to `mongodbatlas_search_index` resource and corresponding data sources (#2388) * fix ds schemas * add changelog * add storedSource to configBasic and checkBasic * update doc about index_id * update boolean test * first implementation of stored_source as string * create model file * marshal * don't allow update * test for objects in stored_source * TestAccSearchIndex_withStoredSourceUpdate * update StoredSource * fix merge * tests for storedSource updates * swap test names * doc * chore: Updates CHANGELOG.md for #2388 * doc: Improves Guides menu (#2408) * add 0.8.2 metadata * update old category and remove unneeded headers * update page_title * fix titles * remove old guide * test: Refactors resource tests to use GetClusterInfo `ldap_configuration` (#2411) * test: Refactors resource tests to use GetClusterInfo ldap_configuration * test: Fix depends_on clause * test: remove unused clusterName and align fields * test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` (#2413) * test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` * test: fix reference to clusterResourceName * doc: Clarify usage of maintenance window resource (#2418) * test: Refactors resource tests to use GetClusterInfo `cloud_backup_schedule` (#2414) * test: Cluster support PitEnabled * test: Refactors resource tests to use GetClusterInfo `mongodbatlas_cloud_backup_schedule` * apply PR suggestions * test: fix broken test after merging * test: Refactors resource tests to use GetClusterInfo 
`federated_database_instance` (#2412) * test: Support getting cluster info with project * test: Refactors resource tests to use GetClusterInfo `federated_database_instance` * test: refactor, use a single GetClusterInfo and support AddDefaults * test: use renamed argument in test * doc: Removes docs headers as they are not needed (#2422) * remove unneeded YAML frontmatter headers * small adjustements * change root files * remove from templates * use Deprecated category * apply feedback * test: Refactors resource tests to use GetClusterInfo `backup_compliance_policy` (#2415) * test: Support AdvancedConfiguration, MongoDBMajorVersion, RetainBackupsEnabled, EbsVolumeType in cluster * test: refactor test to use GetClusterInfo * test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` (#2423) * test: support Priority and NodeCountReadOnly * test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` * test: reuse test case in migration test * chore: increase timeout to ensure test is passing * test: avoid global variables to ensure no duplicate cluster names * revert delete timeout change * test: Fixes DUPLICATE_CLUSTER_NAME failures (#2424) * test: fix DUPLICATE_CLUSTER_NAME online_archive * test: fix DUPLICATE_CLUSTER_NAME backup_snapshot_restore_job * test: Refactors GetClusterInfo (#2426) * test: support creating a datasource when using GetClusterInfo * test: Add documentation for cluster methods * refactor: move out config_cluster to its own file * refactor: move configClusterGlobal to the only usage file * refactor: remove ProjectIDStr field * test: update references for cluster_info fields * chore: missing whitespace * test: fix missing quotes around projectID * Update internal/testutil/acc/cluster.go Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * doc: Updates to new Terraform doc structure (#2425) * move to root doc folder * rename ds and resource folders * change file extension to .md * update doc links * gitignore * releasing instructions * git hook * codeowners * workflow template * gha workflows * scripts * remove website-lint * update references to html.markdown * fix compatibility script matrix * rename rest of files * fix generate doc script using docs-out folder to temporary generate all files and copying only to docs folder the specified resource files * fix typo * chore: Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 (#2433) Bumps [github.com/zclconf/go-cty](https://github.com/zclconf/go-cty) from 1.14.4 to 1.15.0. - [Release notes](https://github.com/zclconf/go-cty/releases) - [Changelog](https://github.com/zclconf/go-cty/blob/main/CHANGELOG.md) - [Commits](https://github.com/zclconf/go-cty/compare/v1.14.4...v1.15.0) --- updated-dependencies: - dependency-name: github.com/zclconf/go-cty dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.17 to 1.54.19 (#2432) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.17 to 1.54.19. 
- [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.17...v1.54.19) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump actions/setup-go from 5.0.1 to 5.0.2 (#2431) Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.1 to 5.0.2. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](https://github.com/actions/setup-go/compare/cdcb36043654635271a94b9a6d1392de5bb323a7...0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32) --- updated-dependencies: - dependency-name: actions/setup-go dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump tj-actions/verify-changed-files (#2430) Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 11ea2b36f98609331b8dc9c5ad9071ee317c6d28 to 79f398ac63ab46f7f820470c821d830e5c340ef9. - [Release notes](https://github.com/tj-actions/verify-changed-files/releases) - [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md) - [Commits](https://github.com/tj-actions/verify-changed-files/compare/11ea2b36f98609331b8dc9c5ad9071ee317c6d28...79f398ac63ab46f7f820470c821d830e5c340ef9) --- updated-dependencies: - dependency-name: tj-actions/verify-changed-files dependency-type: direct:production ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * refactor: avoid usage of github.com/go-test/deep (use `reflect.DeepEqual instead`) (#2427) * chore: Deletes modules folder (#2435) * remove modules folder * gitignore * chore: Makes sure doc generation is up-to-date (#2441) * generate doc * split in runs * detect changes * TEMPORARY: change 3 files to trigger doc failures * rename * Revert "TEMPORARY: change 3 files to trigger doc failures" This reverts commit cc36481d9682f46792203662db610806d6593d89. * chore: Enables GitHub Action linter errors in GitHub (#2440) * TEMPORARY: make action linter fail * problem matcher * Revert "TEMPORARY: make action linter fail" This reverts commit 2ea3cd5fee4836f9275f59d5daaf72213e78aabe. 
* update version (#2439) * doc: Updates examples & docs that use replicaSet clusters (#2428) * update basic examples * fix linter * fix tf-validate * update tflint version * fix validate * remove tf linter exceptions * make linter fail * simplify and show linter errors in GH * tlint problem matcher * problem matcher * minimum severity warning * fix linter * make tf-validate logic easier to be run in local * less verbose tf init * fix /mongodbatlas_network_peering/aws * doc for backup_compliance_policy * fix container_id reference * fix mongodbatlas_network_peering/azure * use temp fodler * fix examples/mongodbatlas_network_peering/gcp * remaining examples * fix mongodbatlas_clusters * fix adv_cluster doc * remaining doc changes * fix typo * fix examples with deprecated arguments * get the first value for containter_id * container_id in doc * address feedback * fix MongoDB_Atlas (#2445) * chore: Updates examples link in index.md for v1.17.4 release * chore: Updates CHANGELOG.md header for v1.17.4 release * chore: Migrates `mongodbatlas_cloud_backup_snapshot_export_job` to new auto-generated SDK (#2436) * migrate to new auto-generated SDK * refactor and deprecate err_msg field * add changelog entry * docs * change deprecation version to 1.20 * reduce changelog explanation * chore: Migrates `mongodbatlas_project_api_key` to new auto-generated SDK (#2437) * resource create * migrate update read and delete of resource * data sources migrated to new sdk * remove apiUserId from create and update in payload(is read only) * PR comments * chore: Removes usage of old Admin SDK in tests (#2442) * remove matlas from alert_configuration test * remove matlas from custom_db_role test * chore: Updates CHANGELOG.md for #2436 * chore: Clean up usages of old SDK (#2449) * remove usages of old SDK * add az2 to vpc endpoint * Revert "add az2 to vpc endpoint" This reverts commit ce6f7cc09d4d31292479cc58dd3c5d9e92dd7738. * skip flaky test * allow 0 (#2456) * fix: Fixes creation of organization (#2462) * fix TerraformVersion interface conversion * refactor organization resource * add changelog entry * PR comment * chore: Updates CHANGELOG.md for #2462 * fix: Fixes nil pointer dereference in `mongodbatlas_alert_configuration` (#2463) * fix nil pointer dereference * avoid nil pointer dereference in metric_threshold_config * changelog entry * changelog suggestion * Update .changelog/2463.txt Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * remove periods at the end of changelog entries to make it consistent --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2463 * chore: Updates examples link in index.md for v1.17.5 release * chore: Updates CHANGELOG.md header for v1.17.5 release * chore: Bump golangci/golangci-lint-action from 6.0.1 to 6.1.0 (#2469) Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6.0.1 to 6.1.0. - [Release notes](https://github.com/golangci/golangci-lint-action/releases) - [Commits](https://github.com/golangci/golangci-lint-action/compare/a4f60bb28d35aeee14e6880718e0c85ff1882e64...aaa42aa0628b4ae2578232a66b541047968fac86) --- updated-dependencies: - dependency-name: golangci/golangci-lint-action dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/aws/aws-sdk-go from 1.54.19 to 1.55.5 (#2468) Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.19 to 1.55.5. - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.19...v1.55.5) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix: Handles update of `mongodbatlas_backup_compliance_policy` as a create operation (#2480) * handle update as a create * add test to make sure no plan changes appear when reapplying config with non default values * add changelog * fix projectId * fix name of resource in test * Update .changelog/2480.txt Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> --------- Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2480 * chore: Updates examples link in index.md for v1.17.6 release * chore: Updates CHANGELOG.md header for v1.17.6 release * feat: Adds azure support for backup snapshot export bucket (#2486) * feat: add azure support for backup snapshot export bucket * fix: add acceptance test configuration * fix changelog entry number * upgrade azuread to 2.53.1 in example * fix checks * fix checks for mongodbatlas_access_list_api_key * fix docs check * fix docs check for data source * add readme.md in examples * use acc.AddAttrChecks in tests * remove importstateverifyignore --------- Co-authored-by: Luiz Viana * chore: Updates CHANGELOG.md for #2486 * chore: Improves backup_compliance_policy test(#2484) * chore: Updates Atlas Go SDK to version 2024-08-05 (#2487) * automatic changes with renaming * fix trivial compilation errors * include 2024-05-30 version and adjust cloud-backup-schedule to use old SDK * adjust global-cluster-config to use old API * adjust advanced-cluster to use old API * fix hcl config generation remove num_shards attribute * manual fixes of versions in advanced cluster, cloud backup schedule, and other small compilations * fix incorrect merging in cloud backup schedule tests * using connV2 for import in advanced cluster * use lastest sdk model for tests that require autoscaling model * avoid using old SDK for delete operation --------- Signed-off-by: dependabot[bot] Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Andrea Angiolillo Co-authored-by: Espen Albert Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Oriol Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> Co-authored-by: Luiz Viana * remove duplicate checks, adjust version constraint in example, fix typo in migration guide * revert version constraint change in example * chore: Updates CHANGELOG.md for #2492 * chore: revert cluster deprecatin but include migration guide (#2498) * remove!: Removes deprecated attributes targeting 1.18.0 (#2499) * removing scheme from third_party_integration 
* remove page_num and items_per_page in federated_settings_identity_providers * changes in id of mongodbatlas_cloud_backupsnapshot_export_bucket * changes in id of mongodbatlas_cloud_backupsnapshot_export_job * created_at attribute in cloud_backup_snapshot_restore_job * remove job_id attribute from cloud_backup_snapshot_restore data source * service attachment name removal in privatelink endpoint service * adjust test of federated settings identity provider * remove id argument in cloud backup snapshot export bucket * Rephrase to positive statement * chore: Updates CHANGELOG.md fo… * chore: Supports `options` in read and list for `stream_processor` (#2526) * feat: Support using `options` from API in resource and data sources * fix: data source options should be computed * doc: data source updates * fix: handle case of empty options {} returned * address PR comment * missing changelog * doc: Updates documentation for `mongodbatlas_stream_processor` (#2565) * docs update * PR comments * pr comments --------- Signed-off-by: dependabot[bot] Co-authored-by: EspenAlbert Co-authored-by: Espen Albert Co-authored-by: Agustin Bettati Co-authored-by: Aastha Mahendru Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: Andrea Angiolillo Co-authored-by: Marco Suma Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com> Co-authored-by: Luiz Viana Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: Zuhair Ahmed Co-authored-by: rubenVB01 <95967197+rubenVB01@users.noreply.github.com> --- .changelog/2497.txt | 3 + .changelog/2501.txt | 3 + .changelog/2566.txt | 3 + .github/workflows/acceptance-tests-runner.yml | 2 + .github/workflows/code-health.yml | 2 + docs/data-sources/stream_processor.md | 141 ++++++ docs/data-sources/stream_processors.md | 156 ++++++ docs/resources/stream_processor.md | 166 ++++++ .../atlas-streams-user-journey.md | 22 - .../mongodbatlas_stream_processor/README.md | 14 + .../atlas-streams-user-journey.md | 9 + .../mongodbatlas_stream_processor/main.tf | 97 ++++ .../mongodbatlas_stream_processor/provider.tf | 4 + .../variables.tf | 29 ++ .../mongodbatlas_stream_processor/versions.tf | 9 + internal/common/fwtypes/json_string.go | 141 ++++++ internal/common/schemafunc/json.go | 28 ++ internal/common/schemafunc/json_test.go | 30 ++ internal/provider/provider.go | 5 + .../service/searchindex/model_search_index.go | 25 +- .../service/streamprocessor/data_source.go | 54 ++ .../streamprocessor/data_source_plural.go | 43 ++ .../data_source_plural_schema.go | 57 +++ .../streamprocessor/data_source_schema.go | 70 +++ internal/service/streamprocessor/main_test.go | 15 + internal/service/streamprocessor/model.go | 184 +++++++ .../service/streamprocessor/model_test.go | 296 +++++++++++ internal/service/streamprocessor/resource.go | 264 ++++++++++ .../resource_migration_test.go | 12 + .../streamprocessor/resource_schema.go | 133 +++++ .../service/streamprocessor/resource_test.go | 472 ++++++++++++++++++ .../streamprocessor/state_transition.go | 61 +++ .../streamprocessor/state_transition_test.go | 156 ++++++ .../tfplugingen/generator_config.yml | 24 + 
scripts/schema-scaffold.sh | 2 +- .../data-sources/stream_processor.md.tmpl | 10 + .../data-sources/stream_processors.md.tmpl | 10 + templates/resources.md.tmpl | 1 + templates/resources/stream_processor.md.tmpl | 30 ++ 39 files changed, 2737 insertions(+), 46 deletions(-) create mode 100644 .changelog/2497.txt create mode 100644 .changelog/2501.txt create mode 100644 .changelog/2566.txt create mode 100644 docs/data-sources/stream_processor.md create mode 100644 docs/data-sources/stream_processors.md create mode 100644 docs/resources/stream_processor.md delete mode 100644 examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md create mode 100644 examples/mongodbatlas_stream_processor/README.md create mode 100644 examples/mongodbatlas_stream_processor/atlas-streams-user-journey.md create mode 100644 examples/mongodbatlas_stream_processor/main.tf create mode 100644 examples/mongodbatlas_stream_processor/provider.tf create mode 100644 examples/mongodbatlas_stream_processor/variables.tf create mode 100644 examples/mongodbatlas_stream_processor/versions.tf create mode 100644 internal/common/fwtypes/json_string.go create mode 100644 internal/common/schemafunc/json.go create mode 100644 internal/common/schemafunc/json_test.go create mode 100644 internal/service/streamprocessor/data_source.go create mode 100644 internal/service/streamprocessor/data_source_plural.go create mode 100644 internal/service/streamprocessor/data_source_plural_schema.go create mode 100644 internal/service/streamprocessor/data_source_schema.go create mode 100644 internal/service/streamprocessor/main_test.go create mode 100644 internal/service/streamprocessor/model.go create mode 100644 internal/service/streamprocessor/model_test.go create mode 100644 internal/service/streamprocessor/resource.go create mode 100644 internal/service/streamprocessor/resource_migration_test.go create mode 100644 internal/service/streamprocessor/resource_schema.go create mode 100644 internal/service/streamprocessor/resource_test.go create mode 100644 internal/service/streamprocessor/state_transition.go create mode 100644 internal/service/streamprocessor/state_transition_test.go create mode 100644 internal/service/streamprocessor/tfplugingen/generator_config.yml create mode 100644 templates/data-sources/stream_processor.md.tmpl create mode 100644 templates/data-sources/stream_processors.md.tmpl create mode 100644 templates/resources/stream_processor.md.tmpl diff --git a/.changelog/2497.txt b/.changelog/2497.txt new file mode 100644 index 0000000000..6b9788e416 --- /dev/null +++ b/.changelog/2497.txt @@ -0,0 +1,3 @@ +```release-note:new-datasource +data-source/mongodbatlas_stream_processor +``` diff --git a/.changelog/2501.txt b/.changelog/2501.txt new file mode 100644 index 0000000000..b358b319c4 --- /dev/null +++ b/.changelog/2501.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +mongodbatlas_stream_processor +``` diff --git a/.changelog/2566.txt b/.changelog/2566.txt new file mode 100644 index 0000000000..a6347324a2 --- /dev/null +++ b/.changelog/2566.txt @@ -0,0 +1,3 @@ +```release-note:new-datasource +data-source/mongodbatlas_stream_processors +``` diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml index 8a51636934..114b7737f8 100644 --- a/.github/workflows/acceptance-tests-runner.yml +++ b/.github/workflows/acceptance-tests-runner.yml @@ -302,6 +302,7 @@ jobs: stream: - 'internal/service/streamconnection/*.go' - 'internal/service/streaminstance/*.go' + - 
'internal/service/streamprocessor/*.go' control_plane_ip_addresses: - 'internal/service/controlplaneipaddresses/*.go' @@ -871,6 +872,7 @@ jobs: ACCTEST_PACKAGES: | ./internal/service/streamconnection ./internal/service/streaminstance + ./internal/service/streamprocessor run: make testacc control_plane_ip_addresses: diff --git a/.github/workflows/code-health.yml b/.github/workflows/code-health.yml index 652ecacdc4..9fe2183f12 100644 --- a/.github/workflows/code-health.yml +++ b/.github/workflows/code-health.yml @@ -81,6 +81,8 @@ jobs: run: make generate-doc resource_name=encryption_at_rest_private_endpoint - name: Doc for project_ip_addresses run: make generate-doc resource_name=project_ip_addresses + - name: Doc for stream_processor + run: make generate-doc resource_name=stream_processor - name: Find mutations id: self_mutation run: |- diff --git a/docs/data-sources/stream_processor.md b/docs/data-sources/stream_processor.md new file mode 100644 index 0000000000..6d4f960f60 --- /dev/null +++ b/docs/data-sources/stream_processor.md @@ -0,0 +1,141 @@ +# Data Source: mongodbatlas_stream_processor + +`mongodbatlas_stream_processor` describes a stream processor. + +## Example Usages +```terraform +resource "mongodbatlas_stream_instance" "example" { + project_id = var.project_id + instance_name = "InstanceName" + data_process_region = { + region = "VIRGINIA_USA" + cloud_provider = "AWS" + } +} + +resource "mongodbatlas_stream_connection" "example-sample" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "sample_stream_solar" + type = "Sample" +} + +resource "mongodbatlas_stream_connection" "example-cluster" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "ClusterConnection" + type = "Cluster" + cluster_name = var.cluster_name + db_role_to_execute = { + role = "atlasAdmin" + type = "BUILT_IN" + } +} + +resource "mongodbatlas_stream_connection" "example-kafka" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "KafkaPlaintextConnection" + type = "Kafka" + authentication = { + mechanism = "PLAIN" + username = var.kafka_username + password = var.kafka_password + } + bootstrap_servers = "localhost:9092,localhost:9092" + config = { + "auto.offset.reset" : "earliest" + } + security = { + protocol = "PLAINTEXT" + } +} + +resource "mongodbatlas_stream_processor" "stream-processor-sample-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "sampleProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-sample.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "CREATED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-cluster-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "clusterProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "STARTED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-kafka-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "kafkaProcessorName" + pipeline = 
jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : resource.mongodbatlas_stream_connection.example-kafka.connection_name, "topic" : "example_topic" } }]) + state = "CREATED" + options = { + dlq = { + coll = "exampleColumn" + connection_name = resource.mongodbatlas_stream_connection.example-cluster.connection_name + db = "exampleDb" + } + } +} + +data "mongodbatlas_stream_processors" "example-stream-processors" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name +} + +data "mongodbatlas_stream_processor" "example-stream-processor" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = mongodbatlas_stream_processor.stream-processor-sample-example.processor_name +} + +# example making use of data sources +output "stream_processors_state" { + value = data.mongodbatlas_stream_processor.example-stream-processor.state +} + +output "stream_processors_results" { + value = data.mongodbatlas_stream_processors.example-stream-processors.results +} +``` + + +## Schema + +### Required + +- `instance_name` (String) Human-readable label that identifies the stream instance. +- `processor_name` (String) Human-readable label that identifies the stream processor. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. + +**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. + +### Read-Only + +- `id` (String) Unique 24-hexadecimal character string that identifies the stream processor. +- `options` (Attributes) Optional configuration for the stream processor. (see [below for nested schema](#nestedatt--options)) +- `pipeline` (String) Stream aggregation pipeline you want to apply to your streaming data. +- `state` (String) The state of the stream processor. +- `stats` (String) The stats associated with the stream processor. + + +### Nested Schema for `options` + +Read-Only: + +- `dlq` (Attributes) Dead letter queue for the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/reference/glossary/#std-term-dead-letter-queue) for more information. (see [below for nested schema](#nestedatt--options--dlq)) + + +### Nested Schema for `options.dlq` + +Read-Only: + +- `coll` (String) Name of the collection to use for the DLQ. +- `connection_name` (String) Name of the connection to write DLQ messages to. Must be an Atlas connection. +- `db` (String) Name of the database to use for the DLQ. + +For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation. diff --git a/docs/data-sources/stream_processors.md b/docs/data-sources/stream_processors.md new file mode 100644 index 0000000000..b65f30755b --- /dev/null +++ b/docs/data-sources/stream_processors.md @@ -0,0 +1,156 @@ +# Data Source: mongodbatlas_stream_processors + +`mongodbatlas_stream_processors` returns all stream processors in a stream instance. 
+ +## Example Usages +```terraform +resource "mongodbatlas_stream_instance" "example" { + project_id = var.project_id + instance_name = "InstanceName" + data_process_region = { + region = "VIRGINIA_USA" + cloud_provider = "AWS" + } +} + +resource "mongodbatlas_stream_connection" "example-sample" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "sample_stream_solar" + type = "Sample" +} + +resource "mongodbatlas_stream_connection" "example-cluster" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "ClusterConnection" + type = "Cluster" + cluster_name = var.cluster_name + db_role_to_execute = { + role = "atlasAdmin" + type = "BUILT_IN" + } +} + +resource "mongodbatlas_stream_connection" "example-kafka" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "KafkaPlaintextConnection" + type = "Kafka" + authentication = { + mechanism = "PLAIN" + username = var.kafka_username + password = var.kafka_password + } + bootstrap_servers = "localhost:9092,localhost:9092" + config = { + "auto.offset.reset" : "earliest" + } + security = { + protocol = "PLAINTEXT" + } +} + +resource "mongodbatlas_stream_processor" "stream-processor-sample-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "sampleProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-sample.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "CREATED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-cluster-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "clusterProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "STARTED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-kafka-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "kafkaProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : resource.mongodbatlas_stream_connection.example-kafka.connection_name, "topic" : "example_topic" } }]) + state = "CREATED" + options = { + dlq = { + coll = "exampleColumn" + connection_name = resource.mongodbatlas_stream_connection.example-cluster.connection_name + db = "exampleDb" + } + } +} + +data "mongodbatlas_stream_processors" "example-stream-processors" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name +} + +data "mongodbatlas_stream_processor" "example-stream-processor" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = mongodbatlas_stream_processor.stream-processor-sample-example.processor_name +} + +# example making use of data sources +output "stream_processors_state" { + value = data.mongodbatlas_stream_processor.example-stream-processor.state +} + +output "stream_processors_results" { + value = data.mongodbatlas_stream_processors.example-stream-processors.results +} +``` + + 
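+A common way to consume the plural data source is to project specific fields out of `results`. The following is a minimal sketch that assumes the `example-stream-processors` data source defined above; the output name is illustrative: +
+```terraform
+output "stream_processor_names" {
+  # `results` is a list of stream processor objects; extract one attribute from each.
+  value = [for processor in data.mongodbatlas_stream_processors.example-stream-processors.results : processor.processor_name]
+}
+```
+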
+## Schema + +### Required + +- `instance_name` (String) Human-readable label that identifies the stream instance. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. + +**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. + +### Read-Only + +- `results` (Attributes List) Returns all Stream Processors within the specified stream instance. To use this resource, the requesting API Key must have the Project Owner role or Project Stream Processing Owner role. (see [below for nested schema](#nestedatt--results)) + + +### Nested Schema for `results` + +Read-Only: + +- `id` (String) Unique 24-hexadecimal character string that identifies the stream processor. +- `instance_name` (String) Human-readable label that identifies the stream instance. +- `options` (Attributes) Optional configuration for the stream processor. (see [below for nested schema](#nestedatt--results--options)) +- `pipeline` (String) Stream aggregation pipeline you want to apply to your streaming data. +- `processor_name` (String) Human-readable label that identifies the stream processor. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. + +**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. +- `state` (String) The state of the stream processor. +- `stats` (String) The stats associated with the stream processor. + + +### Nested Schema for `results.options` + +Read-Only: + +- `dlq` (Attributes) Dead letter queue for the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/reference/glossary/#std-term-dead-letter-queue) for more information. (see [below for nested schema](#nestedatt--results--options--dlq)) + + +### Nested Schema for `results.options.dlq` + +Read-Only: + +- `coll` (String) Name of the collection to use for the DLQ. +- `connection_name` (String) Name of the connection to write DLQ messages to. Must be an Atlas connection. +- `db` (String) Name of the database to use for the DLQ. + +For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation. diff --git a/docs/resources/stream_processor.md b/docs/resources/stream_processor.md new file mode 100644 index 0000000000..68b66f978d --- /dev/null +++ b/docs/resources/stream_processor.md @@ -0,0 +1,166 @@ +# Resource: mongodbatlas_stream_processor + +`mongodbatlas_stream_processor` provides a Stream Processor resource. The resource lets you create, delete, import, start and stop a stream processor in a stream instance. + +**NOTE**: Updating an Atlas Stream Processor is currently not supported. As a result, the following steps are required to change an Atlas Stream Processor with an Atlas Change Stream Source: +1. 
Retrieve the value of the Change Stream Source Token `changeStreamState` from the computed `stats` attribute in the `mongodbatlas_stream_processor` resource or data source, or from the Terraform state file. This takes the form of a [resume token](https://www.mongodb.com/docs/manual/changeStreams/#resume-tokens-from-change-events). The Stream Processor has to be running in the state `STARTED` for the `stats` attribute to be available. However, before you retrieve the value, you should set the `state` to `STOPPED` to get the latest `changeStreamState`. + - Example: + ``` + {\"changeStreamState\":{\"_data\":\"8266C71670000000012B0429296E1404\"}} + ``` +2. Update the `pipeline` argument, setting `config.startAfter` to the value retrieved in the previous step. More details can be found in the [MongoDB Collection Change Stream](https://www.mongodb.com/docs/atlas/atlas-stream-processing/sp-agg-source/#mongodb-collection-change-stream) documentation. + - Example: + ``` + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name, "config" = { "startAfter" = { "_data" : "8266C71562000000012B0429296E1404" } } } }, { "$emit" = { "connectionName" : "__testLog" } }]) + ``` +3. Delete the existing Atlas Stream Processor and then create a new one with the updated `pipeline` value. + +## Example Usages + +```terraform +resource "mongodbatlas_stream_instance" "example" { + project_id = var.project_id + instance_name = "InstanceName" + data_process_region = { + region = "VIRGINIA_USA" + cloud_provider = "AWS" + } +} + +resource "mongodbatlas_stream_connection" "example-sample" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "sample_stream_solar" + type = "Sample" +} + +resource "mongodbatlas_stream_connection" "example-cluster" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "ClusterConnection" + type = "Cluster" + cluster_name = var.cluster_name + db_role_to_execute = { + role = "atlasAdmin" + type = "BUILT_IN" + } +} + +resource "mongodbatlas_stream_connection" "example-kafka" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "KafkaPlaintextConnection" + type = "Kafka" + authentication = { + mechanism = "PLAIN" + username = var.kafka_username + password = var.kafka_password + } + bootstrap_servers = "localhost:9092,localhost:9092" + config = { + "auto.offset.reset" : "earliest" + } + security = { + protocol = "PLAINTEXT" + } +} + +resource "mongodbatlas_stream_processor" "stream-processor-sample-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "sampleProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-sample.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "CREATED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-cluster-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "clusterProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "STARTED" +} + +resource 
"mongodbatlas_stream_processor" "stream-processor-kafka-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "kafkaProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : resource.mongodbatlas_stream_connection.example-kafka.connection_name, "topic" : "example_topic" } }]) + state = "CREATED" + options = { + dlq = { + coll = "exampleColumn" + connection_name = resource.mongodbatlas_stream_connection.example-cluster.connection_name + db = "exampleDb" + } + } +} + +data "mongodbatlas_stream_processors" "example-stream-processors" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name +} + +data "mongodbatlas_stream_processor" "example-stream-processor" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = mongodbatlas_stream_processor.stream-processor-sample-example.processor_name +} + +# example making use of data sources +output "stream_processors_state" { + value = data.mongodbatlas_stream_processor.example-stream-processor.state +} + +output "stream_processors_results" { + value = data.mongodbatlas_stream_processors.example-stream-processors.results +} +``` + + +## Schema + +### Required + +- `instance_name` (String) Human-readable label that identifies the stream instance. +- `pipeline` (String) Stream aggregation pipeline you want to apply to your streaming data. [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/#std-label-stream-aggregation) contain more information. Using [jsonencode](https://developer.hashicorp.com/terraform/language/functions/jsonencode) is recommended when setting this attribute. For more details see the [Aggregation Pipelines Documentation](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/). +- `processor_name` (String) Human-readable label that identifies the stream processor. +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. + +**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. + +### Optional + +- `options` (Attributes) Optional configuration for the stream processor. (see [below for nested schema](#nestedatt--options)) +- `state` (String) The state of the stream processor. Commonly occurring states are 'CREATED', 'STARTED', 'STOPPED' and 'FAILED'. Used to start or stop the Stream Processor. Valid values are `CREATED`, `STARTED` or `STOPPED`. When a Stream Processor is created without specifying the state, it will default to `CREATED` state. + +**NOTE** When a stream processor is created, the only valid states are CREATED or STARTED. A stream processor can be automatically started when creating it if the state is set to STARTED. + +### Read-Only + +- `id` (String) Unique 24-hexadecimal character string that identifies the stream processor. +- `stats` (String) The stats associated with the stream processor. 
Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/manage-stream-processor/#view-statistics-of-a-stream-processor) for more information. + + +### Nested Schema for `options` + +Required: + +- `dlq` (Attributes) Dead letter queue for the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/reference/glossary/#std-term-dead-letter-queue) for more information. (see [below for nested schema](#nestedatt--options--dlq)) + + +### Nested Schema for `options.dlq` + +Required: + +- `coll` (String) Name of the collection to use for the DLQ. +- `connection_name` (String) Name of the connection to write DLQ messages to. Must be an Atlas connection. +- `db` (String) Name of the database to use for the DLQ. + +# Import +Stream Processor resource can be imported using the Project ID, Stream Instance name and Stream Processor name, in the format `INSTANCE_NAME-PROJECT_ID-PROCESSOR_NAME`, e.g. +``` +$ terraform import mongodbatlas_stream_processor.test yourInstanceName-6117ac2fe2a3d04ed27a987v-yourProcessorName +``` + +For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation. diff --git a/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md b/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md deleted file mode 100644 index a5b4e25d14..0000000000 --- a/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md +++ /dev/null @@ -1,22 +0,0 @@ -# MongoDB Atlas Provider - Atlas Streams with Terraform - -Atlas Stream Processing is composed of multiple components, and users can leverage Terraform to define a subset of these. To obtain more details on each of the components please refer to the [Atlas Stream Processing Documentation](https://www.mongodb.com/docs/atlas/atlas-sp/overview/#atlas-stream-processing-overview). - -### Resources supported by Terraform - -- `mongodbatlas_stream_instance`: Enables creating, modifying, and deleting Stream Instances. as part of this resource, a computed `hostnames` attribute is available for connecting to the created instance. -- `mongodbatlas_stream_connection`: Enables creating, modifying, and deleting Stream Instance Connections, which serve as data sources and sinks for your instance. - -### Managing Stream Processors - -Once a stream instance and its connections have been defined, `Stream Processors` can be created to define how your data will be processed in your instance. There are currently no resources defined in Terraform to provide this configuration. To obtain information on how this can be configured refer to [Manage Stream Processors](https://www.mongodb.com/docs/atlas/atlas-sp/manage-stream-processor/#manage-stream-processors). - -Connect to your stream instance defined in terraform using the following code block: - -``` -output "stream_instance_hostname" { - value = mongodbatlas_stream_instance.test.hostnames -} -``` - -This value can then be used to connect to the stream instance using `mongosh`, as described in the [Get Started Tutorial](https://www.mongodb.com/docs/atlas/atlas-sp/tutorial/). 
diff --git a/examples/mongodbatlas_stream_processor/README.md b/examples/mongodbatlas_stream_processor/README.md new file mode 100644 index 0000000000..91578fe4cb --- /dev/null +++ b/examples/mongodbatlas_stream_processor/README.md @@ -0,0 +1,14 @@ +# MongoDB Atlas Provider - Atlas Stream Processor defined in a Project + +This example shows how to use Atlas Stream Processors in Terraform. It requires an existing project and cluster, which are referenced through the `project_id` and `cluster_name` variables. + +You must set the following variables: + +- `public_key`: Atlas public key +- `private_key`: Atlas private key +- `project_id`: Unique 24-hexadecimal digit string that identifies the project where the stream instance will be created. +- `kafka_username`: Username used for connecting to your external Kafka Cluster. +- `kafka_password`: Password used for connecting to your external Kafka Cluster. +- `cluster_name`: Name of an existing cluster that will be used for creating a connection. + +To learn more, see the [Stream Processor Documentation](https://www.mongodb.com/docs/atlas/atlas-stream-processing/manage-stream-processor/). \ No newline at end of file diff --git a/examples/mongodbatlas_stream_processor/atlas-streams-user-journey.md b/examples/mongodbatlas_stream_processor/atlas-streams-user-journey.md new file mode 100644 index 0000000000..1ad2592260 --- /dev/null +++ b/examples/mongodbatlas_stream_processor/atlas-streams-user-journey.md @@ -0,0 +1,9 @@ +# MongoDB Atlas Provider - Atlas Streams with Terraform + +Atlas Stream Processing is composed of multiple components, and users can leverage Terraform to define a subset of these. To obtain more details on each of the components, please refer to the [Atlas Stream Processing Documentation](https://www.mongodb.com/docs/atlas/atlas-sp/overview/#atlas-stream-processing-overview). + +### Resources supported by Terraform + +- `mongodbatlas_stream_instance`: Enables creating, modifying, and deleting Stream Instances. As part of this resource, a computed `hostnames` attribute is available for connecting to the created instance. +- `mongodbatlas_stream_connection`: Enables creating, modifying, and deleting Stream Instance Connections, which serve as data sources and sinks for your instance. +- `mongodbatlas_stream_processor`: Enables creating, deleting, starting and stopping a Stream Processor, which defines how your data will be processed in your instance (see the sketch below). 
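+
+The sketch below is illustrative only (the instance, connection, and processor names are assumptions mirroring the example configurations in this repository) and shows how the three resources are typically chained together through their computed attributes:
+
+```terraform
+resource "mongodbatlas_stream_processor" "sketch" {
+  project_id     = var.project_id
+  instance_name  = mongodbatlas_stream_instance.example.instance_name
+  processor_name = "sketchProcessor"
+  # Read from the sample connection and emit every document to the built-in test log.
+  pipeline = jsonencode([
+    { "$source" = { "connectionName" = mongodbatlas_stream_connection.example-sample.connection_name } },
+    { "$emit" = { "connectionName" = "__testLog" } }
+  ])
+  state = "STARTED"
+}
+```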
diff --git a/examples/mongodbatlas_stream_processor/main.tf b/examples/mongodbatlas_stream_processor/main.tf new file mode 100644 index 0000000000..af6839aab4 --- /dev/null +++ b/examples/mongodbatlas_stream_processor/main.tf @@ -0,0 +1,97 @@ +resource "mongodbatlas_stream_instance" "example" { + project_id = var.project_id + instance_name = "InstanceName" + data_process_region = { + region = "VIRGINIA_USA" + cloud_provider = "AWS" + } +} + +resource "mongodbatlas_stream_connection" "example-sample" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "sample_stream_solar" + type = "Sample" +} + +resource "mongodbatlas_stream_connection" "example-cluster" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "ClusterConnection" + type = "Cluster" + cluster_name = var.cluster_name + db_role_to_execute = { + role = "atlasAdmin" + type = "BUILT_IN" + } +} + +resource "mongodbatlas_stream_connection" "example-kafka" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + connection_name = "KafkaPlaintextConnection" + type = "Kafka" + authentication = { + mechanism = "PLAIN" + username = var.kafka_username + password = var.kafka_password + } + bootstrap_servers = "localhost:9092,localhost:9092" + config = { + "auto.offset.reset" : "earliest" + } + security = { + protocol = "PLAINTEXT" + } +} + +resource "mongodbatlas_stream_processor" "stream-processor-sample-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "sampleProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-sample.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "CREATED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-cluster-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "clusterProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : "__testLog" } }]) + state = "STARTED" +} + +resource "mongodbatlas_stream_processor" "stream-processor-kafka-example" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = "kafkaProcessorName" + pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name } }, { "$emit" = { "connectionName" : resource.mongodbatlas_stream_connection.example-kafka.connection_name, "topic" : "example_topic" } }]) + state = "CREATED" + options = { + dlq = { + coll = "exampleColumn" + connection_name = resource.mongodbatlas_stream_connection.example-cluster.connection_name + db = "exampleDb" + } + } +} + +data "mongodbatlas_stream_processors" "example-stream-processors" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name +} + +data "mongodbatlas_stream_processor" "example-stream-processor" { + project_id = var.project_id + instance_name = mongodbatlas_stream_instance.example.instance_name + processor_name = mongodbatlas_stream_processor.stream-processor-sample-example.processor_name +} + +# example making use of data sources +output "stream_processors_state" { + 
value = data.mongodbatlas_stream_processor.example-stream-processor.state +} + +output "stream_processors_results" { + value = data.mongodbatlas_stream_processors.example-stream-processors.results +} diff --git a/examples/mongodbatlas_stream_processor/provider.tf b/examples/mongodbatlas_stream_processor/provider.tf new file mode 100644 index 0000000000..18c430e061 --- /dev/null +++ b/examples/mongodbatlas_stream_processor/provider.tf @@ -0,0 +1,4 @@ +provider "mongodbatlas" { + public_key = var.public_key + private_key = var.private_key +} diff --git a/examples/mongodbatlas_stream_processor/variables.tf b/examples/mongodbatlas_stream_processor/variables.tf new file mode 100644 index 0000000000..349ed8fbfa --- /dev/null +++ b/examples/mongodbatlas_stream_processor/variables.tf @@ -0,0 +1,29 @@ +variable "public_key" { + description = "Public API key to authenticate to Atlas" + type = string +} + +variable "private_key" { + description = "Private API key to authenticate to Atlas" + type = string +} + +variable "project_id" { + description = "Unique 24-hexadecimal digit string that identifies your project" + type = string +} + +variable "kafka_username" { + description = "Username for connecting to your Kafka cluster" + type = string +} + +variable "kafka_password" { + description = "Password for connecting to your Kafka cluster" + type = string +} + +variable "cluster_name" { + description = "Name of an existing cluster in your project that will be used to create a stream connection" + type = string +} diff --git a/examples/mongodbatlas_stream_processor/versions.tf b/examples/mongodbatlas_stream_processor/versions.tf new file mode 100644 index 0000000000..9b4be6c14c --- /dev/null +++ b/examples/mongodbatlas_stream_processor/versions.tf @@ -0,0 +1,9 @@ +terraform { + required_providers { + mongodbatlas = { + source = "mongodb/mongodbatlas" + version = "~> 1.18" + } + } + required_version = ">= 1.0" +} diff --git a/internal/common/fwtypes/json_string.go b/internal/common/fwtypes/json_string.go new file mode 100644 index 0000000000..6423d5fc1c --- /dev/null +++ b/internal/common/fwtypes/json_string.go @@ -0,0 +1,141 @@ +package fwtypes + +import ( + "context" + "encoding/json" + "fmt" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" + "github.com/hashicorp/terraform-plugin-go/tftypes" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc" +) + +var ( + _ basetypes.StringTypable = (*jsonStringType)(nil) + _ basetypes.StringValuable = (*JSONString)(nil) + _ basetypes.StringValuableWithSemanticEquals = (*JSONString)(nil) +) + +type jsonStringType struct { + basetypes.StringType +} + +var ( + JSONStringType = jsonStringType{} +) + +func (t jsonStringType) Equal(o attr.Type) bool { + other, ok := o.(jsonStringType) + if !ok { + return false + } + return t.StringType.Equal(other.StringType) +} + +func (t jsonStringType) String() string { + return "jsonStringType" +} + +func (t jsonStringType) ValueFromString(_ context.Context, in types.String) (basetypes.StringValuable, diag.Diagnostics) { + var diags diag.Diagnostics + if in.IsNull() { + return JSONStringNull(), diags + } + if in.IsUnknown() { + return JSONStringUnknown(), diags + } + return JSONString{StringValue: in}, 
diags +} + +func (t jsonStringType) ValueFromTerraform(ctx context.Context, in tftypes.Value) (attr.Value, error) { + attrValue, err := t.StringType.ValueFromTerraform(ctx, in) + if err != nil { + return nil, err + } + stringValue, ok := attrValue.(basetypes.StringValue) + if !ok { + return nil, fmt.Errorf("unexpected value type of %T", attrValue) + } + stringValuable, diags := t.ValueFromString(ctx, stringValue) + if diags.HasError() { + return nil, fmt.Errorf("unexpected error converting StringValue to StringValuable: %v", diags) + } + return stringValuable, nil +} + +func (t jsonStringType) ValueType(context.Context) attr.Value { + return JSONString{} +} + +func (t jsonStringType) Validate(ctx context.Context, in tftypes.Value, attrPath path.Path) diag.Diagnostics { + var diags diag.Diagnostics + if !in.IsKnown() || in.IsNull() { + return diags + } + var value string + err := in.As(&value) + if err != nil { + diags.AddAttributeError( + attrPath, + "Invalid Terraform Value", + "An unexpected error occurred while attempting to convert a Terraform value to a string. "+ + "This generally is an issue with the provider schema implementation. "+ + "Please contact the provider developers.\n\n"+ + "Path: "+attrPath.String()+"\n"+ + "Error: "+err.Error(), + ) + return diags + } + if !json.Valid([]byte(value)) { + diags.AddAttributeError( + attrPath, + "Invalid JSON String Value", + "A string value was provided that is not valid JSON string format (RFC 7159).\n\n"+ + "Path: "+attrPath.String()+"\n"+ + "Given Value: "+value+"\n", + ) + return diags + } + return diags +} + +func JSONStringNull() JSONString { + return JSONString{StringValue: basetypes.NewStringNull()} +} + +func JSONStringUnknown() JSONString { + return JSONString{StringValue: basetypes.NewStringUnknown()} +} + +func JSONStringValue(value string) JSONString { + return JSONString{StringValue: basetypes.NewStringValue(value)} +} + +type JSONString struct { + basetypes.StringValue +} + +func (v JSONString) Equal(o attr.Value) bool { + other, ok := o.(JSONString) + if !ok { + return false + } + return v.StringValue.Equal(other.StringValue) +} + +func (v JSONString) Type(context.Context) attr.Type { + return JSONStringType +} + +func (v JSONString) StringSemanticEquals(_ context.Context, newValuable basetypes.StringValuable) (bool, diag.Diagnostics) { + var diags diag.Diagnostics + newValue, ok := newValuable.(JSONString) + if !ok { + return false, diags + } + return schemafunc.EqualJSON(v.ValueString(), newValue.ValueString(), "JsonString"), diags +} diff --git a/internal/common/schemafunc/json.go b/internal/common/schemafunc/json.go new file mode 100644 index 0000000000..de02d80012 --- /dev/null +++ b/internal/common/schemafunc/json.go @@ -0,0 +1,28 @@ +package schemafunc + +import ( + "encoding/json" + "log" + "reflect" +) + +func EqualJSON(old, newStr, errContext string) bool { + var j, j2 any + + if old == "" { + old = "{}" + } + + if newStr == "" { + newStr = "{}" + } + if err := json.Unmarshal([]byte(old), &j); err != nil { + log.Printf("[ERROR] cannot unmarshal old %s json %v", errContext, err) + return false + } + if err := json.Unmarshal([]byte(newStr), &j2); err != nil { + log.Printf("[ERROR] cannot unmarshal new %s json %v", errContext, err) + return false + } + return reflect.DeepEqual(&j, &j2) +} diff --git a/internal/common/schemafunc/json_test.go b/internal/common/schemafunc/json_test.go new file mode 100644 index 0000000000..9dd3051305 --- /dev/null +++ b/internal/common/schemafunc/json_test.go @@ -0,0 +1,30 @@ +package 
schemafunc_test + +import ( + "testing" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc" +) + +func Test_EqualJSON(t *testing.T) { + testCases := map[string]struct { + old string + new string + expected bool + }{ + "empty strings": {"", "", true}, + "different objects": {`{"a": 1}`, `{"b": 2}`, false}, + "invalid object": {`{{"a": 1}`, `{"b": 2}`, false}, + "double invalid object": {`{{"a": 1}`, `{"b": 2}}`, false}, + "equal objects with different order": {`{"a": 1, "b": 2}`, `{"b": 2, "a": 1}`, true}, + "equal objects whitespace": {`{"a": 1, "b": 2}`, `{"a":1,"b":2}`, true}, + } + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + actual := schemafunc.EqualJSON(tc.old, tc.new, "vector search index") + if actual != tc.expected { + t.Errorf("Expected: %v, got: %v", tc.expected, actual) + } + }) + } +} diff --git a/internal/provider/provider.go b/internal/provider/provider.go index 6c2341da86..7556324185 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -39,6 +39,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/searchdeployment" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamconnection" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streaminstance" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor" "github.com/mongodb/terraform-provider-mongodbatlas/version" ) @@ -436,12 +437,15 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.D streamconnection.PluralDataSource, controlplaneipaddresses.DataSource, projectipaddresses.DataSource, + streamprocessor.DataSource, + streamprocessor.PluralDataSource, encryptionatrest.DataSource, } previewDataSources := []func() datasource.DataSource{ // Data sources not yet in GA encryptionatrestprivateendpoint.DataSource, encryptionatrestprivateendpoint.PluralDataSource, } + if providerEnablePreview { dataSources = append(dataSources, previewDataSources...) 
} @@ -459,6 +463,7 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resou pushbasedlogexport.Resource, streaminstance.Resource, streamconnection.Resource, + streamprocessor.Resource, } previewResources := []func() resource.Resource{ // Resources not yet in GA encryptionatrestprivateendpoint.Resource, diff --git a/internal/service/searchindex/model_search_index.go b/internal/service/searchindex/model_search_index.go index 16227ea7ae..fdad9a06e0 100644 --- a/internal/service/searchindex/model_search_index.go +++ b/internal/service/searchindex/model_search_index.go @@ -4,14 +4,13 @@ import ( "bytes" "context" "encoding/json" - "log" - "reflect" "strconv" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc" "go.mongodb.org/atlas-sdk/v20240805003/admin" ) @@ -115,27 +114,7 @@ func UnmarshalStoredSource(str string) (any, diag.Diagnostics) { } func diffSuppressJSON(k, old, newStr string, d *schema.ResourceData) bool { - var j, j2 any - - if old == "" { - old = "{}" - } - - if newStr == "" { - newStr = "{}" - } - - if err := json.Unmarshal([]byte(old), &j); err != nil { - log.Printf("[ERROR] cannot unmarshal old search index analyzer json %v", err) - } - if err := json.Unmarshal([]byte(newStr), &j2); err != nil { - log.Printf("[ERROR] cannot unmarshal new search index analyzer json %v", err) - } - if !reflect.DeepEqual(&j, &j2) { - return false - } - - return true + return schemafunc.EqualJSON(old, newStr, "vector search index") } func resourceSearchIndexRefreshFunc(ctx context.Context, clusterName, projectID, indexID string, connV2 *admin.APIClient) retry.StateRefreshFunc { diff --git a/internal/service/streamprocessor/data_source.go b/internal/service/streamprocessor/data_source.go new file mode 100644 index 0000000000..958c12bcbf --- /dev/null +++ b/internal/service/streamprocessor/data_source.go @@ -0,0 +1,54 @@ +package streamprocessor + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" +) + +var _ datasource.DataSource = &StreamProccesorDS{} +var _ datasource.DataSourceWithConfigure = &StreamProccesorDS{} + +func DataSource() datasource.DataSource { + return &StreamProccesorDS{ + DSCommon: config.DSCommon{ + DataSourceName: StreamProcessorName, + }, + } +} + +type StreamProccesorDS struct { + config.DSCommon +} + +func (d *StreamProccesorDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + // TODO: Schema and model must be defined in data_source_schema.go. Details on scaffolding this file found in contributing/development-best-practices.md under "Scaffolding Schema and Model Definitions" + resp.Schema = DataSourceSchema(ctx) +} + +func (d *StreamProccesorDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + var streamProccesorConfig TFStreamProcessorDSModel + resp.Diagnostics.Append(req.Config.Get(ctx, &streamProccesorConfig)...) 
+ if resp.Diagnostics.HasError() { + return + } + + connV2 := d.Client.AtlasV2 + projectID := streamProccesorConfig.ProjectID.ValueString() + instanceName := streamProccesorConfig.InstanceName.ValueString() + processorName := streamProccesorConfig.ProcessorName.ValueString() + apiResp, _, err := connV2.StreamsApi.GetStreamProcessor(ctx, projectID, instanceName, processorName).Execute() + + if err != nil { + resp.Diagnostics.AddError("error fetching resource", err.Error()) + return + } + + newStreamTFStreamprocessorDSModelModel, diags := NewTFStreamprocessorDSModel(ctx, projectID, instanceName, apiResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newStreamTFStreamprocessorDSModelModel)...) +} diff --git a/internal/service/streamprocessor/data_source_plural.go b/internal/service/streamprocessor/data_source_plural.go new file mode 100644 index 0000000000..faec4e6a82 --- /dev/null +++ b/internal/service/streamprocessor/data_source_plural.go @@ -0,0 +1,43 @@ +package streamprocessor + +import ( + "context" + "net/http" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema" + "go.mongodb.org/atlas-sdk/v20240805003/admin" +) + +func (d *streamProcessorsDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + var streamConnectionsConfig TFStreamProcessorsDSModel + resp.Diagnostics.Append(req.Config.Get(ctx, &streamConnectionsConfig)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := d.Client.AtlasV2 + projectID := streamConnectionsConfig.ProjectID.ValueString() + instanceName := streamConnectionsConfig.InstanceName.ValueString() + + params := admin.ListStreamProcessorsApiParams{ + GroupId: projectID, + TenantName: instanceName, + } + sdkProcessors, err := dsschema.AllPages(ctx, func(ctx context.Context, pageNum int) (dsschema.PaginateResponse[admin.StreamsProcessorWithStats], *http.Response, error) { + request := connV2.StreamsApi.ListStreamProcessorsWithParams(ctx, ¶ms) + request = request.PageNum(pageNum) + return request.Execute() + }) + if err != nil { + resp.Diagnostics.AddError("error fetching results", err.Error()) + return + } + + newStreamConnectionsModel, diags := NewTFStreamProcessors(ctx, &streamConnectionsConfig, sdkProcessors) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newStreamConnectionsModel)...) 
+} diff --git a/internal/service/streamprocessor/data_source_plural_schema.go b/internal/service/streamprocessor/data_source_plural_schema.go new file mode 100644 index 0000000000..aec8a0e560 --- /dev/null +++ b/internal/service/streamprocessor/data_source_plural_schema.go @@ -0,0 +1,57 @@ +package streamprocessor + +import ( + "context" + "fmt" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" +) + +var _ datasource.DataSource = &StreamProccesorDS{} +var _ datasource.DataSourceWithConfigure = &StreamProccesorDS{} + +func PluralDataSource() datasource.DataSource { + return &streamProcessorsDS{ + DSCommon: config.DSCommon{ + DataSourceName: fmt.Sprintf("%ss", StreamProcessorName), + }, + } +} + +type streamProcessorsDS struct { + config.DSCommon +} + +func (d *streamProcessorsDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "project_id": schema.StringAttribute{ + Required: true, + Description: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. 
The resource and corresponding endpoints use the term groups.", + }, + "instance_name": schema.StringAttribute{ + Required: true, + Description: "Human-readable label that identifies the stream instance.", + MarkdownDescription: "Human-readable label that identifies the stream instance.", + }, + "results": schema.ListNestedAttribute{ + Computed: true, + NestedObject: schema.NestedAttributeObject{ + Attributes: DSAttributes(false), + }, + Description: "Returns all Stream Processors within the specified stream instance.\n\nTo use this resource, the requesting API Key must have the Project Owner role or Project Stream Processing Owner role.", + MarkdownDescription: "Returns all Stream Processors within the specified stream instance.\n\nTo use this resource, the requesting API Key must have the Project Owner role or Project Stream Processing Owner role.", + }, + }, + } +} + +type TFStreamProcessorsDSModel struct { + ProjectID types.String `tfsdk:"project_id"` + InstanceName types.String `tfsdk:"instance_name"` + Results []TFStreamProcessorDSModel `tfsdk:"results"` +} diff --git a/internal/service/streamprocessor/data_source_schema.go b/internal/service/streamprocessor/data_source_schema.go new file mode 100644 index 0000000000..7e2aa23191 --- /dev/null +++ b/internal/service/streamprocessor/data_source_schema.go @@ -0,0 +1,70 @@ +package streamprocessor + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/types" + + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" +) + +func DataSourceSchema(ctx context.Context) schema.Schema { + return schema.Schema{ + Attributes: DSAttributes(true), + } +} + +func DSAttributes(withArguments bool) map[string]schema.Attribute { + return map[string]schema.Attribute{ + "id": schema.StringAttribute{ + Computed: true, + Description: "Unique 24-hexadecimal character string that identifies the stream processor.", + MarkdownDescription: "Unique 24-hexadecimal character string that identifies the stream processor.", + }, + "instance_name": schema.StringAttribute{ + Required: withArguments, + Computed: !withArguments, + Description: "Human-readable label that identifies the stream instance.", + MarkdownDescription: "Human-readable label that identifies the stream instance.", + }, + "pipeline": schema.StringAttribute{ + Computed: true, + Description: "Stream aggregation pipeline you want to apply to your streaming data.", + MarkdownDescription: "Stream aggregation pipeline you want to apply to your streaming data.", + }, + "processor_name": schema.StringAttribute{ + Required: withArguments, + Computed: !withArguments, + Description: "Human-readable label that identifies the stream processor.", + MarkdownDescription: "Human-readable label that identifies the stream processor.", + }, + "project_id": schema.StringAttribute{ + Required: withArguments, + Computed: !withArguments, + Description: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.", + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. 
Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.", + }, + "state": schema.StringAttribute{ + Computed: true, + Description: "The state of the stream processor.", + MarkdownDescription: "The state of the stream processor.", + }, + "stats": schema.StringAttribute{ + Computed: true, + Description: "The stats associated with the stream processor.", + MarkdownDescription: "The stats associated with the stream processor.", + }, + "options": optionsSchema(true), + } +} + +type TFStreamProcessorDSModel struct { + ID types.String `tfsdk:"id"` + InstanceName types.String `tfsdk:"instance_name"` + Options types.Object `tfsdk:"options"` + Pipeline types.String `tfsdk:"pipeline"` + ProcessorName types.String `tfsdk:"processor_name"` + ProjectID types.String `tfsdk:"project_id"` + State types.String `tfsdk:"state"` + Stats types.String `tfsdk:"stats"` +} diff --git a/internal/service/streamprocessor/main_test.go b/internal/service/streamprocessor/main_test.go new file mode 100644 index 0000000000..4c663869b2 --- /dev/null +++ b/internal/service/streamprocessor/main_test.go @@ -0,0 +1,15 @@ +package streamprocessor_test + +import ( + "os" + "testing" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" +) + +func TestMain(m *testing.M) { + cleanup := acc.SetupSharedResources() + exitCode := m.Run() + cleanup() + os.Exit(exitCode) +} diff --git a/internal/service/streamprocessor/model.go b/internal/service/streamprocessor/model.go new file mode 100644 index 0000000000..e6e4861b17 --- /dev/null +++ b/internal/service/streamprocessor/model.go @@ -0,0 +1,184 @@ +package streamprocessor + +import ( + "context" + "encoding/json" + + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/fwtypes" + "go.mongodb.org/atlas-sdk/v20240805003/admin" +) + +func NewStreamProcessorReq(ctx context.Context, plan *TFStreamProcessorRSModel) (*admin.StreamsProcessor, diag.Diagnostics) { + pipeline, diags := convertPipelineToSdk(plan.Pipeline.ValueString()) + if diags != nil { + return nil, diags + } + streamProcessor := &admin.StreamsProcessor{ + Name: plan.ProcessorName.ValueStringPointer(), + Pipeline: &pipeline, + } + + if !plan.Options.IsNull() && !plan.Options.IsUnknown() { + optionsModel := &TFOptionsModel{} + if diags := plan.Options.As(ctx, optionsModel, basetypes.ObjectAsOptions{}); diags.HasError() { + return nil, diags + } + dlqModel := &TFDlqModel{} + if diags := optionsModel.Dlq.As(ctx, dlqModel, basetypes.ObjectAsOptions{}); diags.HasError() { + return nil, diags + } + streamProcessor.Options = &admin.StreamsOptions{ + Dlq: &admin.StreamsDLQ{ + Coll: dlqModel.Coll.ValueStringPointer(), + ConnectionName: dlqModel.ConnectionName.ValueStringPointer(), + Db: dlqModel.DB.ValueStringPointer(), + }, + } + } + + return streamProcessor, nil +} + +func NewStreamProcessorWithStats(ctx context.Context, projectID, instanceName string, apiResp *admin.StreamsProcessorWithStats) (*TFStreamProcessorRSModel, diag.Diagnostics) { + if apiResp == nil { + return nil, 
diag.Diagnostics{diag.NewErrorDiagnostic("streamProcessor API response is nil", "")} + } + pipelineTF, diags := convertPipelineToTF(apiResp.GetPipeline()) + if diags.HasError() { + return nil, diags + } + statsTF, diags := convertStatsToTF(apiResp.GetStats()) + if diags.HasError() { + return nil, diags + } + optionsTF, diags := ConvertOptionsToTF(ctx, apiResp.Options) + if diags.HasError() { + return nil, diags + } + tfModel := &TFStreamProcessorRSModel{ + InstanceName: types.StringPointerValue(&instanceName), + Options: *optionsTF, + Pipeline: pipelineTF, + ProcessorID: types.StringPointerValue(&apiResp.Id), + ProcessorName: types.StringPointerValue(&apiResp.Name), + ProjectID: types.StringPointerValue(&projectID), + State: types.StringPointerValue(&apiResp.State), + Stats: statsTF, + } + return tfModel, nil +} + +func NewTFStreamprocessorDSModel(ctx context.Context, projectID, instanceName string, apiResp *admin.StreamsProcessorWithStats) (*TFStreamProcessorDSModel, diag.Diagnostics) { + if apiResp == nil { + return nil, diag.Diagnostics{diag.NewErrorDiagnostic("streamProcessor API response is nil", "")} + } + pipelineTF, diags := convertPipelineToTF(apiResp.GetPipeline()) + if diags.HasError() { + return nil, diags + } + statsTF, diags := convertStatsToTF(apiResp.GetStats()) + if diags.HasError() { + return nil, diags + } + optionsTF, diags := ConvertOptionsToTF(ctx, apiResp.Options) + if diags.HasError() { + return nil, diags + } + tfModel := &TFStreamProcessorDSModel{ + ID: types.StringPointerValue(&apiResp.Id), + InstanceName: types.StringPointerValue(&instanceName), + Options: *optionsTF, + Pipeline: types.StringValue(pipelineTF.ValueString()), + ProcessorName: types.StringPointerValue(&apiResp.Name), + ProjectID: types.StringPointerValue(&projectID), + State: types.StringPointerValue(&apiResp.State), + Stats: statsTF, + } + return tfModel, nil +} + +func ConvertOptionsToTF(ctx context.Context, options *admin.StreamsOptions) (*types.Object, diag.Diagnostics) { + if options == nil || !options.HasDlq() { + optionsTF := types.ObjectNull(OptionsObjectType.AttributeTypes()) + return &optionsTF, nil + } + dlqTF, diags := convertDlqToTF(ctx, options.Dlq) + if diags.HasError() { + return nil, diags + } + optionsTF := &TFOptionsModel{ + Dlq: *dlqTF, + } + optionsObject, diags := types.ObjectValueFrom(ctx, OptionsObjectType.AttributeTypes(), optionsTF) + if diags.HasError() { + return nil, diags + } + return &optionsObject, nil +} + +func convertDlqToTF(ctx context.Context, dlq *admin.StreamsDLQ) (*types.Object, diag.Diagnostics) { + if dlq == nil { + dlqTF := types.ObjectNull(DlqObjectType.AttributeTypes()) + return &dlqTF, nil + } + dlqModel := TFDlqModel{ + Coll: types.StringPointerValue(dlq.Coll), + ConnectionName: types.StringPointerValue(dlq.ConnectionName), + DB: types.StringPointerValue(dlq.Db), + } + dlqObject, diags := types.ObjectValueFrom(ctx, DlqObjectType.AttributeTypes(), dlqModel) + if diags.HasError() { + return nil, diags + } + return &dlqObject, nil +} +func convertPipelineToTF(pipeline []any) (fwtypes.JSONString, diag.Diagnostics) { + pipelineJSON, err := json.Marshal(pipeline) + if err != nil { + return fwtypes.JSONStringValue(""), diag.Diagnostics{diag.NewErrorDiagnostic("failed to marshal pipeline", err.Error())} + } + return fwtypes.JSONStringValue(string(pipelineJSON)), nil +} + +func convertStatsToTF(stats any) (types.String, diag.Diagnostics) { + if stats == nil { + return types.StringNull(), nil + } + statsJSON, err := json.Marshal(stats) + if err != nil { + return 
types.StringValue(""), diag.Diagnostics{diag.NewErrorDiagnostic("failed to marshal stats", err.Error())} + } + return types.StringValue(string(statsJSON)), nil +} + +func NewTFStreamProcessors(ctx context.Context, + streamProcessorsConfig *TFStreamProcessorsDSModel, + sdkResults []admin.StreamsProcessorWithStats) (*TFStreamProcessorsDSModel, diag.Diagnostics) { + results := make([]TFStreamProcessorDSModel, len(sdkResults)) + projectID := streamProcessorsConfig.ProjectID.ValueString() + instanceName := streamProcessorsConfig.InstanceName.ValueString() + for i := range sdkResults { + processorModel, diags := NewTFStreamprocessorDSModel(ctx, projectID, instanceName, &sdkResults[i]) + if diags.HasError() { + return nil, diags + } + results[i] = *processorModel + } + return &TFStreamProcessorsDSModel{ + ProjectID: streamProcessorsConfig.ProjectID, + InstanceName: streamProcessorsConfig.InstanceName, + Results: results, + }, nil +} + +func convertPipelineToSdk(pipeline string) ([]any, diag.Diagnostics) { + var pipelineSliceOfMaps []any + err := json.Unmarshal([]byte(pipeline), &pipelineSliceOfMaps) + if err != nil { + return nil, diag.Diagnostics{diag.NewErrorDiagnostic("failed to unmarshal pipeline", err.Error())} + } + return pipelineSliceOfMaps, nil +} diff --git a/internal/service/streamprocessor/model_test.go b/internal/service/streamprocessor/model_test.go new file mode 100644 index 0000000000..68f5733dac --- /dev/null +++ b/internal/service/streamprocessor/model_test.go @@ -0,0 +1,296 @@ +package streamprocessor_test + +import ( + "context" + "encoding/json" + "testing" + + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/fwtypes" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor" + "github.com/stretchr/testify/assert" + "go.mongodb.org/atlas-sdk/v20240805003/admin" +) + +var ( + projectID = "661fe3ad234b02027dabcabc" + instanceName = "test-instance-name" + pipelineStageSourceSample = map[string]any{ + "$source": map[string]any{ + "connectionName": "sample_stream_solar", + }, + } + pipelineStageEmitLog = map[string]any{ + "$emit": map[string]any{ + "connectionName": "__testLog", + }, + } + processorName = "processor1" + processorID = "66b39806187592e8d721215d" + stateCreated = streamprocessor.CreatedState + stateStarted = streamprocessor.StartedState + streamOptionsExample = admin.StreamsOptions{ + Dlq: &admin.StreamsDLQ{ + Coll: conversion.StringPtr("testColl"), + ConnectionName: conversion.StringPtr("testConnection"), + Db: conversion.StringPtr("testDB"), + }, + } +) + +var statsExample = ` +{ + "dlqMessageCount": 0, + "dlqMessageSize": 0.0, + "inputMessageCount": 12, + "inputMessageSize": 4681.0, + "memoryTrackerBytes": 0.0, + "name": "processor1", + "ok": 1.0, + "changeStreamState": { "_data": "8266C37388000000012B0429296E1404" }, + "operatorStats": [ + { + "dlqMessageCount": 0, + "dlqMessageSize": 0.0, + "executionTimeSecs": 0, + "inputMessageCount": 12, + "inputMessageSize": 4681.0, + "maxMemoryUsage": 0.0, + "name": "SampleDataSourceOperator", + "outputMessageCount": 12, + "outputMessageSize": 0.0, + "stateSize": 0.0, + "timeSpentMillis": 0 + }, + { + "dlqMessageCount": 0, + "dlqMessageSize": 0.0, + "executionTimeSecs": 0, + "inputMessageCount": 12, + 
"inputMessageSize": 4681.0, + "maxMemoryUsage": 0.0, + "name": "LogSinkOperator", + "outputMessageCount": 12, + "outputMessageSize": 4681.0, + "stateSize": 0.0, + "timeSpentMillis": 0 + } + ], + "outputMessageCount": 12, + "outputMessageSize": 4681.0, + "processorId": "66b3941109bbccf048ccff06", + "scaleFactor": 1, + "stateSize": 0.0, + "status": "running" +}` + +func streamProcessorWithStats(t *testing.T, options *admin.StreamsOptions) *admin.StreamsProcessorWithStats { + t.Helper() + processor := admin.NewStreamsProcessorWithStats( + processorID, processorName, []any{pipelineStageSourceSample, pipelineStageEmitLog}, stateStarted, + ) + var stats any + err := json.Unmarshal([]byte(statsExample), &stats) + if err != nil { + t.Fatal(err) + } + processor.SetStats(stats) + if options != nil { + processor.SetOptions(*options) + } + return processor +} + +func streamProcessorDSTFModel(t *testing.T, state, stats string, options types.Object) *streamprocessor.TFStreamProcessorDSModel { + t.Helper() + return &streamprocessor.TFStreamProcessorDSModel{ + ID: types.StringValue(processorID), + InstanceName: types.StringValue(instanceName), + Options: options, + Pipeline: types.StringValue("[{\"$source\":{\"connectionName\":\"sample_stream_solar\"}},{\"$emit\":{\"connectionName\":\"__testLog\"}}]"), + ProcessorName: types.StringValue(processorName), + ProjectID: types.StringValue(projectID), + State: conversion.StringNullIfEmpty(state), + Stats: conversion.StringNullIfEmpty(stats), + } +} + +func optionsToTFModel(t *testing.T, options *admin.StreamsOptions) types.Object { + t.Helper() + ctx := context.Background() + result, diags := streamprocessor.ConvertOptionsToTF(ctx, options) + if diags.HasError() { + t.Fatal(diags) + } + assert.NotNil(t, result) + return *result +} + +func TestDSSDKToTFModel(t *testing.T) { + testCases := []struct { + sdkModel *admin.StreamsProcessorWithStats + expectedTFModel *streamprocessor.TFStreamProcessorDSModel + name string + }{ + { + name: "afterCreate", + sdkModel: admin.NewStreamsProcessorWithStats( + processorID, processorName, []any{pipelineStageSourceSample, pipelineStageEmitLog}, stateCreated, + ), + expectedTFModel: streamProcessorDSTFModel(t, stateCreated, "", optionsToTFModel(t, nil)), + }, + { + name: "afterStarted", + sdkModel: streamProcessorWithStats(t, nil), + expectedTFModel: streamProcessorDSTFModel(t, stateStarted, statsExample, optionsToTFModel(t, nil)), + }, + { + name: "withOptions", + sdkModel: streamProcessorWithStats(t, &streamOptionsExample), + expectedTFModel: streamProcessorDSTFModel(t, stateStarted, statsExample, optionsToTFModel(t, &streamOptionsExample)), + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + sdkModel := tc.sdkModel + resultModel, diags := streamprocessor.NewTFStreamprocessorDSModel(context.Background(), projectID, instanceName, sdkModel) + if diags.HasError() { + t.Fatalf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedTFModel.Options, resultModel.Options) + if sdkModel.Stats != nil { + assert.True(t, schemafunc.EqualJSON(resultModel.Pipeline.String(), tc.expectedTFModel.Pipeline.String(), "test stream processor schema")) + var statsResult any + err := json.Unmarshal([]byte(resultModel.Stats.ValueString()), &statsResult) + if err != nil { + t.Fatal(err) + } + assert.Len(t, sdkModel.Stats, 15) + assert.Len(t, statsResult, 15) + } else { + assert.Equal(t, tc.expectedTFModel, resultModel) + } + }) + } +} + +func TestSDKToTFModel(t *testing.T) { + 
testCases := []struct { + sdkModel *admin.StreamsProcessorWithStats + expectedTFModel *streamprocessor.TFStreamProcessorRSModel + name string + }{ + { + name: "afterCreate", + sdkModel: admin.NewStreamsProcessorWithStats( + processorID, processorName, []any{pipelineStageSourceSample, pipelineStageEmitLog}, "CREATED", + ), + expectedTFModel: &streamprocessor.TFStreamProcessorRSModel{ + InstanceName: types.StringValue(instanceName), + Options: types.ObjectNull(streamprocessor.OptionsObjectType.AttrTypes), + ProcessorID: types.StringValue(processorID), + Pipeline: fwtypes.JSONStringValue("[{\"$source\":{\"connectionName\":\"sample_stream_solar\"}},{\"$emit\":{\"connectionName\":\"__testLog\"}}]"), + ProcessorName: types.StringValue(processorName), + ProjectID: types.StringValue(projectID), + State: types.StringValue("CREATED"), + Stats: types.StringNull(), + }, + }, + { + name: "afterStarted", + sdkModel: streamProcessorWithStats(t, nil), + expectedTFModel: &streamprocessor.TFStreamProcessorRSModel{ + InstanceName: types.StringValue(instanceName), + Options: types.ObjectNull(streamprocessor.OptionsObjectType.AttrTypes), + ProcessorID: types.StringValue(processorID), + Pipeline: fwtypes.JSONStringValue("[{\"$source\":{\"connectionName\":\"sample_stream_solar\"}},{\"$emit\":{\"connectionName\":\"__testLog\"}}]"), + ProcessorName: types.StringValue(processorName), + ProjectID: types.StringValue(projectID), + State: types.StringValue("STARTED"), + Stats: types.StringValue(statsExample), + }, + }, + { + name: "withOptions", + sdkModel: streamProcessorWithStats(t, &streamOptionsExample), + expectedTFModel: &streamprocessor.TFStreamProcessorRSModel{ + InstanceName: types.StringValue(instanceName), + Options: optionsToTFModel(t, &streamOptionsExample), + ProcessorID: types.StringValue(processorID), + Pipeline: fwtypes.JSONStringValue("[{\"$source\":{\"connectionName\":\"sample_stream_solar\"}},{\"$emit\":{\"connectionName\":\"__testLog\"}}]"), + ProcessorName: types.StringValue(processorName), + ProjectID: types.StringValue(projectID), + State: types.StringValue("STARTED"), + Stats: types.StringNull(), + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + sdkModel := tc.sdkModel + resultModel, diags := streamprocessor.NewStreamProcessorWithStats(context.Background(), projectID, instanceName, sdkModel) + if diags.HasError() { + t.Fatalf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedTFModel.Options, resultModel.Options) + if sdkModel.Stats != nil { + assert.True(t, schemafunc.EqualJSON(resultModel.Pipeline.String(), tc.expectedTFModel.Pipeline.String(), "test stream processor schema")) + var statsResult any + err := json.Unmarshal([]byte(resultModel.Stats.ValueString()), &statsResult) + if err != nil { + t.Fatal(err) + } + assert.Len(t, sdkModel.Stats, 15) + assert.Len(t, statsResult, 15) + } else { + assert.Equal(t, tc.expectedTFModel, resultModel) + } + }) + } +} +func TestPluralDSSDKToTFModel(t *testing.T) { + testCases := map[string]struct { + sdkModel *admin.PaginatedApiStreamsStreamProcessorWithStats + expectedTFModel *streamprocessor.TFStreamProcessorsDSModel + }{ + "noResults": {sdkModel: &admin.PaginatedApiStreamsStreamProcessorWithStats{ + Results: &[]admin.StreamsProcessorWithStats{}, + TotalCount: admin.PtrInt(0), + }, expectedTFModel: &streamprocessor.TFStreamProcessorsDSModel{ + ProjectID: types.StringValue(projectID), + InstanceName: types.StringValue(instanceName), + Results: 
[]streamprocessor.TFStreamProcessorDSModel{}, + }}, + "oneResult": {sdkModel: &admin.PaginatedApiStreamsStreamProcessorWithStats{ + Results: &[]admin.StreamsProcessorWithStats{*admin.NewStreamsProcessorWithStats( + processorID, processorName, []any{pipelineStageSourceSample, pipelineStageEmitLog}, stateCreated, + )}, + TotalCount: admin.PtrInt(1), + }, expectedTFModel: &streamprocessor.TFStreamProcessorsDSModel{ + ProjectID: types.StringValue(projectID), + InstanceName: types.StringValue(instanceName), + Results: []streamprocessor.TFStreamProcessorDSModel{ + *streamProcessorDSTFModel(t, stateCreated, "", optionsToTFModel(t, nil)), + }, + }}, + } + + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + sdkModel := tc.sdkModel + existingConfig := &streamprocessor.TFStreamProcessorsDSModel{ + ProjectID: types.StringValue(projectID), + InstanceName: types.StringValue(instanceName), + } + resultModel, diags := streamprocessor.NewTFStreamProcessors(context.Background(), existingConfig, sdkModel.GetResults()) + if diags.HasError() { + t.Fatalf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedTFModel, resultModel) + }) + } +} diff --git a/internal/service/streamprocessor/resource.go b/internal/service/streamprocessor/resource.go new file mode 100644 index 0000000000..a83d090591 --- /dev/null +++ b/internal/service/streamprocessor/resource.go @@ -0,0 +1,264 @@ +package streamprocessor + +import ( + "context" + "errors" + "fmt" + "net/http" + "regexp" + + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" + "go.mongodb.org/atlas-sdk/v20240805003/admin" +) + +const StreamProcessorName = "stream_processor" + +var _ resource.ResourceWithConfigure = &streamProcessorRS{} +var _ resource.ResourceWithImportState = &streamProcessorRS{} + +func Resource() resource.Resource { + return &streamProcessorRS{ + RSCommon: config.RSCommon{ + ResourceName: StreamProcessorName, + }, + } +} + +type streamProcessorRS struct { + config.RSCommon +} + +func (r *streamProcessorRS) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = ResourceSchema(ctx) +} + +func (r *streamProcessorRS) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan TFStreamProcessorRSModel + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + if resp.Diagnostics.HasError() { + return + } + + streamProcessorReq, diags := NewStreamProcessorReq(ctx, &plan) + if diags.HasError() { + resp.Diagnostics.Append(diags...) 
+ return + } + + var needsStarting bool + if !plan.State.IsNull() && !plan.State.IsUnknown() { + switch plan.State.ValueString() { + case StartedState: + needsStarting = true + case CreatedState: + needsStarting = false + default: + resp.Diagnostics.AddError("When creating a stream processor, the only valid states are CREATED and STARTED", "") + return + } + } + + connV2 := r.Client.AtlasV2 + projectID := plan.ProjectID.ValueString() + instanceName := plan.InstanceName.ValueString() + processorName := plan.ProcessorName.ValueString() + _, _, err := connV2.StreamsApi.CreateStreamProcessor(ctx, projectID, instanceName, streamProcessorReq).Execute() + + if err != nil { + resp.Diagnostics.AddError("error creating resource", err.Error()) + return + } + + streamProcessorParams := &admin.GetStreamProcessorApiParams{ + GroupId: projectID, + TenantName: instanceName, + ProcessorName: processorName, + } + + streamProcessorResp, err := WaitStateTransition(ctx, streamProcessorParams, connV2.StreamsApi, []string{InitiatingState, CreatingState}, []string{CreatedState}) + if err != nil { + resp.Diagnostics.AddError("Error creating stream processor", err.Error()) + } + + if needsStarting { + _, _, err := connV2.StreamsApi.StartStreamProcessorWithParams(ctx, + &admin.StartStreamProcessorApiParams{ + GroupId: projectID, + TenantName: instanceName, + ProcessorName: processorName, + }, + ).Execute() + if err != nil { + resp.Diagnostics.AddError("Error starting stream processor", err.Error()) + } + streamProcessorResp, err = WaitStateTransition(ctx, streamProcessorParams, connV2.StreamsApi, []string{CreatedState}, []string{StartedState}) + if err != nil { + resp.Diagnostics.AddError("Error changing state of stream processor", err.Error()) + } + } + + newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newStreamProcessorModel)...) +} + +func (r *streamProcessorRS) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + var state TFStreamProcessorRSModel + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := r.Client.AtlasV2 + + projectID := state.ProjectID.ValueString() + instanceName := state.InstanceName.ValueString() + streamProcessor, apiResp, err := connV2.StreamsApi.GetStreamProcessor(ctx, projectID, instanceName, state.ProcessorName.ValueString()).Execute() + if err != nil { + if apiResp != nil && apiResp.StatusCode == http.StatusNotFound { + resp.State.RemoveResource(ctx) + return + } + resp.Diagnostics.AddError("error fetching resource", err.Error()) + return + } + + newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessor) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newStreamProcessorModel)...) +} + +func (r *streamProcessorRS) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + var plan TFStreamProcessorRSModel + var state TFStreamProcessorRSModel + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
+ + if resp.Diagnostics.HasError() { + return + } + + connV2 := r.Client.AtlasV2 + pendingStates := []string{CreatedState} + desiredState := []string{} + projectID := plan.ProjectID.ValueString() + instanceName := plan.InstanceName.ValueString() + processorName := plan.ProcessorName.ValueString() + currentState := state.State.ValueString() + if !updatedStateOnly(&plan, &state) { + resp.Diagnostics.AddError("updating a Stream Processor is not supported", "") + return + } + switch plan.State.ValueString() { + case StartedState: + desiredState = append(desiredState, StartedState) + pendingStates = append(pendingStates, StoppedState) + _, _, err := connV2.StreamsApi.StartStreamProcessorWithParams(ctx, + &admin.StartStreamProcessorApiParams{ + GroupId: projectID, + TenantName: instanceName, + ProcessorName: processorName, + }, + ).Execute() + if err != nil { + resp.Diagnostics.AddError("Error starting stream processor", err.Error()) + } + case StoppedState: + if currentState != StartedState { + resp.Diagnostics.AddError(fmt.Sprintf("Stream Processor must be in %s state to transition to %s state", StartedState, StoppedState), "") + return + } + desiredState = append(desiredState, StoppedState) + pendingStates = append(pendingStates, StartedState) + _, _, err := connV2.StreamsApi.StopStreamProcessorWithParams(ctx, + &admin.StopStreamProcessorApiParams{ + GroupId: projectID, + TenantName: instanceName, + ProcessorName: processorName, + }, + ).Execute() + if err != nil { + resp.Diagnostics.AddError("Error stopping stream processor", err.Error()) + } + default: + resp.Diagnostics.AddError("transitions to states other than STARTED or STOPPED are not supported", "") + return + } + + requestParams := &admin.GetStreamProcessorApiParams{ + GroupId: projectID, + TenantName: instanceName, + ProcessorName: processorName, + } + + streamProcessorResp, err := WaitStateTransition(ctx, requestParams, connV2.StreamsApi, pendingStates, desiredState) + if err != nil { + resp.Diagnostics.AddError("Error changing state of stream processor", err.Error()) + } + + newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newStreamProcessorModel)...) +} + +func (r *streamProcessorRS) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + var streamProcessorState *TFStreamProcessorRSModel + resp.Diagnostics.Append(req.State.Get(ctx, &streamProcessorState)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := r.Client.AtlasV2 + if _, err := connV2.StreamsApi.DeleteStreamProcessor(ctx, streamProcessorState.ProjectID.ValueString(), streamProcessorState.InstanceName.ValueString(), streamProcessorState.ProcessorName.ValueString()).Execute(); err != nil { + resp.Diagnostics.AddError("error deleting resource", err.Error()) + return + } +} + +func (r *streamProcessorRS) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + projectID, instanceName, processorName, err := splitImportID(req.ID) + if err != nil { + resp.Diagnostics.AddError("error splitting import ID", err.Error()) + return + } + + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("project_id"), projectID)...) + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("instance_name"), instanceName)...) 
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("processor_name"), processorName)...)
+}
+
+func splitImportID(id string) (projectID, instanceName, processorName *string, err error) {
+ var re = regexp.MustCompile(`^(.*)-([0-9a-fA-F]{24})-(.*)$`)
+ parts := re.FindStringSubmatch(id)
+
+ if len(parts) != 4 {
+ err = errors.New("import format error: to import a stream processor, use the format {instance_name}-{project_id}-{processor_name}")
+ return
+ }
+
+ instanceName = &parts[1]
+ projectID = &parts[2]
+ processorName = &parts[3]
+
+ return
+}
+
+func updatedStateOnly(plan, state *TFStreamProcessorRSModel) bool {
+ return plan.ProjectID.Equal(state.ProjectID) &&
+ plan.InstanceName.Equal(state.InstanceName) &&
+ plan.ProcessorName.Equal(state.ProcessorName) &&
+ plan.Pipeline.Equal(state.Pipeline) &&
+ (plan.Options.Equal(state.Options) || plan.Options.IsUnknown()) &&
+ !plan.State.Equal(state.State)
+}
diff --git a/internal/service/streamprocessor/resource_migration_test.go b/internal/service/streamprocessor/resource_migration_test.go
new file mode 100644
index 0000000000..e01a4b27cc
--- /dev/null
+++ b/internal/service/streamprocessor/resource_migration_test.go
@@ -0,0 +1,12 @@
+package streamprocessor_test
+
+import (
+ "testing"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+func TestMigStreamProcessor_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "1.19.0") // when resource 1st released
+ mig.CreateAndRunTest(t, basicTestCase(t))
+}
diff --git a/internal/service/streamprocessor/resource_schema.go b/internal/service/streamprocessor/resource_schema.go
new file mode 100644
index 0000000000..2e8ce79d12
--- /dev/null
+++ b/internal/service/streamprocessor/resource_schema.go
@@ -0,0 +1,133 @@
+package streamprocessor
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/fwtypes"
+)
+
+func optionsSchema(isDatasource bool) schema.SingleNestedAttribute {
+ return schema.SingleNestedAttribute{
+ Attributes: map[string]schema.Attribute{
+ "dlq": schema.SingleNestedAttribute{
+ Attributes: map[string]schema.Attribute{
+ "coll": schema.StringAttribute{
+ Required: !isDatasource,
+ Computed: isDatasource,
+ Description: "Name of the collection to use for the DLQ.",
+ MarkdownDescription: "Name of the collection to use for the DLQ.",
+ },
+ "connection_name": schema.StringAttribute{
+ Required: !isDatasource,
+ Computed: isDatasource,
+ Description: "Name of the connection to write DLQ messages to. Must be an Atlas connection.",
+ MarkdownDescription: "Name of the connection to write DLQ messages to. Must be an Atlas connection.",
+ },
+ "db": schema.StringAttribute{
+ Required: !isDatasource,
+ Computed: isDatasource,
+ Description: "Name of the database to use for the DLQ.",
+ MarkdownDescription: "Name of the database to use for the DLQ.",
+ },
+ },
+ Required: !isDatasource,
+ Computed: isDatasource,
+ Description: "Dead letter queue for the stream processor. 
Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/reference/glossary/#std-term-dead-letter-queue) for more information.",
+ MarkdownDescription: "Dead letter queue for the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/reference/glossary/#std-term-dead-letter-queue) for more information.",
+ },
+ },
+ Optional: !isDatasource,
+ Computed: isDatasource,
+ Description: "Optional configuration for the stream processor.",
+ MarkdownDescription: "Optional configuration for the stream processor.",
+ }
+}
+
+func ResourceSchema(ctx context.Context) schema.Schema {
+ return schema.Schema{
+ Attributes: map[string]schema.Attribute{
+ "id": schema.StringAttribute{
+ Computed: true,
+ Description: "Unique 24-hexadecimal character string that identifies the stream processor.",
+ MarkdownDescription: "Unique 24-hexadecimal character string that identifies the stream processor.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "instance_name": schema.StringAttribute{
+ Required: true,
+ Description: "Human-readable label that identifies the stream instance.",
+ MarkdownDescription: "Human-readable label that identifies the stream instance.",
+ },
+ "pipeline": schema.StringAttribute{
+ CustomType: fwtypes.JSONStringType,
+ Required: true,
+ Description: "Stream aggregation pipeline you want to apply to your streaming data. [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/#std-label-stream-aggregation)" +
+ " contain more information. Using [jsonencode](https://developer.hashicorp.com/terraform/language/functions/jsonencode) is recommended when setting this attribute. For more details see [Aggregation Pipelines Documentation](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/)",
+ MarkdownDescription: "Stream aggregation pipeline you want to apply to your streaming data. [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/#std-label-stream-aggregation)" +
+ " contain more information. Using [jsonencode](https://developer.hashicorp.com/terraform/language/functions/jsonencode) is recommended when setting this attribute. For more details see [Aggregation Pipelines Documentation](https://www.mongodb.com/docs/atlas/atlas-stream-processing/stream-aggregation/)",
+ },
+ "processor_name": schema.StringAttribute{
+ Required: true,
+ Description: "Human-readable label that identifies the stream processor.",
+ MarkdownDescription: "Human-readable label that identifies the stream processor.",
+ },
+ "project_id": schema.StringAttribute{
+ Required: true,
+ Description: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.",
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. 
The resource and corresponding endpoints use the term groups.", + }, + "state": schema.StringAttribute{ + Optional: true, + Computed: true, + Description: "The state of the stream processor. Commonly occurring states are 'CREATED', 'STARTED', 'STOPPED' and 'FAILED'. Used to start or stop the Stream Processor. Valid values are `CREATED`, `STARTED` or `STOPPED`." + + " When a Stream Processor is created without specifying the state, it will default to `CREATED` state.\n\n**NOTE** When a stream processor is created, the only valid states are CREATED or STARTED. A stream processor can be automatically started when creating it if the state is set to STARTED.", + MarkdownDescription: "The state of the stream processor. Commonly occurring states are 'CREATED', 'STARTED', 'STOPPED' and 'FAILED'. Used to start or stop the Stream Processor. Valid values are `CREATED`, `STARTED` or `STOPPED`." + + " When a Stream Processor is created without specifying the state, it will default to `CREATED` state.\n\n**NOTE** When a stream processor is created, the only valid states are CREATED or STARTED. A stream processor can be automatically started when creating it if the state is set to STARTED.", + }, + "options": optionsSchema(false), + "stats": schema.StringAttribute{ + Computed: true, + Description: "The stats associated with the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/manage-stream-processor/#view-statistics-of-a-stream-processor) for more information.", + MarkdownDescription: "The stats associated with the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/manage-stream-processor/#view-statistics-of-a-stream-processor) for more information.", + }, + }, + } +} + +type TFStreamProcessorRSModel struct { + InstanceName types.String `tfsdk:"instance_name"` + Options types.Object `tfsdk:"options"` + Pipeline fwtypes.JSONString `tfsdk:"pipeline"` + ProcessorID types.String `tfsdk:"id"` + ProcessorName types.String `tfsdk:"processor_name"` + ProjectID types.String `tfsdk:"project_id"` + State types.String `tfsdk:"state"` + Stats types.String `tfsdk:"stats"` +} + +type TFOptionsModel struct { + Dlq types.Object `tfsdk:"dlq"` +} + +type TFDlqModel struct { + Coll types.String `tfsdk:"coll"` + ConnectionName types.String `tfsdk:"connection_name"` + DB types.String `tfsdk:"db"` +} + +var OptionsObjectType = types.ObjectType{AttrTypes: map[string]attr.Type{ + "dlq": DlqObjectType, +}} + +var DlqObjectType = types.ObjectType{AttrTypes: map[string]attr.Type{ + "coll": types.StringType, + "connection_name": types.StringType, + "db": types.StringType, +}, +} diff --git a/internal/service/streamprocessor/resource_test.go b/internal/service/streamprocessor/resource_test.go new file mode 100644 index 0000000000..ea7ce98b0a --- /dev/null +++ b/internal/service/streamprocessor/resource_test.go @@ -0,0 +1,472 @@ +package streamprocessor_test + +import ( + "context" + "fmt" + "regexp" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/stretchr/testify/assert" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" +) + +type connectionConfig struct { + connectionType string + clusterName string + pipelineStepIsSource bool + useAsDLQ bool 
+ extraWhitespace bool + invalidJSON bool +} + +var ( + resourceName = "mongodbatlas_stream_processor.processor" + dataSourceName = "data.mongodbatlas_stream_processor.test" + pluralDataSourceName = "data.mongodbatlas_stream_processors.test" + connTypeSample = "Sample" + connTypeCluster = "Cluster" + connTypeKafka = "Kafka" + connTypeTestLog = "TestLog" + sampleSrcConfig = connectionConfig{connectionType: connTypeSample, pipelineStepIsSource: true} + testLogDestConfig = connectionConfig{connectionType: connTypeTestLog, pipelineStepIsSource: false} +) + +func TestAccStreamProcessor_basic(t *testing.T) { + resource.ParallelTest(t, *basicTestCase(t)) +} + +func basicTestCase(t *testing.T) *resource.TestCase { + t.Helper() + var ( + projectID = acc.ProjectIDExecution(t) + processorName = "new-processor" + instanceName = acc.RandomName() + ) + + return &resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, "", sampleSrcConfig, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, false), + }, + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, sampleSrcConfig, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StartedState, true, false), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: importStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"stats"}, + }, + }} +} + +func TestAccStreamProcessor_JSONWhiteSpaceFormat(t *testing.T) { + var ( + projectID = acc.ProjectIDExecution(t) + processorName = "new-processor-json-unchanged" + instanceName = acc.RandomName() + sampleSrcConfigExtraSpaces = connectionConfig{connectionType: connTypeSample, pipelineStepIsSource: true, extraWhitespace: true} + ) + resource.ParallelTest(t, resource.TestCase{ + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + PreCheck: func() { acc.PreCheckBasic(t) }, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, sampleSrcConfigExtraSpaces, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, false), + }, + }}) +} + +func TestAccStreamProcessor_withOptions(t *testing.T) { + var ( + projectID, clusterName = acc.ClusterNameExecution(t) + processorName = "new-processor" + instanceName = acc.RandomName() + src = connectionConfig{connectionType: connTypeCluster, clusterName: clusterName, pipelineStepIsSource: true, useAsDLQ: true} + dest = connectionConfig{connectionType: connTypeKafka, pipelineStepIsSource: false} + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, src, dest), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, true), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: importStateIDFunc(resourceName), + 
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"stats"}, + }, + }}) +} + +func TestAccStreamProcessor_createWithAutoStartAndStop(t *testing.T) { + var ( + projectID = acc.ProjectIDExecution(t) + processorName = "new-processor" + instanceName = acc.RandomName() + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, sampleSrcConfig, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StartedState, true, false), + }, + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, sampleSrcConfig, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StoppedState, true, false), + }, + }}) +} + +func TestAccStreamProcessor_clusterType(t *testing.T) { + var ( + projectID, clusterName = acc.ClusterNameExecution(t) + processorName = "new-processor" + instanceName = acc.RandomName() + srcConfig = connectionConfig{connectionType: connTypeCluster, clusterName: clusterName, pipelineStepIsSource: true} + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, srcConfig, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StartedState, true, false), + }, + }}) +} + +func TestAccStreamProcessor_createErrors(t *testing.T) { + var ( + projectID = acc.ProjectIDExecution(t) + processorName = "new-processor" + instanceName = acc.RandomName() + invalidJSONConfig = connectionConfig{connectionType: connTypeSample, pipelineStepIsSource: true, invalidJSON: true} + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, invalidJSONConfig, testLogDestConfig), + ExpectError: regexp.MustCompile("Invalid JSON String Value"), + }, + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, sampleSrcConfig, testLogDestConfig), + ExpectError: regexp.MustCompile("When creating a stream processor, the only valid states are CREATED and STARTED"), + }, + }}) +} + +func TestAccStreamProcessor_updateErrors(t *testing.T) { + var ( + processorName = "new-processor" + instanceName = acc.RandomName() + projectID, clusterName = acc.ClusterNameExecution(t) + src = connectionConfig{connectionType: connTypeCluster, clusterName: clusterName, pipelineStepIsSource: true, useAsDLQ: false} + srcWithOptions = connectionConfig{connectionType: connTypeCluster, clusterName: clusterName, pipelineStepIsSource: true, useAsDLQ: true} + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroyStreamProcessor, + Steps: []resource.TestStep{ + { + Config: 
config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, src, testLogDestConfig), + Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, false), + }, + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, src, testLogDestConfig), + ExpectError: regexp.MustCompile(`Stream Processor must be in \w+ state to transition to \w+ state`), + }, + { + Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, srcWithOptions, testLogDestConfig), + ExpectError: regexp.MustCompile("updating a Stream Processor is not supported"), + }, + }}) +} + +func checkExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("not found: %s", resourceName) + } + projectID := rs.Primary.Attributes["project_id"] + instanceName := rs.Primary.Attributes["instance_name"] + processorName := rs.Primary.Attributes["processor_name"] + _, _, err := acc.ConnV2().StreamsApi.GetStreamProcessor(context.Background(), projectID, instanceName, processorName).Execute() + + if err != nil { + return fmt.Errorf("Stream processor (%s) does not exist", processorName) + } + + return nil + } +} + +func checkDestroyStreamProcessor(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "mongodbatlas_stream_processor" { + continue + } + projectID := rs.Primary.Attributes["project_id"] + instanceName := rs.Primary.Attributes["instance_name"] + processorName := rs.Primary.Attributes["processor_name"] + _, _, err := acc.ConnV2().StreamsApi.GetStreamProcessor(context.Background(), projectID, instanceName, processorName).Execute() + if err == nil { + return fmt.Errorf("Stream processor (%s) still exists", processorName) + } + } + + return nil +} + +func importStateIDFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s-%s-%s", rs.Primary.Attributes["instance_name"], rs.Primary.Attributes["project_id"], rs.Primary.Attributes["processor_name"]), nil + } +} + +func composeStreamProcessorChecks(projectID, instanceName, processorName, state string, includeStats, includeOptions bool) resource.TestCheckFunc { + checks := []resource.TestCheckFunc{checkExists(resourceName)} + attributes := map[string]string{ + "project_id": projectID, + "instance_name": instanceName, + "processor_name": processorName, + "state": state, + } + checks = acc.AddAttrChecks(resourceName, checks, attributes) + checks = acc.AddAttrChecks(dataSourceName, checks, attributes) + checks = acc.AddAttrChecks(pluralDataSourceName, checks, map[string]string{ + "project_id": projectID, + "instance_name": instanceName, + "results.#": "1", + "results.0.processor_name": processorName, + "results.0.state": state, + "results.0.instance_name": instanceName, + }) + if includeStats { + checks = acc.AddAttrSetChecks(resourceName, checks, "stats", "pipeline") + checks = acc.AddAttrSetChecks(dataSourceName, checks, "stats", "pipeline") + checks = acc.AddAttrSetChecks(pluralDataSourceName, checks, "results.0.stats", "results.0.pipeline") + } + if includeOptions { + checks = acc.AddAttrSetChecks(resourceName, checks, "options.dlq.db", "options.dlq.coll", "options.dlq.connection_name") + checks = 
acc.AddAttrSetChecks(dataSourceName, checks, "options.dlq.db", "options.dlq.coll", "options.dlq.connection_name") + checks = acc.AddAttrSetChecks(pluralDataSourceName, checks, "results.0.options.dlq.db", "results.0.options.dlq.coll", "results.0.options.dlq.connection_name") + } + return resource.ComposeAggregateTestCheckFunc(checks...) +} + +func config(t *testing.T, projectID, instanceName, processorName, state string, src, dest connectionConfig) string { + t.Helper() + stateConfig := "" + if state != "" { + stateConfig = fmt.Sprintf(`state = %[1]q`, state) + } + + connectionConfigSrc, connectionIDSrc, pipelineStepSrc := configConnection(t, projectID, src) + connectionConfigDest, connectionIDDest, pipelineStepDest := configConnection(t, projectID, dest) + dependsOn := []string{} + if connectionIDSrc != "" { + dependsOn = append(dependsOn, connectionIDSrc) + } + if connectionIDDest != "" { + dependsOn = append(dependsOn, connectionIDDest) + } + dependsOnStr := strings.Join(dependsOn, ", ") + pipeline := fmt.Sprintf("[{\"$source\":%1s},{\"$emit\":%2s}]", pipelineStepSrc, pipelineStepDest) + optionsStr := "" + if src.useAsDLQ { + assert.Equal(t, connTypeCluster, src.connectionType) + optionsStr = fmt.Sprintf(` + options = { + dlq = { + coll = "dlq_coll" + connection_name = %[1]s.connection_name + db = "dlq_db" + } + }`, connectionIDSrc) + } + + dataSource := fmt.Sprintf(` + data "mongodbatlas_stream_processor" "test" { + project_id = %[1]q + instance_name = %[2]q + processor_name = %[3]q + depends_on = [%4s] + }`, projectID, instanceName, processorName, resourceName) + dataSourcePlural := fmt.Sprintf(` + data "mongodbatlas_stream_processors" "test" { + project_id = %[1]q + instance_name = %[2]q + depends_on = [%3s] + }`, projectID, instanceName, resourceName) + + return fmt.Sprintf(` + resource "mongodbatlas_stream_instance" "instance" { + project_id = %[1]q + instance_name = %[2]q + data_process_region = { + region = "VIRGINIA_USA" + cloud_provider = "AWS" + } + } + + %[3]s + %[4]s + + resource "mongodbatlas_stream_processor" "processor" { + project_id = %[1]q + instance_name = mongodbatlas_stream_instance.instance.instance_name + processor_name = %[5]q + pipeline = %[6]q + %[7]s + %[8]s + depends_on = [%[9]s] + } + %[10]s + %[11]s + + `, projectID, instanceName, connectionConfigSrc, connectionConfigDest, processorName, pipeline, stateConfig, optionsStr, dependsOnStr, dataSource, dataSourcePlural) +} + +func configConnection(t *testing.T, projectID string, config connectionConfig) (connectionConfig, resourceID, pipelineStep string) { + t.Helper() + assert.False(t, config.extraWhitespace && config.connectionType != connTypeSample, "extraWhitespace is only supported for Sample connection") + assert.False(t, config.invalidJSON && config.connectionType != connTypeSample, "invalidJson is only supported for Sample connection") + connectionType := config.connectionType + pipelineStepIsSource := config.pipelineStepIsSource + switch connectionType { + case "Cluster": + var connectionName, resourceName string + clusterName := config.clusterName + assert.NotEqual(t, "", clusterName) + if pipelineStepIsSource { + connectionName = "ClusterConnectionSrc" + resourceName = "cluster_src" + } else { + connectionName = "ClusterConnectionDest" + resourceName = "cluster_dest" + } + connectionConfig = fmt.Sprintf(` + resource "mongodbatlas_stream_connection" %[4]q { + project_id = %[1]q + cluster_name = %[2]q + instance_name = mongodbatlas_stream_instance.instance.instance_name + connection_name = %[3]q + type = 
"Cluster" + db_role_to_execute = { + role = "atlasAdmin" + type = "BUILT_IN" + } + depends_on = [mongodbatlas_stream_instance.instance] + } + `, projectID, clusterName, connectionName, resourceName) + resourceID = fmt.Sprintf("mongodbatlas_stream_connection.%s", resourceName) + pipelineStep = fmt.Sprintf("{\"connectionName\":%q}", connectionName) + return connectionConfig, resourceID, pipelineStep + case "Kafka": + var connectionName, resourceName, pipelineStep string + if pipelineStepIsSource { + connectionName = "KafkaConnectionSrc" + resourceName = "kafka_src" + pipelineStep = fmt.Sprintf("{\"connectionName\":%q}", connectionName) + } else { + connectionName = "KafkaConnectionDest" + resourceName = "kafka_dest" + pipelineStep = fmt.Sprintf("{\"connectionName\":%q,\"topic\":\"random_topic\"}", connectionName) + } + connectionConfig = fmt.Sprintf(` + resource "mongodbatlas_stream_connection" %[3]q{ + project_id = %[1]q + instance_name = mongodbatlas_stream_instance.instance.instance_name + connection_name = %[2]q + type = "Kafka" + authentication = { + mechanism = "PLAIN" + username = "user" + password = "rawpassword" + } + bootstrap_servers = "localhost:9092,localhost:9092" + config = { + "auto.offset.reset" : "earliest" + } + security = { + protocol = "PLAINTEXT" + } + depends_on = [mongodbatlas_stream_instance.instance] + } + `, projectID, connectionName, resourceName) + resourceID = fmt.Sprintf("mongodbatlas_stream_connection.%s", resourceName) + return connectionConfig, resourceID, pipelineStep + case "Sample": + if !pipelineStepIsSource { + t.Fatal("Sample connection must be used as a source") + } + connectionConfig = fmt.Sprintf(` + resource "mongodbatlas_stream_connection" "sample" { + project_id = %[1]q + instance_name = mongodbatlas_stream_instance.instance.instance_name + connection_name = "sample_stream_solar" + type = "Sample" + depends_on = [mongodbatlas_stream_instance.instance] + } + `, projectID) + resourceID = "mongodbatlas_stream_connection.sample" + if config.extraWhitespace { + pipelineStep = "{\"connectionName\": \"sample_stream_solar\"}" + } else { + pipelineStep = "{\"connectionName\":\"sample_stream_solar\"}" + } + if config.invalidJSON { + pipelineStep = "{\"connectionName\": \"sample_stream_solar\"" // missing closing bracket + } + return connectionConfig, resourceID, pipelineStep + + case "TestLog": + if pipelineStepIsSource { + t.Fatal("TestLog connection must be used as a destination") + } + connectionConfig = "" + resourceID = "" + pipelineStep = "{\"connectionName\":\"__testLog\"}" + return connectionConfig, resourceID, pipelineStep + } + t.Fatalf("Unknown connection type: %s", connectionType) + return connectionConfig, resourceID, pipelineStep +} diff --git a/internal/service/streamprocessor/state_transition.go b/internal/service/streamprocessor/state_transition.go new file mode 100644 index 0000000000..66bfc62289 --- /dev/null +++ b/internal/service/streamprocessor/state_transition.go @@ -0,0 +1,61 @@ +package streamprocessor + +import ( + "context" + "errors" + "fmt" + "net/http" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "go.mongodb.org/atlas-sdk/v20240805003/admin" +) + +const ( + InitiatingState = "INIT" + CreatingState = "CREATING" + CreatedState = "CREATED" + StartedState = "STARTED" + StoppedState = "STOPPED" + DroppedState = "DROPPED" + FailedState = "FAILED" +) + +func WaitStateTransition(ctx context.Context, requestParams *admin.GetStreamProcessorApiParams, client admin.StreamsApi, pendingStates, 
desiredStates []string) (*admin.StreamsProcessorWithStats, error) { + stateConf := &retry.StateChangeConf{ + Pending: pendingStates, + Target: desiredStates, + Refresh: refreshFunc(ctx, requestParams, client), + Timeout: 5 * time.Minute, // big pipelines can take a while to stop due to checkpointing. We prefer the API to raise the error (~ 3min) than having to expose custom timeouts. + MinTimeout: 3 * time.Second, + Delay: 0, + } + + streamProcessorResp, err := stateConf.WaitForStateContext(ctx) + if err != nil { + return nil, err + } + + if streamProcessor, ok := streamProcessorResp.(*admin.StreamsProcessorWithStats); ok && streamProcessor != nil { + return streamProcessor, nil + } + + return nil, errors.New("did not obtain valid result when waiting for stream processor state transition") +} + +func refreshFunc(ctx context.Context, requestParams *admin.GetStreamProcessorApiParams, client admin.StreamsApi) retry.StateRefreshFunc { + return func() (any, string, error) { + streamProcessor, resp, err := client.GetStreamProcessorWithParams(ctx, requestParams).Execute() + if err != nil { + if resp.StatusCode == http.StatusNotFound { + return "", DroppedState, err + } + return nil, FailedState, err + } + state := streamProcessor.GetState() + if state == FailedState { + return nil, state, fmt.Errorf("error creating MongoDB Stream Processor(%s) status was: %s", requestParams.ProcessorName, state) + } + return streamProcessor, state, nil + } +} diff --git a/internal/service/streamprocessor/state_transition_test.go b/internal/service/streamprocessor/state_transition_test.go new file mode 100644 index 0000000000..783e41006a --- /dev/null +++ b/internal/service/streamprocessor/state_transition_test.go @@ -0,0 +1,156 @@ +package streamprocessor_test + +import ( + "context" + "errors" + "net/http" + "testing" + + "go.mongodb.org/atlas-sdk/v20240805003/admin" + "go.mongodb.org/atlas-sdk/v20240805003/mockadmin" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/mock" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor" +) + +var ( + InitiatingState = "INIT" + CreatingState = "CREATING" + CreatedState = "CREATED" + StartedState = "STARTED" + StoppedState = "STOPPED" + DroppedState = "DROPPED" + FailedState = "FAILED" + sc500 = conversion.IntPtr(500) + sc200 = conversion.IntPtr(200) + sc404 = conversion.IntPtr(404) + streamProcessorName = "processorName" + requestParams = &admin.GetStreamProcessorApiParams{ + GroupId: "groupId", + TenantName: "tenantName", + ProcessorName: streamProcessorName, + } +) + +type testCase struct { + expectedState *string + name string + mockResponses []response + desiredStates []string + pendingStates []string + expectedError bool +} + +func TestStreamProcessorStateTransition(t *testing.T) { + testCases := []testCase{ + { + name: "Successful transition to CREATED", + mockResponses: []response{ + {state: &InitiatingState, statusCode: sc200}, + {state: &CreatingState, statusCode: sc200}, + {state: &CreatedState, statusCode: sc200}, + }, + expectedState: &CreatedState, + expectedError: false, + desiredStates: []string{CreatedState}, + pendingStates: []string{InitiatingState, CreatingState}, + }, + { + name: "Successful transition to STARTED", + mockResponses: []response{ + {state: &CreatedState, statusCode: sc200}, + {state: &StartedState, statusCode: sc200}, + }, + expectedState: &StartedState, + expectedError: 
false, + desiredStates: []string{StartedState}, + pendingStates: []string{CreatedState, StoppedState}, + }, + { + name: "Successful transition to STOPPED", + mockResponses: []response{ + {state: &StartedState, statusCode: sc200}, + {state: &StoppedState, statusCode: sc200}, + }, + expectedState: &StoppedState, + expectedError: false, + desiredStates: []string{StoppedState}, + pendingStates: []string{StartedState}, + }, + { + name: "Error when transitioning to FAILED state", + mockResponses: []response{ + {state: &InitiatingState, statusCode: sc200}, + {state: &FailedState, statusCode: sc200}, + }, + expectedState: nil, + expectedError: true, + desiredStates: []string{CreatedState}, + pendingStates: []string{InitiatingState, CreatingState}, + }, + { + name: "Error when API responds with error", + mockResponses: []response{ + {statusCode: sc500, err: errors.New("Internal server error")}, + }, + expectedState: nil, + expectedError: true, + desiredStates: []string{CreatedState, FailedState}, + pendingStates: []string{InitiatingState, CreatingState}, + }, + { + name: "Dropped state when 404 is returned", + mockResponses: []response{ + {statusCode: sc404, err: errors.New("Not found")}, + }, + expectedState: &DroppedState, + expectedError: true, + desiredStates: []string{CreatedState, FailedState}, + pendingStates: []string{InitiatingState, CreatingState}, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + m := mockadmin.NewStreamsApi(t) + m.EXPECT().GetStreamProcessorWithParams(mock.Anything, mock.Anything).Return(admin.GetStreamProcessorApiRequest{ApiService: m}) + + for _, resp := range tc.mockResponses { + modelResp, httpResp, err := resp.get() + m.EXPECT().GetStreamProcessorExecute(mock.Anything).Return(modelResp, httpResp, err).Once() + } + resp, err := streamprocessor.WaitStateTransition(context.Background(), requestParams, m, tc.pendingStates, tc.desiredStates) + assert.Equal(t, tc.expectedError, err != nil) + if resp != nil { + assert.Equal(t, *tc.expectedState, resp.State) + } + }) + } +} + +type response struct { + state *string + statusCode *int + err error +} + +func (r *response) get() (*admin.StreamsProcessorWithStats, *http.Response, error) { + var httpResp *http.Response + if r.statusCode != nil { + httpResp = &http.Response{StatusCode: *r.statusCode} + } + return responseWithState(r.state), httpResp, r.err +} + +func responseWithState(state *string) *admin.StreamsProcessorWithStats { + if state == nil { + return nil + } + return &admin.StreamsProcessorWithStats{ + Name: streamProcessorName, + State: *state, + } +} diff --git a/internal/service/streamprocessor/tfplugingen/generator_config.yml b/internal/service/streamprocessor/tfplugingen/generator_config.yml new file mode 100644 index 0000000000..7e36507e66 --- /dev/null +++ b/internal/service/streamprocessor/tfplugingen/generator_config.yml @@ -0,0 +1,24 @@ +provider: + name: mongodbatlas + +resources: + stream_processor: + read: + path: /api/atlas/v2/groups/{groupId}/streams/{tenantName}/processor/{processorName} + method: GET + create: + path: /api/atlas/v2/groups/{groupId}/streams/{tenantName}/processor + method: POST + delete: + path: /api/atlas/v2/groups/{groupId}/streams/{tenantName}/processor/{processorName} + method: DELETE + +data_sources: + stream_processor: + read: + path: /api/atlas/v2/groups/{groupId}/streams/{tenantName}/processor/{processorName} + method: GET + stream_processors: + read: + path: /api/atlas/v2/groups/{groupId}/streams/{tenantName}/processors + method: GET diff 
--git a/scripts/schema-scaffold.sh b/scripts/schema-scaffold.sh index 1438fe73c3..070e96d99c 100755 --- a/scripts/schema-scaffold.sh +++ b/scripts/schema-scaffold.sh @@ -50,5 +50,5 @@ rename_file() { } rename_file "${resource_name_snake_case}_data_source_gen.go" "data_source_schema.go" -rename_file "${resource_name_snake_case}s_data_source_gen.go" "pural_data_source_schema.go" +rename_file "${resource_name_snake_case}s_data_source_gen.go" "plural_data_source_schema.go" rename_file "${resource_name_snake_case}_resource_gen.go" "resource_schema.go" diff --git a/templates/data-sources/stream_processor.md.tmpl b/templates/data-sources/stream_processor.md.tmpl new file mode 100644 index 0000000000..07dc70b478 --- /dev/null +++ b/templates/data-sources/stream_processor.md.tmpl @@ -0,0 +1,10 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` describes a stream processor. + +## Example Usages +{{ tffile (printf "examples/%s/main.tf" .Name )}} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation. diff --git a/templates/data-sources/stream_processors.md.tmpl b/templates/data-sources/stream_processors.md.tmpl new file mode 100644 index 0000000000..df4a654e56 --- /dev/null +++ b/templates/data-sources/stream_processors.md.tmpl @@ -0,0 +1,10 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` returns all stream processors in a stream instance. + +## Example Usages +{{ tffile (printf "examples/mongodbatlas_stream_processor/main.tf" )}} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation. diff --git a/templates/resources.md.tmpl b/templates/resources.md.tmpl index ed9ba98760..b2f176cdf3 100644 --- a/templates/resources.md.tmpl +++ b/templates/resources.md.tmpl @@ -56,6 +56,7 @@ {{ else if eq .Name "mongodbatlas_ldap_verify" }} {{ else if eq .Name "mongodbatlas_third_party_integration" }} {{ else if eq .Name "mongodbatlas_x509_authentication_database_user" }} + {{ else if eq .Name "mongodbatlas_stream_processor" }} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_service_data_federation_online_archive" }} {{ else }} {{ tffile (printf "examples/%s/main.tf" .Name )}} diff --git a/templates/resources/stream_processor.md.tmpl b/templates/resources/stream_processor.md.tmpl new file mode 100644 index 0000000000..22e03c261b --- /dev/null +++ b/templates/resources/stream_processor.md.tmpl @@ -0,0 +1,30 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` provides a Stream Processor resource. The resource lets you create, delete, import, start and stop a stream processor in a stream instance. + +**NOTE**: Updating an Atlas Stream Processor is currently not supported. As a result, the following steps are needed to be able to change an Atlas Stream Processor with an Atlas Change Stream Source: +1. Retrieve the value of Change Stream Source Token `changeStreamState` from the computed `stats` attribute in `mongodbatlas_stream_processor` resource or datasource or from the Terraform state file. This takes the form of a [resume token](https://www.mongodb.com/docs/manual/changeStreams/#resume-tokens-from-change-events). The Stream Processor has to be running in the state `STARTED` for the `stats` attribute to be available. 
However, before you retrieve the value, you should set the `state` to `STOPPED` to get the latest `changeStreamState`.
+ - Example:
+ ```
+ {\"changeStreamState\":{\"_data\":\"8266C71670000000012B0429296E1404\"}}
+ ```
+2. Update the `pipeline` argument, setting `config.startAfter` to the value retrieved in the previous step. More details in the [MongoDB Collection Change Stream](https://www.mongodb.com/docs/atlas/atlas-stream-processing/sp-agg-source/#mongodb-collection-change-stream) documentation.
+ - Example:
+ ```
+ pipeline = jsonencode([{ "$source" = { "connectionName" = resource.mongodbatlas_stream_connection.example-cluster.connection_name, "config" = { "startAfter" = { "_data" : "8266C71562000000012B0429296E1404" } } } }, { "$emit" = { "connectionName" : "__testLog" } }])
+ ```
+3. Delete the existing Atlas Stream Processor and then create a new Atlas Stream Processor with the updated `pipeline` parameter and values.
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+{{ .SchemaMarkdown | trimspace }}
+
+## Import
+The Stream Processor resource can be imported using the Project ID, Stream Instance name and Stream Processor name, in the format `INSTANCE_NAME-PROJECT_ID-PROCESSOR_NAME`, e.g.
+```
+$ terraform import mongodbatlas_stream_processor.test yourInstanceName-6117ac2fe2a3d04ed27a987v-yourProcessorName
+```
+
+For more information see: [MongoDB Atlas API - Stream Processor](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/createStreamProcessor) Documentation.
From 24fc14f9e36a6bbb64f454171607c4912612441a Mon Sep 17 00:00:00 2001
From: svc-apix-bot
Date: Tue, 10 Sep 2024 07:28:35 +0000
Subject: [PATCH 10/16] chore: Updates CHANGELOG.md for #2566

---
 CHANGELOG.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e8b5ec2f06..bbd9dcb6f5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,6 +12,9 @@ FEATURES:
 * **New Data Source:** `data-source/mongodbatlas_encryption_at_rest_private_endpoint` ([#2527](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2527))
 * **New Data Source:** `data-source/mongodbatlas_encryption_at_rest_private_endpoints` ([#2536](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2536))
 * **New Data Source:** `data-source/mongodbatlas_project_ip_addresses` ([#2533](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2533))
+* **New Data Source:** `data-source/mongodbatlas_stream_processor` ([#2497](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2497))
+* **New Data Source:** `data-source/mongodbatlas_stream_processors` ([#2566](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2566))
+* **New Resource:** `mongodbatlas_stream_processor` ([#2501](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2501))
 * **New Resource:** `resource/mongodbatlas_encryption_at_rest_private_endpoint` ([#2512](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2512))

 ENHANCEMENTS:
From 7425b8d4fada902f9f02ff29da816765098bdb38 Mon Sep 17 00:00:00 2001
From: maastha <122359335+maastha@users.noreply.github.com>
Date: Tue, 10 Sep 2024 10:04:44 +0100
Subject: [PATCH 11/16] update git workflow (#2572)

---
 .github/workflows/acceptance-tests-runner.yml | 24 -------------------------
 1 file changed, 24 deletions(-)

diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml
index 
114b7737f8..58b9707e5d 100644 --- a/.github/workflows/acceptance-tests-runner.yml +++ b/.github/workflows/acceptance-tests-runner.yml @@ -256,8 +256,6 @@ jobs: - 'internal/service/rolesorgid/*.go' - 'internal/service/team/*.go' - 'internal/service/thirdpartyintegration/*.go' - data_lake: - - 'internal/service/datalakepipeline/*.go' encryption: - 'internal/service/encryptionatrest/*.go' - 'internal/service/encryptionatrestprivateendpoint/*.go' @@ -496,28 +494,6 @@ jobs: ./internal/service/team ./internal/service/thirdpartyintegration run: make testacc - - data_lake: - needs: [ change-detection, get-provider-version ] - if: ${{ needs.change-detection.outputs.data_lake == 'true' || inputs.test_group == 'data_lake' }} - runs-on: ubuntu-latest - permissions: {} - steps: - - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 - with: - ref: ${{ inputs.ref || github.ref }} - - uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 - with: - go-version-file: 'go.mod' - - uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd - with: - terraform_version: ${{ inputs.terraform_version }} - terraform_wrapper: false - - name: Acceptance Tests - env: - MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }} - ACCTEST_PACKAGES: ./internal/service/datalakepipeline - run: make testacc encryption: needs: [ change-detection, get-provider-version ] From 56feda7af3cda66fab50529f01dd6d84c8ac87d4 Mon Sep 17 00:00:00 2001 From: Espen Albert Date: Tue, 10 Sep 2024 10:20:57 +0100 Subject: [PATCH 12/16] doc: Adds support for SDK_BRANCH in schema generation (#2562) * doc: Add support for SDK_BRANCH * chore: revert unintentional change on save * doc: remove, can be confusing to users without cloud-dev access --- scripts/schema-scaffold.sh | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/scripts/schema-scaffold.sh b/scripts/schema-scaffold.sh index 070e96d99c..ab1541a2bb 100755 --- a/scripts/schema-scaffold.sh +++ b/scripts/schema-scaffold.sh @@ -3,8 +3,9 @@ set -euo pipefail : "${1?"Name of resource or data source must be provided."}" +SDK_BRANCH="${SDK_BRANCH:-"main"}" # URL to download Atlas Admin API Spec -atlas_admin_api_spec="https://raw.githubusercontent.com/mongodb/atlas-sdk-go/main/openapi/atlas-api-transformed.yaml" +atlas_admin_api_spec="https://raw.githubusercontent.com/mongodb/atlas-sdk-go/${SDK_BRANCH}/openapi/atlas-api-transformed.yaml" echo "Downloading api spec" curl -L "$atlas_admin_api_spec" -o "./api-spec.yml" From ee569f6f47d34fa5a8a3f1d7a4380c0a8df10e64 Mon Sep 17 00:00:00 2001 From: maastha <122359335+maastha@users.noreply.github.com> Date: Tue, 10 Sep 2024 10:41:25 +0100 Subject: [PATCH 13/16] chore: Disables preview mode for EAR private endpoint so it may be normally accessible (#2571) --- .../encryption_at_rest_private_endpoint.md | 2 +- .../encryption_at_rest_private_endpoints.md | 2 +- .../encryption_at_rest_private_endpoint.md | 2 +- .../azure/README.md | 16 ++++------------ internal/provider/provider.go | 7 +++---- .../resource_test.go | 9 ++++----- .../encryption_at_rest_private_endpoint.md.tmpl | 2 +- .../encryption_at_rest_private_endpoints.md.tmpl | 2 +- .../encryption_at_rest_private_endpoint.md.tmpl | 2 +- 9 files changed, 17 insertions(+), 27 deletions(-) diff --git a/docs/data-sources/encryption_at_rest_private_endpoint.md b/docs/data-sources/encryption_at_rest_private_endpoint.md index 3cd1f2e29e..2bf3c9a263 100644 --- a/docs/data-sources/encryption_at_rest_private_endpoint.md +++ 
b/docs/data-sources/encryption_at_rest_private_endpoint.md @@ -3,7 +3,7 @@ `mongodbatlas_encryption_at_rest_private_endpoint` describes a private endpoint used for encryption at rest using customer-managed keys. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). ## Example Usages diff --git a/docs/data-sources/encryption_at_rest_private_endpoints.md b/docs/data-sources/encryption_at_rest_private_endpoints.md index 96f3fd17b0..13bbfb31d9 100644 --- a/docs/data-sources/encryption_at_rest_private_endpoints.md +++ b/docs/data-sources/encryption_at_rest_private_endpoints.md @@ -3,7 +3,7 @@ `mongodbatlas_encryption_at_rest_private_endpoints` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). ## Example Usages diff --git a/docs/resources/encryption_at_rest_private_endpoint.md b/docs/resources/encryption_at_rest_private_endpoint.md index 3e3e068d12..54ed0003c8 100644 --- a/docs/resources/encryption_at_rest_private_endpoint.md +++ b/docs/resources/encryption_at_rest_private_endpoint.md @@ -3,7 +3,7 @@ `mongodbatlas_encryption_at_rest_private_endpoint` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures all traffic between Atlas and customer key management systems take place over private network interfaces. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this resource. 
To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). -> **NOTE:** As a prerequisite to configuring a private endpoint for Azure Key Vault, the corresponding [`mongodbatlas_encryption_at_rest`](encryption_at_rest) resource has to be adjusted by configuring [`azure_key_vault_config.require_private_networking`](encryption_at_rest#require_private_networking) to true. This attribute should be updated in place, ensuring the customer-managed keys encryption is never disabled. diff --git a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md index 727ec3b95b..4e4e6e93ab 100644 --- a/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md +++ b/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure/README.md @@ -14,15 +14,7 @@ This example shows how to configure encryption at rest using Azure with customer The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -**2\. Enable `MONGODB_ATLAS_ENABLE_PREVIEW` flag.** - -This step is needed to make use of the `mongodbatlas_encryption_at_rest_private_endpoint` resource. - -``` -export MONGODB_ATLAS_ENABLE_PREVIEW="true" -``` - -**3\. Provide the appropriate values for the input variables.** +**2\. Provide the appropriate values for the input variables.** - `atlas_public_key`: The public API key for MongoDB Atlas - `atlas_private_key`: The private API key for MongoDB Atlas @@ -41,7 +33,7 @@ export MONGODB_ATLAS_ENABLE_PREVIEW="true" - GET (Key Management Operation), ENCRYPT (Cryptographic Operation) and DECRYPT (Cryptographic Operation) policy permissions. - A `Key Vault Reader` role. -**4\. Review the Terraform plan.** +**3\. Review the Terraform plan.** Execute the following command and ensure you are happy with the plan. @@ -55,7 +47,7 @@ This project will execute the following changes to achieve a successful Azure Pr - Approve the connection from the Azure Key Vault. This is being done through terraform with the `azapi_update_resource` resource. Alternatively, the private connection can be approved through the Azure UI or CLI. - CLI example command: `az keyvault private-endpoint-connection approve --approval-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --vault-name {KEY VAULT NAME} --name {PRIVATE LINK CONNECTION NAME}` -**3\. Execute the Terraform apply.** +**4\. Execute the Terraform apply.** Now execute the plan to provision the resources. @@ -63,7 +55,7 @@ Now execute the plan to provision the resources. $ terraform apply ``` -**4\. Destroy the resources.** +**5\. Destroy the resources.** When you have finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges. 
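The documentation and example changes above all rely on the same ordering: `azure_key_vault_config.require_private_networking` is set to `true` on the existing `mongodbatlas_encryption_at_rest` resource, and only then is the private endpoint created. The fragment below is a minimal HCL sketch of that ordering, not part of these patches; every variable name and Azure value in it is a placeholder.

```terraform
# Rough sketch (not from the patches above): enable private networking on the
# existing encryption-at-rest configuration, then attach a private endpoint.
# All variable names and values below are placeholders.
resource "mongodbatlas_encryption_at_rest" "this" {
  project_id = var.atlas_project_id

  azure_key_vault_config {
    enabled                     = true
    require_private_networking  = true # prerequisite called out in the NOTE above
    azure_environment           = "AZURE"
    tenant_id                   = var.azure_tenant_id
    subscription_id             = var.azure_subscription_id
    client_id                   = var.azure_client_id
    secret                      = var.azure_client_secret
    resource_group_name         = var.azure_resource_group_name
    key_vault_name              = var.azure_key_vault_name
    key_identifier              = var.azure_key_identifier
  }
}

resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" {
  project_id     = mongodbatlas_encryption_at_rest.this.project_id
  cloud_provider = "AZURE"
  region_name    = var.azure_region_name
}
```

Because the endpoint resource reads `project_id` from `mongodbatlas_encryption_at_rest.this`, Terraform applies the encryption configuration before the endpoint, which matches the prerequisite described in the NOTE.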
diff --git a/internal/provider/provider.go b/internal/provider/provider.go index 7556324185..ee5812c119 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -440,11 +440,10 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.D streamprocessor.DataSource, streamprocessor.PluralDataSource, encryptionatrest.DataSource, - } - previewDataSources := []func() datasource.DataSource{ // Data sources not yet in GA encryptionatrestprivateendpoint.DataSource, encryptionatrestprivateendpoint.PluralDataSource, } + previewDataSources := []func() datasource.DataSource{} // Data sources not yet in GA if providerEnablePreview { dataSources = append(dataSources, previewDataSources...) @@ -464,10 +463,10 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resou streaminstance.Resource, streamconnection.Resource, streamprocessor.Resource, - } - previewResources := []func() resource.Resource{ // Resources not yet in GA encryptionatrestprivateendpoint.Resource, } + previewResources := []func() resource.Resource{} // Resources not yet in GA + if providerEnablePreview { resources = append(resources, previewResources...) } diff --git a/internal/service/encryptionatrestprivateendpoint/resource_test.go b/internal/service/encryptionatrestprivateendpoint/resource_test.go index 3626cef384..6855234983 100644 --- a/internal/service/encryptionatrestprivateendpoint/resource_test.go +++ b/internal/service/encryptionatrestprivateendpoint/resource_test.go @@ -52,7 +52,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase { ) return &resource.TestCase{ - PreCheck: func() { acc.PreCheckBasic(tb); acc.PreCheckEncryptionAtRestEnvAzure(tb); acc.PreCheckPreviewFlag(tb) }, + PreCheck: func() { acc.PreCheckBasic(tb); acc.PreCheckEncryptionAtRestEnvAzure(tb) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, CheckDestroy: checkDestroy, Steps: []resource.TestStep{ @@ -95,7 +95,7 @@ func TestAccEncryptionAtRestPrivateEndpoint_approveEndpointWithAzureProvider(t * ) resource.Test(t, resource.TestCase{ - PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckEncryptionAtRestEnvAzure(t); acc.PreCheckPreviewFlag(t) }, + PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckEncryptionAtRestEnvAzure(t) }, ExternalProviders: acc.ExternalProvidersOnlyAzapi(), ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, CheckDestroy: checkDestroy, @@ -137,7 +137,7 @@ func TestAccEncryptionAtRestPrivateEndpoint_transitionPublicToPrivateNetwork(t * ) resource.Test(t, resource.TestCase{ - PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckEncryptionAtRestEnvAzure(t); acc.PreCheckPreviewFlag(t) }, + PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckEncryptionAtRestEnvAzure(t) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, CheckDestroy: checkDestroy, Steps: []resource.TestStep{ @@ -160,7 +160,7 @@ func TestAccEncryptionAtRestPrivateEndpoint_transitionPublicToPrivateNetwork(t * }) } -func TestAccEncryptionAtRest_azure_requirePrivateNetworking_preview(t *testing.T) { +func TestAccEncryptionAtRest_azure_requirePrivateNetworking(t *testing.T) { var ( projectID = os.Getenv("MONGODB_ATLAS_PROJECT_EAR_PE_ID") @@ -199,7 +199,6 @@ func TestAccEncryptionAtRest_azure_requirePrivateNetworking_preview(t *testing.T PreCheck: func() { acc.PreCheckEncryptionAtRestPrivateEndpoint(t) acc.PreCheckEncryptionAtRestEnvAzureWithUpdate(t) - acc.PreCheckPreviewFlag(t) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, CheckDestroy: acc.EARDestroy, diff 
--git a/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl index 74675e1338..c68c3f0cee 100644 --- a/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl +++ b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl @@ -3,7 +3,7 @@ `{{.Name}}` describes a private endpoint used for encryption at rest using customer-managed keys. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). ## Example Usages diff --git a/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl index 701736d56a..b14a3d8202 100644 --- a/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl +++ b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl @@ -3,7 +3,7 @@ `{{.Name}}` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this data source. To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). ## Example Usages diff --git a/templates/resources/encryption_at_rest_private_endpoint.md.tmpl b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl index 4867ee2014..a68a9a3603 100644 --- a/templates/resources/encryption_at_rest_private_endpoint.md.tmpl +++ b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl @@ -3,7 +3,7 @@ `{{.Name}}` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures all traffic between Atlas and customer key management systems take place over private network interfaces. ~> **IMPORTANT** The Encryption at Rest using Azure Key Vault over Private Endpoints feature is available by request. To request this functionality for your Atlas deployments, contact your Account Manager. -Additionally, you'll need to set the environment variable `MONGODB_ATLAS_ENABLE_PREVIEW=true` to use this resource. 
To learn more about existing limitations, see the [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). +To learn more about existing limitations, see [Manage Customer Keys with Azure Key Vault Over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints). -> **NOTE:** As a prerequisite to configuring a private endpoint for Azure Key Vault, the corresponding [`mongodbatlas_encryption_at_rest`](encryption_at_rest) resource has to be adjusted by configuring [`azure_key_vault_config.require_private_networking`](encryption_at_rest#require_private_networking) to true. This attribute should be updated in place, ensuring the customer-managed keys encryption is never disabled. From 03c37f3e0a99904d63fc09a241d2f1c5e014bc43 Mon Sep 17 00:00:00 2001 From: Oriol Date: Tue, 10 Sep 2024 11:43:22 +0200 Subject: [PATCH 14/16] feat: Supports change_stream_options_pre_and_post_images_expire_after_seconds in `mongodbatlas_cluster` and `mongodbatlas_advanced_cluster` (#2528) * support change_stream_options_pre_and_post_images_expire_after_seconds in cluster * changelog entry * fix changelog entry * check valid value * fix flattener * use getok * default value -1(off) so it goes to default value when not setting * add to attribute to DS model * set default in data source * specify default value of attribute in doc * remove value validation, letting API fail * temp: do not run advanced_cluster test * temp:remove advanced cluster in CI * Revert "temp:remove advanced cluster in CI" This reverts commit d5b690f29256043ff585180b3d339224e207a71b. * Revert "temp: do not run advanced_cluster test" This reverts commit 5d1fef41eac820e6003b80f5773d5e279c05246b. 
* implement in advanced_cluster * fix pointer * docs * changelog * skip migration test (new attribute has default value) * skip mig * skip mig * use processArgs instead of attribute * run tests --- .changelog/2528.txt | 23 ++++++ docs/data-sources/advanced_cluster.md | 1 + docs/data-sources/advanced_clusters.md | 2 + docs/data-sources/cluster.md | 1 + docs/data-sources/clusters.md | 1 + docs/resources/advanced_cluster.md | 1 + docs/resources/cluster.md | 1 + .../data_source_advanced_cluster.go | 8 +- .../data_source_advanced_clusters.go | 16 +++- .../advancedcluster/model_advanced_cluster.go | 74 ++++++++++++------- .../resource_advanced_cluster.go | 28 +++++-- ...esource_advanced_cluster_migration_test.go | 1 + .../resource_advanced_cluster_test.go | 38 ++++++---- internal/service/cluster/model_cluster.go | 10 ++- .../resource_cluster_migration_test.go | 1 + .../service/cluster/resource_cluster_test.go | 6 +- 16 files changed, 153 insertions(+), 59 deletions(-) create mode 100644 .changelog/2528.txt diff --git a/.changelog/2528.txt b/.changelog/2528.txt new file mode 100644 index 0000000000..05409cb4d4 --- /dev/null +++ b/.changelog/2528.txt @@ -0,0 +1,23 @@ +```release-note:enhancement +resource/mongodbatlas_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_clusters: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` + +```release-note:enhancement +resource/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` + +```release-note:enhancement +data-source/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute +``` \ No newline at end of file diff --git a/docs/data-sources/advanced_cluster.md b/docs/data-sources/advanced_cluster.md index b268bdc876..8609f2e1b4 100644 --- a/docs/data-sources/advanced_cluster.md +++ b/docs/data-sources/advanced_cluster.md @@ -197,6 +197,7 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `transaction_lifetime_limit_seconds` - Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. ## Attributes Reference diff --git a/docs/data-sources/advanced_clusters.md b/docs/data-sources/advanced_clusters.md index fdec83bf58..66ef2b6ca6 100644 --- a/docs/data-sources/advanced_clusters.md +++ b/docs/data-sources/advanced_clusters.md @@ -199,6 +199,8 @@ Key-value pairs that categorize the cluster. 
Each key and value has a maximum le * `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. +* `transaction_lifetime_limit_seconds` - (Optional) Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. ## Attributes Reference diff --git a/docs/data-sources/cluster.md b/docs/data-sources/cluster.md index 2d70b437ca..41001932c9 100644 --- a/docs/data-sources/cluster.md +++ b/docs/data-sources/cluster.md @@ -231,5 +231,6 @@ Contains a key-value pair that tags that the cluster was created by a Terraform * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `transaction_lifetime_limit_seconds` - Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. See detailed information for arguments and attributes: [MongoDB API Clusters](https://docs.atlas.mongodb.com/reference/api/clusters-create-one/) diff --git a/docs/data-sources/clusters.md b/docs/data-sources/clusters.md index 5dd99be4d9..bd80b27769 100644 --- a/docs/data-sources/clusters.md +++ b/docs/data-sources/clusters.md @@ -218,6 +218,7 @@ Contains a key-value pair that tags that the cluster was created by a Terraform * `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. 
See detailed information for arguments and attributes: [MongoDB API Clusters](https://docs.atlas.mongodb.com/reference/api/clusters-create-one/) diff --git a/docs/resources/advanced_cluster.md b/docs/resources/advanced_cluster.md index 0ee28db680..c2eec367db 100644 --- a/docs/resources/advanced_cluster.md +++ b/docs/resources/advanced_cluster.md @@ -456,6 +456,7 @@ Include **desired options** within advanced_configuration: * `sample_size_bi_connector` - (Optional) Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - (Optional) Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `transaction_lifetime_limit_seconds` - (Optional) Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This option corresponds to the `changeStreamOptions.preAndPostImages.expireAfterSeconds` cluster parameter. Defaults to `-1`(off). This setting controls the retention policy of change stream pre- and post-images. Pre- and post-images are the versions of a document before and after document modification, respectively.`expireAfterSeconds` controls how long MongoDB retains pre- and post-images. When set to -1 (off), MongoDB uses the default retention policy: pre- and post-images are retained until the corresponding change stream events are removed from the oplog. To set the minimum pre- and post-image retention time, specify an integer value greater than zero. Setting this too low could increase the risk of interrupting Realm sync or triggers processing. ### Tags diff --git a/docs/resources/cluster.md b/docs/resources/cluster.md index 9e39f404b3..5236680207 100644 --- a/docs/resources/cluster.md +++ b/docs/resources/cluster.md @@ -497,6 +497,7 @@ Include **desired options** within advanced_configuration: * `sample_size_bi_connector` - (Optional) Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - (Optional) Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `transaction_lifetime_limit_seconds` - (Optional) Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds. +* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This option corresponds to the `changeStreamOptions.preAndPostImages.expireAfterSeconds` cluster parameter. Defaults to `-1`(off). This setting controls the retention policy of change stream pre- and post-images. Pre- and post-images are the versions of a document before and after document modification, respectively.`expireAfterSeconds` controls how long MongoDB retains pre- and post-images. 
When set to -1 (off), MongoDB uses the default retention policy: pre- and post-images are retained until the corresponding change stream events are removed from the oplog. To set the minimum pre- and post-image retention time, specify an integer value greater than zero. Setting this too low could increase the risk of interrupting Realm sync or triggers processing. ### Tags diff --git a/internal/service/advancedcluster/data_source_advanced_cluster.go b/internal/service/advancedcluster/data_source_advanced_cluster.go index 47c7861e0b..3796b7209b 100644 --- a/internal/service/advancedcluster/data_source_advanced_cluster.go +++ b/internal/service/advancedcluster/data_source_advanced_cluster.go @@ -348,12 +348,16 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replication_specs", clusterName, err)) } - processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + processArgs20240530, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + if err != nil { + return diag.FromErr(fmt.Errorf(ErrorAdvancedConfRead, clusterName, err)) + } + processArgs, _, err := connV2.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() if err != nil { return diag.FromErr(fmt.Errorf(ErrorAdvancedConfRead, clusterName, err)) } - if err := d.Set("advanced_configuration", flattenProcessArgs(processArgs)); err != nil { + if err := d.Set("advanced_configuration", flattenProcessArgs(processArgs20240530, processArgs)); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "advanced_configuration", clusterName, err)) } diff --git a/internal/service/advancedcluster/data_source_advanced_clusters.go b/internal/service/advancedcluster/data_source_advanced_clusters.go index 99f67a3743..dc795bae16 100644 --- a/internal/service/advancedcluster/data_source_advanced_clusters.go +++ b/internal/service/advancedcluster/data_source_advanced_clusters.go @@ -320,7 +320,11 @@ func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530. results := make([]map[string]any, 0, len(clusters)) for i := range clusters { cluster := &clusters[i] - processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + processArgs20240530, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + if err != nil { + log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) + } + processArgs, _, err := connV2.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() if err != nil { log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) } @@ -336,7 +340,7 @@ func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530. 
} result := map[string]any{ - "advanced_configuration": flattenProcessArgs(processArgs), + "advanced_configuration": flattenProcessArgs(processArgs20240530, processArgs), "backup_enabled": cluster.GetBackupEnabled(), "bi_connector_config": flattenBiConnectorConfig(cluster.BiConnector), "cluster_type": cluster.GetClusterType(), @@ -368,7 +372,11 @@ func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin2024 results := make([]map[string]any, 0, len(clusters)) for i := range clusters { cluster := &clusters[i] - processArgs, _, err := connV20240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + processArgs20240530, _, err := connV20240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + if err != nil { + log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) + } + processArgs, _, err := connV2.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() if err != nil { log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) } @@ -388,7 +396,7 @@ func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin2024 } result := map[string]any{ - "advanced_configuration": flattenProcessArgs(processArgs), + "advanced_configuration": flattenProcessArgs(processArgs20240530, processArgs), "backup_enabled": cluster.GetBackupEnabled(), "bi_connector_config": flattenBiConnectorConfig(convertBiConnectToLatest(cluster.BiConnector)), "cluster_type": cluster.GetClusterType(), diff --git a/internal/service/advancedcluster/model_advanced_cluster.go b/internal/service/advancedcluster/model_advanced_cluster.go index 04bd6f0ec3..90fb19ceb7 100644 --- a/internal/service/advancedcluster/model_advanced_cluster.go +++ b/internal/service/advancedcluster/model_advanced_cluster.go @@ -108,6 +108,10 @@ func SchemaAdvancedConfigDS() *schema.Schema { Type: schema.TypeInt, Computed: true, }, + "change_stream_options_pre_and_post_images_expire_after_seconds": { + Type: schema.TypeInt, + Computed: true, + }, }, }, } @@ -248,6 +252,11 @@ func SchemaAdvancedConfig() *schema.Schema { Optional: true, Computed: true, }, + "change_stream_options_pre_and_post_images_expire_after_seconds": { + Type: schema.TypeInt, + Optional: true, + Default: -1, + }, }, }, } @@ -446,25 +455,29 @@ func expandBiConnectorConfig(d *schema.ResourceData) *admin.BiConnector { return nil } -func flattenProcessArgs(p *admin20240530.ClusterDescriptionProcessArgs) []map[string]any { - if p == nil { +func flattenProcessArgs(p20240530 *admin20240530.ClusterDescriptionProcessArgs, p *admin.ClusterDescriptionProcessArgs20240805) []map[string]any { + if p20240530 == nil { return nil } - return []map[string]any{ + flattenedProcessArgs := []map[string]any{ { - "default_read_concern": p.GetDefaultReadConcern(), - "default_write_concern": p.GetDefaultWriteConcern(), - "fail_index_key_too_long": p.GetFailIndexKeyTooLong(), - "javascript_enabled": p.GetJavascriptEnabled(), - "minimum_enabled_tls_protocol": p.GetMinimumEnabledTlsProtocol(), - "no_table_scan": p.GetNoTableScan(), - "oplog_size_mb": p.GetOplogSizeMB(), - "oplog_min_retention_hours": p.GetOplogMinRetentionHours(), - "sample_size_bi_connector": p.GetSampleSizeBIConnector(), - "sample_refresh_interval_bi_connector": p.GetSampleRefreshIntervalBIConnector(), - "transaction_lifetime_limit_seconds": 
p.GetTransactionLifetimeLimitSeconds(), + "default_read_concern": p20240530.GetDefaultReadConcern(), + "default_write_concern": p20240530.GetDefaultWriteConcern(), + "fail_index_key_too_long": p20240530.GetFailIndexKeyTooLong(), + "javascript_enabled": p20240530.GetJavascriptEnabled(), + "minimum_enabled_tls_protocol": p20240530.GetMinimumEnabledTlsProtocol(), + "no_table_scan": p20240530.GetNoTableScan(), + "oplog_size_mb": p20240530.GetOplogSizeMB(), + "oplog_min_retention_hours": p20240530.GetOplogMinRetentionHours(), + "sample_size_bi_connector": p20240530.GetSampleSizeBIConnector(), + "sample_refresh_interval_bi_connector": p20240530.GetSampleRefreshIntervalBIConnector(), + "transaction_lifetime_limit_seconds": p20240530.GetTransactionLifetimeLimitSeconds(), }, } + if p != nil { + flattenedProcessArgs[0]["change_stream_options_pre_and_post_images_expire_after_seconds"] = p.GetChangeStreamOptionsPreAndPostImagesExpireAfterSeconds() + } + return flattenedProcessArgs } func FlattenAdvancedReplicationSpecsOldSDK(ctx context.Context, apiObjects []admin20240530.ReplicationSpec, zoneNameToZoneIDs map[string]string, rootDiskSizeGB float64, tfMapObjects []any, @@ -738,44 +751,45 @@ func getAdvancedClusterContainerID(containers []admin.CloudProviderContainer, cl return "" } -func expandProcessArgs(d *schema.ResourceData, p map[string]any) admin20240530.ClusterDescriptionProcessArgs { - res := admin20240530.ClusterDescriptionProcessArgs{} +func expandProcessArgs(d *schema.ResourceData, p map[string]any) (admin20240530.ClusterDescriptionProcessArgs, admin.ClusterDescriptionProcessArgs20240805) { + res20240530 := admin20240530.ClusterDescriptionProcessArgs{} + res := admin.ClusterDescriptionProcessArgs20240805{} if _, ok := d.GetOkExists("advanced_configuration.0.default_read_concern"); ok { - res.DefaultReadConcern = conversion.StringPtr(cast.ToString(p["default_read_concern"])) + res20240530.DefaultReadConcern = conversion.StringPtr(cast.ToString(p["default_read_concern"])) } if _, ok := d.GetOkExists("advanced_configuration.0.default_write_concern"); ok { - res.DefaultWriteConcern = conversion.StringPtr(cast.ToString(p["default_write_concern"])) + res20240530.DefaultWriteConcern = conversion.StringPtr(cast.ToString(p["default_write_concern"])) } if _, ok := d.GetOkExists("advanced_configuration.0.fail_index_key_too_long"); ok { - res.FailIndexKeyTooLong = conversion.Pointer(cast.ToBool(p["fail_index_key_too_long"])) + res20240530.FailIndexKeyTooLong = conversion.Pointer(cast.ToBool(p["fail_index_key_too_long"])) } if _, ok := d.GetOkExists("advanced_configuration.0.javascript_enabled"); ok { - res.JavascriptEnabled = conversion.Pointer(cast.ToBool(p["javascript_enabled"])) + res20240530.JavascriptEnabled = conversion.Pointer(cast.ToBool(p["javascript_enabled"])) } if _, ok := d.GetOkExists("advanced_configuration.0.minimum_enabled_tls_protocol"); ok { - res.MinimumEnabledTlsProtocol = conversion.StringPtr(cast.ToString(p["minimum_enabled_tls_protocol"])) + res20240530.MinimumEnabledTlsProtocol = conversion.StringPtr(cast.ToString(p["minimum_enabled_tls_protocol"])) } if _, ok := d.GetOkExists("advanced_configuration.0.no_table_scan"); ok { - res.NoTableScan = conversion.Pointer(cast.ToBool(p["no_table_scan"])) + res20240530.NoTableScan = conversion.Pointer(cast.ToBool(p["no_table_scan"])) } if _, ok := d.GetOkExists("advanced_configuration.0.sample_size_bi_connector"); ok { - res.SampleSizeBIConnector = conversion.Pointer(cast.ToInt(p["sample_size_bi_connector"])) + 
res20240530.SampleSizeBIConnector = conversion.Pointer(cast.ToInt(p["sample_size_bi_connector"])) } if _, ok := d.GetOkExists("advanced_configuration.0.sample_refresh_interval_bi_connector"); ok { - res.SampleRefreshIntervalBIConnector = conversion.Pointer(cast.ToInt(p["sample_refresh_interval_bi_connector"])) + res20240530.SampleRefreshIntervalBIConnector = conversion.Pointer(cast.ToInt(p["sample_refresh_interval_bi_connector"])) } if _, ok := d.GetOkExists("advanced_configuration.0.oplog_size_mb"); ok { if sizeMB := cast.ToInt64(p["oplog_size_mb"]); sizeMB != 0 { - res.OplogSizeMB = conversion.Pointer(cast.ToInt(p["oplog_size_mb"])) + res20240530.OplogSizeMB = conversion.Pointer(cast.ToInt(p["oplog_size_mb"])) } else { log.Printf(ErrorClusterSetting, `oplog_size_mb`, "", cast.ToString(sizeMB)) } @@ -783,7 +797,7 @@ func expandProcessArgs(d *schema.ResourceData, p map[string]any) admin20240530.C if _, ok := d.GetOkExists("advanced_configuration.0.oplog_min_retention_hours"); ok { if minRetentionHours := cast.ToFloat64(p["oplog_min_retention_hours"]); minRetentionHours >= 0 { - res.OplogMinRetentionHours = conversion.Pointer(cast.ToFloat64(p["oplog_min_retention_hours"])) + res20240530.OplogMinRetentionHours = conversion.Pointer(cast.ToFloat64(p["oplog_min_retention_hours"])) } else { log.Printf(ErrorClusterSetting, `oplog_min_retention_hours`, "", cast.ToString(minRetentionHours)) } @@ -791,12 +805,16 @@ func expandProcessArgs(d *schema.ResourceData, p map[string]any) admin20240530.C if _, ok := d.GetOkExists("advanced_configuration.0.transaction_lifetime_limit_seconds"); ok { if transactionLifetimeLimitSeconds := cast.ToInt64(p["transaction_lifetime_limit_seconds"]); transactionLifetimeLimitSeconds > 0 { - res.TransactionLifetimeLimitSeconds = conversion.Pointer(cast.ToInt64(p["transaction_lifetime_limit_seconds"])) + res20240530.TransactionLifetimeLimitSeconds = conversion.Pointer(cast.ToInt64(p["transaction_lifetime_limit_seconds"])) } else { log.Printf(ErrorClusterSetting, `transaction_lifetime_limit_seconds`, "", cast.ToString(transactionLifetimeLimitSeconds)) } } - return res + + if _, ok := d.GetOkExists("advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds"); ok { + res.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds = conversion.IntPtr(cast.ToInt(p["change_stream_options_pre_and_post_images_expire_after_seconds"])) + } + return res20240530, res } func expandLabelSliceFromSetSchema(d *schema.ResourceData) ([]admin.ComponentLabel, diag.Diagnostics) { diff --git a/internal/service/advancedcluster/resource_advanced_cluster.go b/internal/service/advancedcluster/resource_advanced_cluster.go index 4a27840365..43a5898424 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster.go +++ b/internal/service/advancedcluster/resource_advanced_cluster.go @@ -472,8 +472,12 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
if ac, ok := d.GetOk("advanced_configuration"); ok { if aclist, ok := ac.([]any); ok && len(aclist) > 0 { - params := expandProcessArgs(d, aclist[0].(map[string]any)) - _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, cluster.GetName(), ¶ms).Execute() + params20240530, params := expandProcessArgs(d, aclist[0].(map[string]any)) + _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, cluster.GetName(), ¶ms20240530).Execute() + if err != nil { + return diag.FromErr(fmt.Errorf(errorConfigUpdate, cluster.GetName(), err)) + } + _, _, err = connV2.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, cluster.GetName(), ¶ms).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigUpdate, cluster.GetName(), err)) } @@ -598,12 +602,16 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replication_specs", clusterName, err)) } - processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + processArgs20240530, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + if err != nil { + return diag.FromErr(fmt.Errorf(errorConfigRead, clusterName, err)) + } + processArgs, _, err := connV2.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigRead, clusterName, err)) } - if err := d.Set("advanced_configuration", flattenProcessArgs(processArgs)); err != nil { + if err := d.Set("advanced_configuration", flattenProcessArgs(processArgs20240530, processArgs)); err != nil { return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "advanced_configuration", clusterName, err)) } @@ -827,9 +835,15 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
if d.HasChange("advanced_configuration") { ac := d.Get("advanced_configuration") if aclist, ok := ac.([]any); ok && len(aclist) > 0 { - params := expandProcessArgs(d, aclist[0].(map[string]any)) - if !reflect.DeepEqual(params, admin20240530.ClusterDescriptionProcessArgs{}) { - _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, ¶ms).Execute() + params20240530, params := expandProcessArgs(d, aclist[0].(map[string]any)) + if !reflect.DeepEqual(params20240530, admin20240530.ClusterDescriptionProcessArgs{}) { + _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, ¶ms20240530).Execute() + if err != nil { + return diag.FromErr(fmt.Errorf(errorConfigUpdate, clusterName, err)) + } + } + if !reflect.DeepEqual(params, admin.ClusterDescriptionProcessArgs20240805{}) { + _, _, err := connV2.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, ¶ms).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigUpdate, clusterName, err)) } diff --git a/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go b/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go index 69bd1d9db3..980d385c66 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go +++ b/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go @@ -150,6 +150,7 @@ func TestMigAdvancedCluster_geoShardedMigrationFromOldToNewSchema(t *testing.T) } func TestMigAdvancedCluster_partialAdvancedConf(t *testing.T) { + mig.SkipIfVersionBelow(t, "1.19.0") // version where change_stream_options_pre_and_post_images_expire_after_seconds was introduced var ( projectID = acc.ProjectIDExecution(t) clusterName = acc.RandomClusterName() diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedcluster/resource_advanced_cluster_test.go index b80a633c00..d063e0980b 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster_test.go +++ b/internal/service/advancedcluster/resource_advanced_cluster_test.go @@ -271,12 +271,12 @@ func TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) { CheckDestroy: acc.CheckDestroyCluster, Steps: []resource.TestStep{ { - Config: configAdvanced(projectID, clusterName, processArgs), - Check: checkAdvanced(clusterName, "TLS1_1"), + Config: configAdvanced(projectID, clusterName, processArgs, nil), + Check: checkAdvanced(clusterName, "TLS1_1", "-1"), }, { - Config: configAdvanced(projectID, clusterNameUpdated, processArgsUpdated), - Check: checkAdvanced(clusterNameUpdated, "TLS1_2"), + Config: configAdvanced(projectID, clusterNameUpdated, processArgsUpdated, conversion.IntPtr(100)), + Check: checkAdvanced(clusterNameUpdated, "TLS1_2", "100"), }, }, }) @@ -1199,7 +1199,11 @@ func checkSingleProviderPaused(name string, paused bool) resource.TestCheckFunc "paused": strconv.FormatBool(paused)}) } -func configAdvanced(projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs) string { +func configAdvanced(projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs, changeStreamOptions *int) string { + changeStreamOptionsString := "" + if changeStreamOptions != nil { + changeStreamOptionsString = fmt.Sprintf(`change_stream_options_pre_and_post_images_expire_after_seconds = %[1]d`, &changeStreamOptions) + } return fmt.Sprintf(` resource "mongodbatlas_advanced_cluster" "test" { project_id = %[1]q @@ -1230,7 +1234,8 
@@ func configAdvanced(projectID, clusterName string, p *admin20240530.ClusterDescr oplog_size_mb = %[7]d sample_size_bi_connector = %[8]d sample_refresh_interval_bi_connector = %[9]d - transaction_lifetime_limit_seconds = %[10]d + transaction_lifetime_limit_seconds = %[10]d + %[11]s } } @@ -1244,22 +1249,23 @@ func configAdvanced(projectID, clusterName string, p *admin20240530.ClusterDescr } `, projectID, clusterName, p.GetFailIndexKeyTooLong(), p.GetJavascriptEnabled(), p.GetMinimumEnabledTlsProtocol(), p.GetNoTableScan(), - p.GetOplogSizeMB(), p.GetSampleSizeBIConnector(), p.GetSampleRefreshIntervalBIConnector(), p.GetTransactionLifetimeLimitSeconds()) + p.GetOplogSizeMB(), p.GetSampleSizeBIConnector(), p.GetSampleRefreshIntervalBIConnector(), p.GetTransactionLifetimeLimitSeconds(), changeStreamOptionsString) } -func checkAdvanced(name, tls string) resource.TestCheckFunc { +func checkAdvanced(name, tls, changeStreamOptions string) resource.TestCheckFunc { return checkAggr( []string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"}, map[string]string{ "name": name, - "advanced_configuration.0.minimum_enabled_tls_protocol": tls, - "advanced_configuration.0.fail_index_key_too_long": "false", - "advanced_configuration.0.javascript_enabled": "true", - "advanced_configuration.0.no_table_scan": "false", - "advanced_configuration.0.oplog_size_mb": "1000", - "advanced_configuration.0.sample_refresh_interval_bi_connector": "310", - "advanced_configuration.0.sample_size_bi_connector": "110", - "advanced_configuration.0.transaction_lifetime_limit_seconds": "300"}, + "advanced_configuration.0.minimum_enabled_tls_protocol": tls, + "advanced_configuration.0.fail_index_key_too_long": "false", + "advanced_configuration.0.javascript_enabled": "true", + "advanced_configuration.0.no_table_scan": "false", + "advanced_configuration.0.oplog_size_mb": "1000", + "advanced_configuration.0.sample_refresh_interval_bi_connector": "310", + "advanced_configuration.0.sample_size_bi_connector": "110", + "advanced_configuration.0.transaction_lifetime_limit_seconds": "300", + "advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds": changeStreamOptions}, resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.#"), resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.replication_specs.#"), resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.name")) diff --git a/internal/service/cluster/model_cluster.go b/internal/service/cluster/model_cluster.go index e68b5d39ec..cbfa3172bf 100644 --- a/internal/service/cluster/model_cluster.go +++ b/internal/service/cluster/model_cluster.go @@ -70,7 +70,7 @@ func flattenPolicyItems(items []matlas.PolicyItem) []map[string]any { } func flattenProcessArgs(p *matlas.ProcessArgs) []map[string]any { - return []map[string]any{ + flattenedProcessArgs := []map[string]any{ { "default_read_concern": p.DefaultReadConcern, "default_write_concern": p.DefaultWriteConcern, @@ -85,6 +85,10 @@ func flattenProcessArgs(p *matlas.ProcessArgs) []map[string]any { "transaction_lifetime_limit_seconds": p.TransactionLifetimeLimitSeconds, }, } + if p.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds != nil { + flattenedProcessArgs[0]["change_stream_options_pre_and_post_images_expire_after_seconds"] = p.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds + } + return flattenedProcessArgs } func flattenLabels(l []matlas.Label) []map[string]any { @@ -272,6 +276,10 @@ func expandProcessArgs(d *schema.ResourceData, p 
map[string]any) *matlas.Process } } + if _, ok := d.GetOkExists("advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds"); ok { + res.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds = conversion.Pointer(cast.ToInt64(p["change_stream_options_pre_and_post_images_expire_after_seconds"])) + } + return res } diff --git a/internal/service/cluster/resource_cluster_migration_test.go b/internal/service/cluster/resource_cluster_migration_test.go index 025d690b85..94e0108858 100644 --- a/internal/service/cluster/resource_cluster_migration_test.go +++ b/internal/service/cluster/resource_cluster_migration_test.go @@ -11,5 +11,6 @@ func TestMigCluster_basicAWS_simple(t *testing.T) { } func TestMigCluster_partial_advancedConf(t *testing.T) { + mig.SkipIfVersionBelow(t, "1.19.0") // version where change_stream_options_pre_and_post_images_expire_after_seconds was introduced mig.CreateAndRunTest(t, partialAdvancedConfTestCase(t)) } diff --git a/internal/service/cluster/resource_cluster_test.go b/internal/service/cluster/resource_cluster_test.go index 4e891aced7..1b45fe89aa 100644 --- a/internal/service/cluster/resource_cluster_test.go +++ b/internal/service/cluster/resource_cluster_test.go @@ -179,6 +179,7 @@ func TestAccCluster_basic_DefaultWriteRead_AdvancedConf(t *testing.T) { SampleRefreshIntervalBIConnector: conversion.Pointer[int64](310), SampleSizeBIConnector: conversion.Pointer[int64](110), TransactionLifetimeLimitSeconds: conversion.Pointer[int64](300), + ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds: conversion.Pointer[int64](113), }), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), @@ -190,6 +191,7 @@ func TestAccCluster_basic_DefaultWriteRead_AdvancedConf(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.oplog_size_mb", "1000"), resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_refresh_interval_bi_connector", "310"), resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_size_bi_connector", "110"), + resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds", "113"), ), }, { @@ -206,6 +208,7 @@ func TestAccCluster_basic_DefaultWriteRead_AdvancedConf(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.oplog_size_mb", "1000"), resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_refresh_interval_bi_connector", "310"), resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_size_bi_connector", "110"), + resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds", "-1"), ), }, }, @@ -1486,11 +1489,12 @@ func configAdvancedConfDefaultWriteRead(projectID, name, autoscalingEnabled stri sample_refresh_interval_bi_connector = %[9]d default_read_concern = %[10]q default_write_concern = %[11]q + change_stream_options_pre_and_post_images_expire_after_seconds = %[12]d } } `, projectID, name, autoscalingEnabled, *p.JavascriptEnabled, p.MinimumEnabledTLSProtocol, *p.NoTableScan, - *p.OplogSizeMB, *p.SampleSizeBIConnector, *p.SampleRefreshIntervalBIConnector, p.DefaultReadConcern, p.DefaultWriteConcern) + *p.OplogSizeMB, *p.SampleSizeBIConnector, *p.SampleRefreshIntervalBIConnector, p.DefaultReadConcern, p.DefaultWriteConcern, *p.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds) } func 
configAdvancedConfPartial(projectID, name, autoscalingEnabled string, p *matlas.ProcessArgs) string { From 1c0b244e97b7950e921b5107cc57eb9040b7db4c Mon Sep 17 00:00:00 2001 From: svc-apix-bot Date: Tue, 10 Sep 2024 09:45:06 +0000 Subject: [PATCH 15/16] chore: Updates CHANGELOG.md for #2528 --- CHANGELOG.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index bbd9dcb6f5..4ff6f5fffa 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,9 +19,15 @@ FEATURES: ENHANCEMENTS: +* data-source/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) +* data-source/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) * data-source/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) * data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +* data-source/mongodbatlas_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) +* data-source/mongodbatlas_clusters: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) +* resource/mongodbatlas_advanced_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) * resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute ([#2539](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2539)) +* resource/mongodbatlas_cluster: Supports change_stream_options_pre_and_post_images_expire_after_seconds attribute ([#2528](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2528)) * resource/mongodbatlas_encryption_at_rest: Adds `aws_kms_config.0.valid`, `azure_key_vault_config.0.valid` and `google_cloud_kms_config.0.valid` attribute ([#2538](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2538)) * resource/mongodbatlas_encryption_at_rest: Adds new `azure_key_vault_config.#.require_private_networking` field to enable connection to Azure Key Vault over private networking ([#2509](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2509)) From cbfbe75595df939382d8f20350a641b26805dda9 Mon Sep 17 00:00:00 2001 From: maastha <122359335+maastha@users.noreply.github.com> Date: Tue, 10 Sep 2024 14:08:51 +0100 Subject: [PATCH 16/16] doc: Adds 1.19.0 release upgrade guide (#2564) * upgrade guide skeleton * initial EAR * include changes from changelog * minor * Update docs/guides/1.19.0-upgrade-guide.md Co-authored-by: Agustin Bettati * add TF modules * stream processor * add tf modules to index * mention replica_set_scaling_strategy in the upgrade guide * mention change_stream_options_pre_and_post_images_expire_after_seconds --------- Co-authored-by: Agustin Bettati Co-authored-by: Oriol Arbusi --- docs/guides/1.19.0-upgrade-guide.md | 35 
+++++++++++++++++++++++++++++ docs/index.md | 3 +++ 2 files changed, 38 insertions(+) create mode 100644 docs/guides/1.19.0-upgrade-guide.md diff --git a/docs/guides/1.19.0-upgrade-guide.md b/docs/guides/1.19.0-upgrade-guide.md new file mode 100644 index 0000000000..0da8b5ba58 --- /dev/null +++ b/docs/guides/1.19.0-upgrade-guide.md @@ -0,0 +1,35 @@ +--- +page_title: "Upgrade Guide 1.19.0" +--- + +# MongoDB Atlas Provider 1.19.0: Upgrade and Information Guide + +The Terraform MongoDB Atlas Provider version 1.19.0 has a number of new and exciting features. + +**New Resources, Data Sources, and Features:** +- You can now [manage customer keys from Azure Key Vault over Private Endpoints](https://www.mongodb.com/docs/atlas/security/azure-kms-over-private-endpoint/#manage-customer-keys-with-azure-key-vault-over-private-endpoints) to further encrypt your data at rest in Atlas with the new `mongodbatlas_encryption_at_rest_private_endpoint` resource and data sources in conjunction with the existing `mongodbatlas_encryption_at_rest` resource. + - In order to configure a private endpoint for your Azure Key Vault, the corresponding `mongodbatlas_encryption_at_rest` resource has to be adjusted by configuring `azure_key_vault_config.require_private_networking` to `true`. This attribute can be updated in place, ensuring the customer-managed keys encryption is never disabled. + - To learn more, please review `mongodbatlas_encryption_at_rest_private_endpoint` [resource documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/encryption_at_rest_private_endpoint). + +- You can now use the new `mongodbatlas_project_ip_addresses` data source that returns the IP addresses in an Atlas project categorized by services. + +- You can now manage [Atlas Stream Processors](https://www.mongodb.com/docs/atlas/atlas-stream-processing/overview/) with the new `mongodbatlas_stream_processor` resource, `mongodbatlas_stream_processor` and `mongodbatlas_stream_processors` data sources. To learn more, please review `mongodbatlas_stream_processor` [resource documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/stream_processor). + +- You can now configure the replica set scaling mode for `mongodbatlas_advanced_cluster` using `replica_set_scaling_strategy`. To learn more, please review `mongodbatlas_advanced_cluster` [resource documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster) + +- You can now configure the minimum pre- and post-image retention time for `mongodbatlas_advanced_cluster` and `mongodbatlas_cluster` using `change_stream_options_pre_and_post_images_expire_after_seconds`. To learn more, please review either [mongodbatlas_advanced_cluster](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#change_stream_options_pre_and_post_images_expire_after_seconds) or [mongodbatlas_cluster](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#change_stream_options_pre_and_post_images_expire_after_seconds) resource documentation. + +**Deprecations and removals:** +- `ip_addresses` attribute has been deprecated in `mongodbatlas_project` resource and data sources in favor of the new `mongodbatlas_project_ip_addresses` [data source](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/data-sources/project_ip_addresses). 
+ + +## New Terraform MongoDB Atlas modules +You can now leverage our [Terraform Modules](https://registry.terraform.io/namespaces/terraform-mongodbatlas-modules) to easily get started with MongoDB Atlas and critical features like [Push-based log export](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/push-based-log-export/mongodbatlas/latest), [Private Endpoints](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/private-endpoint/mongodbatlas/latest), etc. + +### Helpful Links + +* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues) + +* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723) + +* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above. diff --git a/docs/index.md b/docs/index.md index b2d5124fc2..032d98e627 100644 --- a/docs/index.md +++ b/docs/index.md @@ -225,3 +225,6 @@ in our GitHub repo that will help both beginner and more advanced users. Have a good example you've created and want to share? Let us know the details via an [issue](https://github.com/mongodb/terraform-provider-mongodbatlas/issues) or submit a PR of your work to add it to the `examples` directory in our [GitHub repo](https://github.com/mongodb/terraform-provider-mongodbatlas/). + +## Terraform MongoDB Atlas Modules +You can now leverage our [Terraform Modules](https://registry.terraform.io/namespaces/terraform-mongodbatlas-modules) to easily get started with MongoDB Atlas and critical features like [Push-based log export](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/push-based-log-export/mongodbatlas/latest), [Private Endpoints](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/private-endpoint/mongodbatlas/latest), etc. \ No newline at end of file
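For readers following the cluster-related changes in this series, the sketch below shows how the two new settings described in the upgrade guide could appear together in a single `mongodbatlas_advanced_cluster` configuration. It is only an illustration under stated assumptions: the project ID, cluster name, and topology values are placeholders and are not taken from these patches.

```terraform
# Rough sketch (not from the patches above) combining the new cluster settings.
# Project ID, cluster name and topology values are placeholders.
resource "mongodbatlas_advanced_cluster" "example" {
  project_id   = var.project_id
  name         = "example-cluster"
  cluster_type = "REPLICASET"

  # New in this release: replica set scaling mode.
  replica_set_scaling_strategy = "WORKLOAD_TYPE"

  replication_specs {
    region_configs {
      priority      = 7
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
    }
  }

  advanced_configuration {
    # New in this release: minimum pre- and post-image retention in seconds;
    # -1 (the default) keeps images until the matching oplog entries expire.
    change_stream_options_pre_and_post_images_expire_after_seconds = 100
  }
}
```

Leaving `change_stream_options_pre_and_post_images_expire_after_seconds` unset (or at `-1`) preserves the default retention behaviour described in the documentation above, so the explicit value here is only for illustration.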