Terraform flags aws_mskconnect_connector resource for recreation when NO changes are made to it #24538
I have just recently learned that HashiCorp has a resource for the MSK Connect connector as of provider version 4.8, aws_mskconnect_connector. I was happy to see this, as I previously had to use a CloudFormation stack to create the connector, which made troubleshooting very difficult.

It deploys the connector just fine, but what I am seeing now is that when I deploy and then re-deploy with NO changes, the plan still always flags the connector for replacement. According to the plan itself, the VPC subnets are maybe the culprit, BUT those NEVER change. I know this is a relatively new service and these are brand spanking new resources, so I was wondering if maybe there is a bug under the hood causing this, or if it's something I'm just missing.

TIA!

  # module.msk-connector.aws_mskconnect_connector.this must be replaced
-/+ resource "aws_mskconnect_connector" "this" {
      ~ arn     = "arn:aws:kafkaconnect:us-east-1::connector/eto-msk-connect-104284-unit-p-13-connector/" -> (known after apply)
      ~ id      = "arn:aws:kafkaconnect:us-east-1:::connector/eto-msk-connect-104284-unit-p-13-connector/" -> (known after apply)
        name    = "eto-msk-connect-104284-unit-p-13-connector"
      ~ version = "C2EUQ1WTGCTBG2" -> (known after apply)
        # (4 unchanged attributes hidden)

      ~ kafka_cluster {
          ~ apache_kafka_cluster {
                # (1 unchanged attribute hidden)

              ~ vpc {
                  ~ subnets = [
                      - "subnet-",
                      - "subnet-",
                      - "subnet-**",
                    ] -> (known after apply) # forces replacement
                    # (1 unchanged attribute hidden)
                }
            }
        }

        # (5 unchanged blocks hidden)
    }

Comments
Hey @1968mnelson 👋 Thank you for taking the time to raise this! So that we have all of the necessary information in order to look into this, can you update the issue description to include all of the information requested in the bug report template?
I'm also seeing this issue. I have not changed anything in my configuration, but it is forcing a replace each time. Terraform version: 1.3.9.
The custom_plugin and service_execution_role_arn are data lookups, but that shouldn't cause the resource to be replaced.
Hey @EthanDavis 👋 Thank you for the additional information, and sorry about that dead link -- we've changed things a bit since my last comment. Are you able to supply an example Terraform configuration as well?
Hi @justinretzolk I am. Please see below, and let me know if you would like additional information.

data "aws_mskconnect_custom_plugin" "mssql_custom_plugin" {
  name = "dlp-mssql-src-plugin"
}

locals {
  cc_clients = tolist(["ccdemo"])
}

data "aws_iam_role" "src_connector_role" {
  name = "dlp-compcare-msk-connect-src-connector-role-${var.environment}"
}

resource "aws_mskconnect_connector" "src_connector" {
  depends_on = [
    aws_msk_cluster.compcare_cluster,
    data.aws_iam_role.src_connector_role,
    data.aws_mskconnect_custom_plugin.mssql_custom_plugin
  ]

  count = length(local.cc_clients)

  name                 = "dlp-compcare-connector-${element(local.cc_clients, count.index)}-src-${var.environment}"
  kafkaconnect_version = "2.7.1"

  capacity {
    autoscaling {
      mcu_count        = 1
      min_worker_count = 1
      max_worker_count = 2

      scale_in_policy {
        cpu_utilization_percentage = 20
      }

      scale_out_policy {
        cpu_utilization_percentage = 80
      }
    }
  }

  connector_configuration = {
    "connector.class" = "io.debezium.connector.sqlserver.SqlServerConnector"
    "tasks.max" = 1
    "sanitize.field.names" = true
    "database.instance" = "MSSQLSERVER"
    "database.encrypt" = false
    "database.user" = "debezium"
    "database.names" = "mydb"
    "database.port" = 1433
    "database.hostname" = "rds-endpoint.us-east-1.rds.amazonaws.com"
    "database.password" = "$${secretManager:dlp-compcare:dbpassword}"
    "topic.prefix" = "dlp.compcare.${element(local.cc_clients, count.index)}"
    "schema.history.internal.kafka.topic" = "dlp.compcare.${element(local.cc_clients, count.index)}.history"
    "transforms.unwrap.type" = "io.debezium.transforms.ExtractNewRecordState"
    "transforms" = "unwrap"
    "schema.include.list" = "dbo"
    "value.converter.schemaAutoRegistrationEnabled" = true
    "value.converter.avroRecordType" = "GENERIC_RECORD"
    "internal.key.converter.schemas.enable" = false
    "key.converter.avroRecordType" = "GENERIC_RECORD"
    "value.converter" = "com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter"
    "key.converter" = "org.apache.kafka.connect.storage.StringConverter"
    "value.converter.registry.name" = "dlp-compcare-schema-registry-${var.environment}"
    "value.converter.region" = var.region
    "key.converter.registry.name" = "dlp-compcare-schema-registry-${var.environment}"
    "key.converter.schemas.enable" = false
    "key.converter.schemaAutoRegistrationEnabled" = true
    "value.converter.schemas.enable" = true
    "internal.value.converter.schemas.enable" = false
    "internal.value.converter" = "com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter"
    "internal.key.converter" = "com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter"
    "value.converter.compatibility" = "BACKWARD"
    "key.converter.compatibility" = "BACKWARD"
    "schema.history.internal.kafka.bootstrap.servers" = aws_msk_cluster.compcare_cluster.bootstrap_brokers_sasl_iam
    "column.exclude.list" = "colum1"
    "table.include.list" = "tabel1"
    "database.history.consumer.sasl.client.callback.handler.class" = "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    "database.history.consumer.sasl.jaas.config" = "software.amazon.msk.auth.iam.IAMLoginModule required;"
    "database.history.consumer.security.protocol" = "SASL_SSL"
    "database.history.consumer.sasl.mechanism" = "AWS_MSK_IAM"
    "database.history.producer.sasl.client.callback.handler.class" = "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    "database.history.producer.sasl.jaas.config" = "software.amazon.msk.auth.iam.IAMLoginModule required;"
    "database.history.producer.security.protocol" = "SASL_SSL"
    "database.history.producer.sasl.mechanism" = "AWS_MSK_IAM"
    "schema.history.internal.consumer.sasl.client.callback.handler.class" = "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    "schema.history.internal.consumer.sasl.jaas.config" = "software.amazon.msk.auth.iam.IAMLoginModule required;"
    "schema.history.internal.consumer.security.protocol" = "SASL_SSL"
    "schema.history.internal.consumer.sasl.mechanism" = "AWS_MSK_IAM"
    "schema.history.internal.producer.sasl.client.callback.handler.class" = "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    "schema.history.internal.producer.sasl.jaas.config" = "software.amazon.msk.auth.iam.IAMLoginModule required;"
    "schema.history.internal.producer.security.protocol" = "SASL_SSL"
    "schema.history.internal.producer.sasl.mechanism" = "AWS_MSK_IAM"
  }

  kafka_cluster {
    apache_kafka_cluster {
      bootstrap_servers = aws_msk_cluster.compcare_cluster.bootstrap_brokers_sasl_iam

      vpc {
        security_groups = var.connector_security_groups
        subnets         = var.cluster_subnets
      }
    }
  }

  log_delivery {
    worker_log_delivery {
      cloudwatch_logs {
        enabled   = true
        log_group = aws_cloudwatch_log_group.compcare_msk_broker_log_group.name
      }
    }
  }

  worker_configuration {
    arn      = aws_mskconnect_worker_configuration.compcare_config.arn
    revision = aws_mskconnect_worker_configuration.compcare_config.latest_revision
  }

  kafka_cluster_client_authentication {
    authentication_type = "IAM"
  }

  kafka_cluster_encryption_in_transit {
    encryption_type = "TLS"
  }

  plugin {
    custom_plugin {
      arn      = data.aws_mskconnect_custom_plugin.mssql_custom_plugin.arn
      revision = data.aws_mskconnect_custom_plugin.mssql_custom_plugin.latest_revision
    }
  }

  service_execution_role_arn = data.aws_iam_role.src_connector_role.arn
}

Thanks for your help!
@EthanDavis -- Thanks for providing that configuration. With the configuration and the logging that you previously provided, I have an idea of what might be happening. It looks like this configuration is within a module.
Is this module dependent on a resource or module that also has changes? If so, the reading of the data sources will be deferred until apply time, which would lead to the behavior you're experiencing. I just worked with someone else on a similar issue and wrote up an example of what was happening there. If I'm correct, I believe you may be in the same boat.
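To illustrate the pattern being described (a hypothetical sketch; the module and file names below are invented for the example and are not taken from the configuration above): when a module containing data sources depends on a resource or module with pending changes, Terraform defers reading those data sources until apply time. Their attributes then appear as (known after apply) in the plan, and any argument that forces replacement on change -- such as the connector's VPC subnets -- produces the -/+ behavior seen here.

# Hypothetical sketch of the problematic shape (names invented).
module "network" {
  source = "./modules/network" # has pending changes in this plan
}

module "msk_connector" {
  source = "./modules/msk-connector"

  # A module-level depends_on makes every resource AND data source
  # inside this module depend on module.network. Because module.network
  # has pending changes, the data sources inside msk-connector cannot
  # be read at plan time, so their attributes -- and anything derived
  # from them -- become "(known after apply)".
  depends_on = [module.network]
}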
@justinretzolk yes, that is my case as well. I'll review your write-up and see if I can update my config to avoid this issue.
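For anyone following along, one common way around this (a hypothetical sketch, not necessarily the fix that was applied here; the variable names are invented for illustration): resolve the values in the root module and pass them into the child module as plain input variables, rather than reading data sources inside a module that carries a module-level depends_on. Only the arguments that change are shown; the rest of the resource stays as in the configuration above.

# Inside the connector module: accept the values as inputs (invented names).
variable "src_connector_role_arn" {
  type = string
}

variable "mssql_custom_plugin_arn" {
  type = string
}

variable "mssql_custom_plugin_revision" {
  type = number
}

resource "aws_mskconnect_connector" "src_connector" {
  # ... unchanged arguments omitted for brevity ...

  plugin {
    custom_plugin {
      arn      = var.mssql_custom_plugin_arn
      revision = var.mssql_custom_plugin_revision
    }
  }

  # Plain input variables are resolved at plan time, so these arguments
  # no longer show as "(known after apply)" in the plan.
  service_execution_role_arn = var.src_connector_role_arn
}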
Hey @EthanDavis 👋 Was going through my GitHub notifications and saw this one -- were you able to find success in modifying the configuration to get around this?