[Bug]: aws_lambda_function try to update qualified_arn every time #33383
Hey @eduardocque 👋 Thank you for taking the time to raise this! I noticed that in your configuration, this resource is within a module. Is that module dependent on any other changes in the configuration? If so, that might be delaying the data source reads until apply time, which might explain this behavior. If you can supply them, debug logs (redacted as needed) might help us look into this a bit more too.
Hi @justinretzolk, the module is very simple and completely independent; it basically just groups a few resources so they can be reused. Full `main.tf` (module):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

data "archive_file" "lambda" {
  type        = "zip"
  source_file = var.source_path
  output_path = "./builds/${var.name}.zip"
}

resource "aws_iam_role" "lambda_role" {
  name = "lambda-${var.project_name}-${var.name}-LambdaRole"
  path = "/"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : [
            "lambda.amazonaws.com",
            "edgelambda.amazonaws.com"
          ]
        },
        "Effect" : "Allow",
        "Sid" : ""
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_execution" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_cloud_watch" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
}

resource "aws_iam_role_policy_attachment" "lambda_inst_role_attc_dynamodb" {
  count      = var.hasDynamoDB ? 1 : 0
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

# locals {
#   environment_map = var.environment[*]
# }

resource "aws_lambda_function" "lambda_function" {
  filename         = data.archive_file.lambda.output_path
  function_name    = var.name
  description      = var.description
  role             = aws_iam_role.lambda_role.arn
  handler          = "index.handler"
  source_code_hash = data.archive_file.lambda.output_base64sha256
  runtime          = var.runtime
  publish          = true

  # dynamic "environment" {
  #   for_each = local.environment_map
  #   content {
  #     variables = environment.value
  #   }
  # }
}
```

`variables.tf`:

```hcl
variable "project_name" {
  description = "Plitzi Project Name"
  type        = string
  default     = "plitzi"
}

variable "name" {
  description = "Lambda Name"
  type        = string
  default     = ""
}

variable "description" {
  description = "Lambda Description"
  type        = string
  default     = ""
}

variable "runtime" {
  description = "Lambda Runtime"
  type        = string
  default     = "nodejs18.x"
}

variable "source_path" {
  description = "Lambda Source Path"
  type        = string
  default     = ""
}

variable "environment" {
  description = "Lambda Environment Variables"
  type        = map(string)
  default     = {}
}

variable "hasDynamoDB" {
  description = "Lambda has DynamoDB"
  type        = bool
  default     = false
}
```

This is how the module is called:

```hcl
module "lambda_deployment_redirect" {
  source       = "./modules/lambda"
  source_path  = "./functions/deployment-redirect/index.mjs"
  name         = "DeploymentRedirect"
  description  = "Lambda function to redirect the deployments"
  project_name = var.project_name
  hasDynamoDB  = true

  providers = {
    aws = aws.global
  }
}
```

If you notice, the `environment` code is commented out; that way it works fine. If I uncomment it, that's when the problem starts to occur. If you have more questions, feel free to ask.
See @antonbabenko's PR for a fix that needs to be made in the provider. The fields are marked optional in the docs, but leaving them out makes future applies end up in a constant state of drift.
Thank you @datfinesoul - that helped! (That is, setting all the optional attributes.)
I am experiencing this issue too. Like @eduardocque, I also have this problem triggered from an aws_lambda_function in a module, and all 4 logging_config attributes are configured per @antonbabenko. Terraform 1.9.2 & hashicorp/aws v5.61.0. My 'workaround' is just to have a lifecycle policy in place ignoring these changes. It's OK for now, but long term I'm not super happy with it; I've been through the debug logs and couldn't identify what was triggering the calculation of the resource information.
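The lifecycle workaround mentioned above could look like the following minimal sketch. Note that which arguments actually need ignoring is an assumption here; `ignore_changes` only applies to configurable arguments (not computed attributes like `qualified_arn`), so the candidates are the blocks that trigger the drift in this thread, such as `environment` or `logging_config`:

```hcl
resource "aws_lambda_function" "lambda_function" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Assumed attribute list: suppress the perpetual diff reported in this
    # issue by ignoring changes to the blocks that appear to trigger it.
    ignore_changes = [
      environment,
      logging_config,
    ]
  }
}
```

This hides the drift rather than fixing its cause, which is why the commenter above is not happy with it long term.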
You also now get this in the output from
I just came across this issue while working with an image-based package Lambda resource. The problem only arose when setting the attribute in question. This fixed it, without needing to set up a band-aid lifecycle condition:
In short, if using this configuration, the drift appears. If it is not a bug, it should at least be mentioned in the documentation.
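For reference, the "set all the optional fields explicitly" fix discussed above could be sketched like this. The attribute names come from the `aws_lambda_function` `logging_config` block; the specific values (log group name, levels) are illustrative assumptions, and the log levels are only valid when `log_format = "JSON"`:

```hcl
resource "aws_lambda_function" "example" {
  # ... function_name, role, handler, runtime, etc. ...

  logging_config {
    # Explicitly set all four optional fields so the provider does not
    # see a diff between the config and the API response on every plan.
    log_format            = "JSON"
    log_group             = "/aws/lambda/example" # assumed name
    application_log_level = "INFO"
    system_log_level      = "INFO"
  }
}
```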
### Terraform Core Version

1.5.5

### AWS Provider Version

5.16.1

### Affected Resource(s)

aws_lambda_function

### Expected Behavior

After running `plan` or `apply`, if I haven't made any change to my Lambda function code or environment variables, it should not try to deploy the function again over and over.

### Actual Behavior

Each time I run `plan` or `apply`, it tries to update `qualified_arn` and `qualified_invoke_arn`, even if I haven't changed the code or environment variables, or the environment variables are empty.

### Relevant Error/Panic Output Snippet

### Terraform Configuration Files

### Steps to Reproduce

I'm just running `terraform plan`, and each time I do, the previous output happens. The first time it's fine, because we have to apply the changes, but if you run `terraform plan` again after that, you will notice the previous output.

### Debug Output

No response

### Panic Output

No response

### Important Factoids

No response

### References

No response

### Would you like to implement a fix?

None