
aws_lambda_function: source_code_hash argument expects a zip file to be present #6513

Closed
adriamorgado opened this issue May 6, 2016 · 19 comments

@adriamorgado

Hi there,

I am creating an aws_lambda_function resource and I am just wondering whether the source_code_hash argument expects the zip file containing the lambda function source code to be present during the execution plan step.

In my case, the lambda resource depends on a null resource that will, in the first place, execute a script to pull the zip files from a Git repository, as follows:

resource "null_resource" "lambda_function" {
    provisioner "local-exec" {
        command = "./get_lambda_functions.sh"
    }
}

resource "aws_lambda_function" "lambda-function" {
    depends_on = [ "null_resource.lambda_function" ]
    filename = "lambda.zip"
    [...]
    source_code_hash = "${base64sha256(file("lambda.zip"))}"
}

However, I am getting the following error when running terraform plan:

Errors:

  * file: open lambda.zip: no such file or directory in:

${base64sha256(file("lambda.zip"))}

Thanks!

@coen-hyde

I have the same issue. I'm constructing the zip with a null_resource, but it seems file references are processed before depends_on dependencies.

@apparentlymart
Contributor

apparentlymart commented May 13, 2016

Yes, that's accurate... all interpolation functions are resolved very early in Terraform's execution, and the file one in particular expects the file to exist at that point.

There is unfortunately not a great way to do what you're trying to do right now. Generally speaking, I would advise using Terraform in a manner where local files are considered read-only and any steps that create or modify files happen outside of Terraform, before it runs.

I actually happen to be working on a very similar configuration myself today, and I approached it as follows:

  • A separate build step is done by a shell script that does a bunch of preparation steps and then runs zip to produce the source code bundle.
  • The Terraform config has a variable for the file path, which gets interpolated into the filename attribute on the aws_lambda_function resource and into source_code_hash, presuming that the previous step already ran and a suitable file path is being passed in.
  • Our automated build/deploy pipeline runs the first of these steps, generating the zip file as lambda.zip in its work directory, and then runs Terraform with -var="lambda_zip_file=lambda.zip".
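The steps above can be sketched roughly like this. The variable name `lambda_zip_file` comes from the comment; everything else (function name, role, handler, runtime) is a placeholder, not the actual configuration:

```hcl
# Hypothetical sketch: the zip is built by an external script before
# `terraform apply -var="lambda_zip_file=lambda.zip"` is run.
variable "lambda_zip_file" {
  description = "Path to the pre-built Lambda deployment package"
}

resource "aws_lambda_function" "example" {
  function_name    = "example"                  # placeholder
  role             = "${var.lambda_role_arn}"   # hypothetical variable
  handler          = "index.handler"            # placeholder
  runtime          = "nodejs4.3"                # placeholder
  filename         = "${var.lambda_zip_file}"
  source_code_hash = "${base64sha256(file("${var.lambda_zip_file}"))}"
}
```

Because the file already exists when Terraform starts, the `file()` interpolation succeeds at plan time.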

A different way to spin it, which is then similar to the workflow you'd do if you were building docker images or AMIs for deployment (e.g. using packer), would be to use the AWS CLI client to push the generated zip file up to S3 and then use the S3 attributes of aws_lambda_function to read it from there, so the local disk isn't used to pass the file from build to deploy at all.
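The S3 variant might look roughly like this (bucket and key names are hypothetical; the build pipeline would first upload the package with something like `aws s3 cp lambda.zip s3://my-build-artifacts/lambda.zip`):

```hcl
# Hypothetical sketch: the deployment package is read from S3 rather
# than from local disk, so Terraform never reads the build output file.
resource "aws_lambda_function" "example" {
  function_name = "example"                # placeholder
  s3_bucket     = "my-build-artifacts"     # hypothetical bucket
  s3_key        = "lambda.zip"
  # ...
}
```

With a versioned bucket, `s3_object_version` can also be set so the function is updated whenever a new package version is uploaded.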

In this particular case you could work around it by not setting source_code_hash, but of course then the function will not get updated when the zip file contents change.

@radeksimko
Member

radeksimko commented May 15, 2016

If @apparentlymart hadn't come in to post his comment, I'd probably have posted almost exactly the same one 😃

Maybe just one more note - @adriamorgado @coen-hyde where/how would you run tests for such Lambda function?

Terraform won't run tests for you, nor will it create the ZIP file (well, it will if you try really hard, but you get what I mean, hopefully). Each Lambda function, or group of related functions, deserves its own repository with all the bells and whistles (CI).

I don't think there's much we can do/fix in Terraform, except possibly removing the ability to define a local file altogether and making the input S3-only 😜 (or just better documenting the use cases and clearly discouraging the use of local files).

@coen-hyde

@apparentlymart what you suggested can't work for what I was trying to do, but I'll solve it another way. The reason I wanted to zip up the lambda function with Terraform is that I was constructing a config file that used Terraform vars specific to the environment I was deploying, and putting that file in the directory before zipping it. I can't create the zip before running Terraform because I won't have the outputs if the infrastructure hasn't been deployed yet. I could do it after, but then I'd have to manage the upload to AWS manually.

What I think I will do is put the env config on S3 and download it every time the lambda is run. This should be OK since it will run infrequently. And now it can have its own repo, with tests ;)
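One way to publish environment config from a Terraform run so the function can fetch it at run time is an `aws_s3_bucket_object` resource; a minimal sketch, with a hypothetical bucket name and an assumed RDS instance as the source of the setting:

```hcl
# Hypothetical sketch: write environment-specific settings to S3 for
# the Lambda function to download each time it runs.
resource "aws_s3_bucket_object" "lambda_env_config" {
  bucket  = "my-config-bucket"   # hypothetical bucket
  key     = "lambda/env.json"
  content = "{\"db_host\": \"${aws_db_instance.example.address}\"}"
}
```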

@apparentlymart
Contributor

apparentlymart commented May 16, 2016

Hmm... making information from the Terraform run available to the Lambda function is an interesting problem, but I suppose not hugely different than the same for EC2 instances, etc... just need to put the config somewhere that the lambda function can find it, and somehow pre-configure the lambda function to know where to look.

However, I agree that for Lambda functions in particular it could be useful to inline settings directly inside them, since doing some sort of lookup to get the settings could end up being a large part of the run time for some Lambda functions, and also annoyingly a Lambda function can't have both VPC access and Internet access if you don't have a NAT gateway. (For S3 this doesn't matter too much I suppose, but in my environment all the interesting settings are in Consul.)

I have often found myself wishing for a way to provide arbitrary metadata along with the zip file... in my cases so far it's been to more easily share the same zip file between multiple different Lambda functions, but this seems like a good reason for that too... not sure if there's anything Terraform can do here without help from the underlying platform, though. 😒

@adriamorgado
Author

@apparentlymart Your approach of creating a separate step in our deployment pipeline so that the zip files are generated beforehand is definitely the way to go, at least for our use case 😄.

@radeksimko Removing the ability to define local files doesn't seem like a good idea to me, as it basically leaves the user with no choice but S3, and you might want to reference a local zip file that, for instance, has been created as part of a build step, without having to upload your Lambda function source code to a specific bucket.
Either way, and as you mentioned, I think it would be useful to get this documented 😁

@denniswebb
Contributor

I threw this utility together so I can simply add the base64sha256 to the Atlas Artifact and bypass having TF calculate it. I haven't fully tested it, but will report back my results.

@denniswebb
Contributor

I have tested this out and it appears to work as needed.

@Zordrak

Zordrak commented Jul 25, 2016

I have got a particularly ugly workaround for this.

Create a 0-byte dummy zip file as the lambda function content file. This gets around the file being "stat"ed on terraform start.

Then, define the content of the lambda function in a template file with the appropriate variables to be interpolated, and have the lambda function explicitly depend on the template file. Define a local-exec provisioner to delete the dummy zip file, write the rendered template output to a file, and then zip that file up under the original dummy filename. When it comes time to upload the zip to create the lambda function, the new file contents are read.

The reason this is so ugly, despite the dummy file, is that the local-exec only runs on creation of the template file. So to ensure the file is correctly populated on each run to update the lambda function, you have to first run a -destroy on the template_file resource.

Ugly. But it works.
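A rough sketch of that workaround, assuming the Terraform of that era allows a provisioner on the template_file resource (all names are hypothetical):

```hcl
# Hypothetical sketch: an empty dummy.zip must already exist so that
# plan-time file() calls on it succeed; the local-exec provisioner then
# replaces it with the real package when the template is (re)created.
resource "template_file" "lambda_source" {
  template = "${file("${path.module}/index.js.tpl")}"

  vars {
    some_setting = "${var.some_setting}"   # hypothetical variable
  }

  provisioner "local-exec" {
    command = "rm -f dummy.zip && echo '${self.rendered}' > index.js && zip dummy.zip index.js"
  }
}
```

Since the provisioner only runs on creation, each update requires destroying (or tainting) the template_file resource first, as described above.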

@jaybocc2

I wrote a provider to solve this use case for myself, as I was not happy with the available workarounds: https://github.com/jaybocc2/terraform-provider-zip

@StyleT

StyleT commented Sep 15, 2016

Looks like this issue will be resolved together with #8144

@jffry

jffry commented Nov 29, 2016

FYI, the introduction of the Archive Provider in v0.7.x makes this a lot easier in the simple case where you don't need to install anything via NPM or do any other build steps.

Here's what I have set up. Changes, additions, and deletions are detected by Terraform when I plan/apply. Maybe this will be useful to you?

On Disk:

lambda.tf
lambda-functions/
  test_lambda/
    index.js
    other-thing.js

Terraform:

#lambda.tf

data "archive_file" "test_lambda_code" {
  type = "zip"
  source_dir = "${path.module}/lambda-functions/test_lambda/"
  output_path = "${path.module}/.terraform/archive_files/test_lambda.zip"
}

resource "aws_lambda_function" "test_lambda" {
  function_name = "test_lambda"
  filename = "${data.archive_file.test_lambda_code.output_path}"
  source_code_hash = "${base64sha256(file("${data.archive_file.test_lambda_code.output_path}"))}"
  # ...
}

@fcorange

@jffry Your approach is an easier way to generate .zip file for Lambda, however it still requires the .zip file be present/pre-generated in the path before running either plan or apply - otherwise Terraform would throw errors when evaluating source_code_hash.

@jffry

jffry commented Nov 30, 2016

@fcorange at least when I tried things out, Terraform correctly figures out that the lambda function depends on the archive file, because I interpolate data.archive_file.test_lambda_code.output_path into the expression that calculates source_code_hash:

//terraform.tfstate
{
  //...
  "aws_lambda_function.test_lambda": {
    "type": "aws_lambda_function",
    "depends_on": [
      "data.archive_file.test_lambda_code"
    ],
    //...
  }
  //...
}

So AFAICT wiping out the .zip and running plan works as expected.

@fcorange

@jffry After upgrading my Terraform from 0.7.6 to 0.7.13, this seems to be working. Thanks!

@Ninir
Contributor

Ninir commented Jan 10, 2017

@fcorange @adriamorgado

Enhancing @jffry's answer, you can even do this:

data "archive_file" "test_lambda_code" {
  type = "zip"
  source_dir = "${path.module}/lambda-functions/test_lambda/"
  output_path = "${path.module}/.terraform/archive_files/test_lambda.zip"
}

resource "aws_lambda_function" "test_lambda" {
  function_name = "test_lambda"
  filename = "${data.archive_file.test_lambda_code.output_path}"
  source_code_hash = "${data.archive_file.test_lambda_code.output_base64sha256}"
  # ...
}

Note the last line, source_code_hash = "${data.archive_file.test_lambda_code.output_base64sha256}", which depends fully on the archive file output.
With this, depends_on is not necessary, and it should work as expected :)

@stack72 I think this can be closed since the Archive File data source and Lambda functions have been improved a lot over the last few months, and the original issue may no longer exist.

@stack72
Contributor

stack72 commented Jan 12, 2017

Closing - thanks for the follow up @Ninir :)

@miere

miere commented Dec 20, 2018

I'm not sure that closing this issue was the right decision, especially when we've been waiting for such a long time to have symlinks working properly in Archive File, and so far nothing has been done but a sincere apology from an embarrassed HashiCorp employee.

idavidmcdonald added a commit to alphagov/gsp that referenced this issue Jan 28, 2019
hashicorp/terraform#6513

File references are processed before depends_on dependencies. This
appears to be causing an error where Terraform cannot apply, as
it cannot find the lambda zip file it expects to be there.

I have followed the approach suggested by
hashicorp/terraform#6513 (comment).
It works fine locally.
@ghost

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 30, 2020