
Lock file generation is inconsistent #31194

Closed
tmccombs opened this issue Jun 6, 2022 · 3 comments
Labels
bug duplicate issue closed because another issue already tracks this problem

Comments

@tmccombs
Contributor

tmccombs commented Jun 6, 2022

Terraform Version

Terraform v1.1.7
on linux_amd64

Terraform Configuration Files

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.9"
    }
  }
  required_version = ">= 0.13"
}

Expected Behavior

terraform init, or terraform init -upgrade should generate all necessary hashes, and running terraform init even on another host shouldn't change the .terraform.lock.hcl file.

Actual Behavior

Sometimes when I run terraform init or terraform init -upgrade it will populate the .terraform.lock.hcl file, but the only hash in the hashes array will be the h1 hash. Then later, someone else runs terraform init, and it adds a bunch of zh hashes to the lock file. I'm not entirely sure what situation leads to this. Maybe something to do with the provider already being cached?
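For illustration, the difference looks roughly like this in .terraform.lock.hcl (the hash values below are shortened placeholders, not real checksums):

```hcl
# Before: only the locally computed h1 hash is recorded
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.9.0"
  constraints = "~> 4.9"
  hashes = [
    "h1:AAAA...placeholder...=",
  ]
}

# After someone else runs terraform init: registry zh hashes appear too
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.9.0"
  constraints = "~> 4.9"
  hashes = [
    "h1:AAAA...placeholder...=",
    "zh:1111...placeholder...",
    "zh:2222...placeholder...",
  ]
}
```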

Since we check the .terraform.lock.hcl files into our git repo, this can be pretty annoying: someone makes an unrelated change (say, updating a provider) and it also rewrites the lock file.

Running terraform providers lock will also add the zh hashes.

Steps to Reproduce

  1. terraform init or terraform init -upgrade on one machine
  2. terraform init on a separate host. This only happens sometimes, and I don't know exactly what conditions trigger it.
  3. Alternatively, running terraform providers lock will also add a bunch of zh hashes.
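One way to pre-populate all hashes in a single step, so that later init runs on other machines make no changes, is to run terraform providers lock with every platform your team uses. A sketch (the platform list here is an example, not a recommendation):

```shell
# Record both h1 and zh hashes for each platform in one pass
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```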

References

Probably related to #27811

@tmccombs tmccombs added bug new new issue not yet triaged labels Jun 6, 2022
@apparentlymart
Contributor

Hi @tmccombs! Thanks for reporting this.

This is indeed essentially a restatement of the other issue you linked to, #27811.

The crux of the problem is that if you customise Terraform's provider installation settings in a way that prevents it from talking to the provider's origin registry then Terraform can't get the signed set of checksums created by the provider author and so it must fall back on the less thorough behavior of calculating a checksum locally from whatever single package it does have access to.

The main way to avoid this is to use the default provider installation strategy (no mirrors or caching at all) during development, and reserve the special settings only for situations such as running Terraform in an automated pipeline, where the configuration is treated as read-only and so there should be no changes to the lock file. You can reinforce the latter by having the automation run terraform init -lockfile=readonly, which tells Terraform to either use the lock file as it already is or to return an error if that isn't possible, prompting you to then fix the lock file in your development environment before trying again.
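In a pipeline, the read-only check described above might look like this (a sketch; the exact CI wiring is up to you):

```shell
# Fail the pipeline instead of silently rewriting the lock file
terraform init -lockfile=readonly
```

If the lock file is missing required hashes, this exits with an error, prompting you to regenerate the lock file in your development environment and commit it.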

As this situation is already represented in a number of issues, I don't think this new one will cause any different outcome than the others would. There is still a threat-modelling question to be answered: is trusting a mirror or cache as a source for the official checksums of a provider valid, or does it undermine the checksum scheme by allowing a hypothetical attacker to inject rogue checksums into the system that would then be trusted by downstream users of the lock file? Since #27811 is already asking that question, I think we can consider this one a duplicate of it, and so I'm going to close this.

Thanks again!

@apparentlymart apparentlymart closed this as not planned (duplicate) Jun 7, 2022
@tmccombs
Contributor Author

tmccombs commented Jun 7, 2022

The main way to avoid this is to use the default provider installation strategy (no mirrors or caching at all) during development

That isn't a great option. In development I have a large number of projects that use the same providers, and running init without a cache makes it substantially slower (at least on the first run) and significantly increases disk usage.
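For reference, the development-time cache being discussed here is typically enabled via the plugin_cache_dir setting in the CLI configuration file (e.g. ~/.terraformrc on Linux), which is one of the installation customizations that can lead Terraform to install a provider from the cache rather than its origin registry:

```hcl
# ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```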

@crw crw added duplicate issue closed because another issue already tracks this problem and removed new new issue not yet triaged labels Jun 10, 2022
@github-actions
Contributor

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 10, 2022