
custom provider with many platform builds fails with localfile on v0.14-rc1 #26901

Closed
jurgenweber opened this issue Nov 12, 2020 · 13 comments
Labels
cli enhancement v0.14 Issues (primarily bugs) reported against v0.14 releases

Comments

@jurgenweber

jurgenweber commented Nov 12, 2020

Terraform Version

Terraform v0.14.0-rc1
+ provider instaclustr/instaclustr/instaclustr v1.6.1
+ provider registry.terraform.io/hashicorp/aws v3.14.1
+ provider registry.terraform.io/hashicorp/helm v1.3.2
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/hashicorp/tfe v0.22.0

Terraform Configuration Files

terraform {
  required_providers {
    instaclustr = {
      version = "1.6.1"
      source  = "instaclustr/instaclustr/instaclustr"
    }
  }
  required_version = ">= 0.13"
}

Debug Output

$ rm -rf .terraform*; tf init; tfp; tf version

......

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

The currently selected workspace (default) does not exist.
  This is expected behavior when the selected workspace did not have an
  existing non-empty state. Please enter a number to select a workspace:

  1. dev-mtribes-sydney-aws

  Enter a value: 1


Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding latest version of hashicorp/tfe...
- Finding hashicorp/aws versions matching ">= 2.0.0, >= 2.68.0"...
- Finding hashicorp/template versions matching ">= 2.0.0, >= 2.1.0"...
- Finding latest version of hashicorp/helm...
- Finding hashicorp/null versions matching ">= 2.0.0, ~> 2.0, >= 2.1.0"...
- Finding hashicorp/kubernetes versions matching ">= 1.11.1"...
- Finding hashicorp/random versions matching ">= 2.1.0, >= 2.2.0"...
- Finding hashicorp/local versions matching ">= 1.2.0, ~> 1.2, >= 1.4.0"...
- Finding instaclustr/instaclustr/instaclustr versions matching "1.6.1"...
- Installing hashicorp/local v1.4.0...
- Installed hashicorp/local v1.4.0 (signed by HashiCorp)
- Installing instaclustr/instaclustr/instaclustr v1.6.1...
- Installed instaclustr/instaclustr/instaclustr v1.6.1 (unauthenticated)
- Installing hashicorp/aws v3.14.1...
- Installed hashicorp/aws v3.14.1 (signed by HashiCorp)
- Installing hashicorp/null v2.1.2...
- Installed hashicorp/null v2.1.2 (signed by HashiCorp)
- Installing hashicorp/kubernetes v1.13.3...
- Installed hashicorp/kubernetes v1.13.3 (signed by HashiCorp)
- Installing hashicorp/random v3.0.0...
- Installed hashicorp/random v3.0.0 (signed by HashiCorp)
- Installing hashicorp/tfe v0.22.0...
- Installed hashicorp/tfe v0.22.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/helm v1.3.2...
- Installed hashicorp/helm v1.3.2 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Success! The configuration is valid.

Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

The remote workspace is configured to work with configuration at
/environment relative to the target repository.

Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/jurgen.weber/checkouts/deltatre/terraform/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
    /Users/jurgen.weber/checkouts/deltatre/terraform

To view this run in a browser, visit:
https://app.terraform.io/app/mtribes/environment-dev-mtribes-sydney-aws/runs/run-eGMkrM8cRojf5JH6

Waiting for the plan to start...

Terraform v0.14.0-rc1
Configuring remote state backend...
Initializing Terraform configuration...

Setup failed: Failed terraform init (exit 1): <nil>

Output:
Initializing modules...

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Reusing previous version of instaclustr/instaclustr/instaclustr from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/tfe from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/helm v1.3.2...
- Installed hashicorp/helm v1.3.2 (signed by HashiCorp)
- Installing instaclustr/instaclustr/instaclustr v1.6.1...
- Installing hashicorp/null v2.1.2...
- Installed hashicorp/null v2.1.2 (signed by HashiCorp)
- Installing hashicorp/random v3.0.0...
- Installed hashicorp/random v3.0.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/tfe v0.22.0...
- Installed hashicorp/tfe v0.22.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.14.1...
- Installed hashicorp/aws v3.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v1.13.3...
- Installed hashicorp/kubernetes v1.13.3 (signed by HashiCorp)
- Installing hashicorp/local v1.4.0...
- Installed hashicorp/local v1.4.0 (signed by HashiCorp)

Error: Failed to install provider

Error while installing instaclustr/instaclustr/instaclustr v1.6.1: the local
package for instaclustr/instaclustr/instaclustr 1.6.1 doesn't match any of the
checksums previously recorded in the dependency lock file (this might be
because the available checksums are for packages targeting different
platforms)

Terraform v0.14.0-rc1
+ provider instaclustr/instaclustr/instaclustr v1.6.1
+ provider registry.terraform.io/hashicorp/aws v3.14.1
+ provider registry.terraform.io/hashicorp/helm v1.3.2
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/hashicorp/tfe v0.22.0

Expected Behavior

I expect terraform to init and run.

Actual Behavior

Fails on init: Terraform records checksums in the lock file it just created, but then reports that the local package doesn't match them.

The interesting part is that it works fine in TF Cloud, but running init and plan locally fails as described above.

Steps to Reproduce

Ensure you have a locally built provider.

$ rm -rf .terraform*
terraform init
terraform validate
terraform plan

Additional Context

I have a custom build of a provider, stored locally. This worked against v0.14-beta2 with no problems.

References

@jurgenweber jurgenweber added bug new new issue not yet triaged labels Nov 12, 2020
@jurgenweber jurgenweber changed the title custom provider with many platform builds fails with localfile on v0.14 custom provider with many platform builds fails with localfile on v0.14-rc1 Nov 12, 2020
@apparentlymart
Contributor

apparentlymart commented Nov 13, 2020

Hi @jurgenweber! Thanks for opening this issue.

Based on what you've shared, I'm guessing that the system where you ran terraform init is a platform other than linux_amd64, and so it's selecting a package for instaclustr/instaclustr/instaclustr that isn't the same one that Terraform Cloud/Enterprise (which is on linux_amd64) would select.

In that case, what you've seen here is an unfortunate but intentional behavior: because this provider isn't coming from a Terraform Registry, Terraform can't rely on the publisher's signatures to determine the correct checksums for other platforms, and so the lock file will by default only include the checksums for your current platform. This is what the parenthetical in the error message is alluding to:

(this might be because the available checksums are for packages targeting different platforms)

The way we're intending to document this situation is to suggest using the new terraform providers lock subcommand to explicitly tell Terraform to populate in the lock file the checksums of local providers that Terraform can't automatically verify, like this:

terraform providers lock -fs-mirror=(your local mirror path) -platform=darwin_amd64 -platform=linux_amd64 instaclustr/instaclustr/instaclustr

For the sake of example above I've assumed that darwin_amd64 and linux_amd64 are the two platforms you've built your custom provider for. You can add as many -platform arguments as you need to cover all of the platforms this configuration might be used on.

The -fs-mirror option tells Terraform where to look for this provider, since by default it will try to fetch checksums from an upstream registry running at the host instaclustr, which obviously won't work in this case.

Once v0.14.0 is final the recommendation would then be to check the updated .terraform.lock.hcl (now containing the fuller set of checksums) into your version control so that future terraform init will work without any extra steps. We've been recommending against checking that file into version control during prerelease testing in case the format needs to change in a future prerelease, but since we expect no further changes between v0.14.0-rc1 and v0.14.0 final this might now be a reasonable thing to do.
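For illustration, a lock-file entry that covers more than one platform might look roughly like this (a hypothetical sketch; the hash strings are placeholders, not real checksums):

```hcl
# Hypothetical .terraform.lock.hcl entry after `terraform providers lock`
# has recorded checksums for both darwin_amd64 and linux_amd64 packages.
provider "instaclustr/instaclustr/instaclustr" {
  version     = "1.6.1"
  constraints = "1.6.1"
  hashes = [
    "h1:placeholder-hash-for-darwin_amd64=",
    "h1:placeholder-hash-for-linux_amd64=",
  ]
}
```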

If you choose not to put that file under version control for now, I think you should still be able to see it work under Terraform Cloud/Enterprise if you run the terraform providers lock command prior to your initial terraform init. The later terraform plan will include your .terraform.lock.hcl in the packet of files uploaded to Terraform Cloud, which will contain the checksums for the linux_amd64 packages and should thus succeed.

This design is a safety/convenience tradeoff which ended up favoring safety, and thus unfortunately requiring an extra step for those using a non-default provider installation configuration.

We may continue to refine that tradeoff in future releases if we find other designs that can increase the convenience while preserving the safety, but this current implementation is the intended design which we are planning to ship in v0.14.0 and so I'm going to relabel this as an "enhancement" to reflect it being a place to discuss potential design changes, and also in the hope that what I've written above will be helpful to anyone else hitting this behavior before we publish the v0.14 version of the website that includes a discussion of this situation.

@apparentlymart apparentlymart added cli enhancement and removed bug new new issue not yet triaged labels Nov 13, 2020
@apparentlymart
Contributor

You can see a draft of the new terraform providers lock documentation, which will appear on the main website after the v0.14.0 final release. However, because the links in our documentation are designed for the published website rather than the sources in Git, the internal links in that draft page won't work.

@jurgenweber
Author

jurgenweber commented Nov 13, 2020

Your guesses are all true: I am on darwin while TF Cloud is linux, and we have both provider builds checked in.

@jurgenweber
Author

I am torn on the idea of the lock file. Right now I am rapidly developing a new environment, so I do not want a lock; I'd rather take all the latest versions, since I can handle any fallout while nothing is in use yet. Later on, I'm not so sure.

So now I need a partial lock file to handle these checksums if I want to keep planning and running Terraform both locally and in TF Cloud?

@jurgenweber
Author

jurgenweber commented Nov 13, 2020

Assuming:

ls ~/checkouts/terraform/environment/terraform.d/plugins/instaclustr/instaclustr/instaclustr/1.6.1/*
/Users/jurgen.weber/checkouts/terraform/environment/terraform.d/plugins/instaclustr/instaclustr/instaclustr/1.6.1/darwin_amd64:
terraform-provider-instaclustr_v1.6.1

/Users/jurgen.weber/checkouts/terraform/environment/terraform.d/plugins/instaclustr/instaclustr/instaclustr/1.6.1/linux_amd64:
terraform-provider-instaclustr_v1.6.1

what should my -fs-mirror value be? I tried a bunch of variations:

$ terraform providers lock -fs-mirror=~/checkouts/terraform/environment/terraform.d -platform=darwin_amd64 -platform=linux_amd64 instaclustr/instaclustr/instaclustr

Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in order to
calculate their checksums: some providers could not be installed:
- instaclustr/instaclustr/instaclustr: cannot search
~/checkouts/terraform/environment/terraform.d: lstat
~/checkouts/terraform/environment/terraform.d: no such file or
directory.

Thanks

@pkolyvas pkolyvas added the v0.14 Issues (primarily bugs) reported against v0.14 releases label Nov 13, 2020
@apparentlymart
Contributor

apparentlymart commented Nov 14, 2020

Hi @jurgenweber,

Based on the error message it seems like your shell isn't expanding ~ in that context, and so that character is passing literally to Terraform. Perhaps it will work better if you use $HOME instead, since environment variable substitutions tend to be less context-sensitive than ~ expansion.
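For illustration, the difference can be seen directly in a shell (a generic sketch, unrelated to Terraform itself; the path is made up):

```shell
# Whether ~ expands inside an argument like -fs-mirror=~/path depends on
# the shell (zsh, the macOS default, passes it through literally there),
# and a quoted ~ always stays literal. $HOME is substituted in every
# POSIX shell, even inside double quotes, so it is the portable choice.
literal="-fs-mirror=~/terraform.d"        # quoted: ~ stays literal
expanded="-fs-mirror=$HOME/terraform.d"   # $HOME always expands

echo "$literal"    # prints the literal string, tilde unexpanded
echo "$expanded"   # prints an absolute path
```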

If you'd like to avoid using the lock file mechanism at all for now, one option would be to use the .terraformignore mechanism to make the remote backend ignore .terraform.lock.hcl when it's preparing the source code archive to upload to Terraform Cloud. Then when Terraform Cloud runs terraform init it will see that the file doesn't exist and create a local lock file for itself, which will include the linux_amd64 hashes that Terraform Cloud requires. That means that the remote Terraform Cloud run won't be able to guarantee to use the same plugins you were using locally, but it sounds like you would find that tradeoff acceptable while you are doing new development.

@jurgenweber
Author

jurgenweber commented Nov 15, 2020

Oh, I get it now... Like I understood, but now I get it. :) Cool.

I will admit, I love it when you pick up my issues/forum posts. Your replies are always so professional and comprehensive.

Sadly, while I have added a .terraformignore file:

$ cat .terraformignore
.terraform.lock.hcl

I am still getting the error.

Error: Failed to install provider

Error while installing instaclustr/instaclustr/instaclustr v1.6.1: the local
package for instaclustr/instaclustr/instaclustr 1.6.1 doesn't match any of the
checksums previously recorded in the dependency lock file (this might be
because the available checksums are for packages targeting different
platforms)

Thank you!

@jurgenweber
Author

while the .terraformignore approach did not work, this did:

# clean the slate
rm -f .terraform.lock.hcl
terraform providers lock -fs-mirror="${HOME}/checkouts/terraform/environment/terraform.d/plugins" -platform=darwin_amd64 -platform=linux_amd64 instaclustr/instaclustr/instaclustr

@apparentlymart
Contributor

apparentlymart commented Nov 17, 2020

Thanks for letting me know about the .terraformignore answer not working! I'm not sure what's going on there but I'll try to reproduce it and see if we can make that work as expected.

With that said, I'm glad that explicitly populating the lock file worked for you for now. I think that's the better solution in the long run anyway, since then you can make sure Terraform Cloud's remote operations mechanism will always use the same provider versions you used locally, which is one of the quirks of Terraform Cloud remote operations that the lock file mechanism was intended to address.

@apparentlymart
Contributor

Hi @jurgenweber,

I've not yet been able to reproduce the problem of not being able to include .terraform.lock.hcl in the .terraformignore file. I suspect there might be something else contributing to the problem that I'm not including in my reproduction.

From reviewing the documentation on .terraformignore I see that it's documented to only be supported in the root directory of the "package" that Terraform builds to upload to Terraform Cloud. When you tried it, did you put .terraformignore in the directory that Terraform Cloud would consider to be the root?

A specific pitfall I'm thinking about here is that if the remote workspace has a sub-directory set as its "working directory" then the root directory would be the path that the working directory is specified relative to. Guessing a bit from your local directory layout as illustrated in what you shared so far, I wonder if your ~/checkouts/terraform directory represents the "root" and the remote workspace is then configured to run in the subdirectory environment. If so, I believe the expected location for .terraformignore would be ~/checkouts/terraform/.terraformignore rather than ~/checkouts/terraform/environment/.terraformignore.
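To make that pitfall concrete, here is a sketch of the guessed layout (hypothetical; it assumes ~/checkouts/terraform is the repository root and environment is the configured working directory):

```
terraform/                  <- repository root: .terraformignore goes here
├── .terraformignore
└── environment/            <- remote workspace "working directory"
    └── main.tf
```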

@apparentlymart apparentlymart added the waiting-response An issue/pull request is waiting for a response from the community label Nov 18, 2020
@jurgenweber
Author

From reviewing the documentation on .terraformignore I see that it's documented to only be supported in the root directory of the "package" that Terraform builds to upload to Terraform Cloud. When you tried it, did you put .terraformignore in the directory that Terraform Cloud would consider to be the root?

Yes, the 'terraform working directory' (as found in TFCloud workspace, general settings).

A specific pitfall I'm thinking about here is that if the remote workspace has a sub-directory set as its "working directory" then the root directory would be the path that the working directory is specified relative to. Guessing a bit from your local directory layout as illustrated in what you shared so far, I wonder if your ~/checkouts/terraform directory represents the "root" and the remote workspace is then configured to run in the subdirectory environment. If so, I believe the expected location for .terraformignore would be ~/checkouts/terraform/.terraformignore rather than ~/checkouts/terraform/environment/.terraformignore.

Right, you hit the nail right on the head. I had it in the 'environment' directory, where indeed the 'terraform' directory is the root of the git repo.

I moved it up to test.

It works as expected, but I think I am going to go with the lock file approach now and just inform the team on how to use it. 👍

As always, thank you very much. Absolute legend! :)

@ghost ghost removed the waiting-response An issue/pull request is waiting for a response from the community label Nov 18, 2020
@apparentlymart
Contributor

Great! Thanks for following up, @jurgenweber.

Since most of what we covered in the discussion here is going to get published on the website as part of the v0.14.0 documentation release, I'm going to close this issue for now and we'll see if any similar questions come up after the v0.14.0 final release, in which case we'll probably add some additional docs based on any recurring themes in those questions.

If you are a person who found this issue while working on a v0.14.0 upgrade and you have run into a problem that the above doesn't answer, please feel free to start a topic in the community forum and I'll be happy to work through things with you there. If there are some common themes to questions then I'll gather them up myself and open a documentation PR.

@ghost

ghost commented Dec 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked as resolved and limited conversation to collaborators Dec 19, 2020