Remote Backend Configuration #93
@dashaun something that's always annoyed me about the Hashicorp HCL syntax is that you can't tell at a glance whether something is an array. In this case it looks like workspaces is not an array (despite the 's'), but an object with either a name or a prefix attribute.
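In the pipeline YAML that means workspaces should be a map rather than a list, roughly along these lines (the org and workspace names are placeholders):

  backend_type: remote
  backend_config:
    organization: my-org
    workspaces:
      name: my-workspace    # a single map with either name or prefix, not a list of maps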
Same result:
Looking at the Terraform code, you can try setting
This is a new workspace, so that's the output that I would expect.
After a bit more digging I think you'll have to upgrade to Terraform 0.12+ to avoid the issue. Under the hood the terraform-resource converts the YAML in your pipeline config to JSON, which is passed to
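As a rough illustration (names are placeholders), a backend_config of

  organization: my-org
  workspaces:
    name: my-workspace

comes out as JSON along the lines of {"organization": "my-org", "workspaces": {"name": "my-workspace"}}, and it's that nested workspaces block in JSON form that Terraform 0.11 appears to mishandle.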
I've upgraded to 0.12.3. The source.env.TF_LOG: DEBUG doesn't appear to work, so the only output I get
It's the same result with the dash and without.
@dashaun well, we got a little farther at least. What happens if you run
Not sure if this helps, but I also ran across this issue with a project that was using terraform-resource version latest. The last time the pipeline ran, about 50 days ago, it succeeded, but it stopped working yesterday when we attempted to run it again, with a similar error to the one above. I did a little digging and found that starting with version
My apologies, it would be version 0.11.14, so yes, I agree with @ljfranklin that something is different after upgrading to Terraform 0.12.x. I tested again and 0.12.1 through 0.12.4 result in the same error.
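For anyone else bitten by version latest: pinning the resource_type tag avoids picking up a new Terraform unexpectedly. If I remember right, the image tags track the bundled Terraform version, so a pin would look roughly like:

  resource_types:
  - name: terraform
    type: docker-image
    source:
      repository: ljfranklin/terraform-resource
      tag: "0.11.14"   # pin instead of latest; use whichever Terraform version the pipeline was written against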
@dashaun if you're still having trouble with this, I just pushed this commit to the
Not the OP, but struggling to get the remote backend with terraform cloud working at all. There seems to be an issue upstream, with a hack:
I've tried various combinations, with just setting the workspace as a name vs prefix.
What I'd really like to be doing is generate_random_name: true and then create the workspace on the fly, but for testing purposes I made it static.
Added TF_WORKSPACE: one
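So the source is roughly (illustrative; the token here is a placeholder):

  source:
    backend_type: remote
    backend_config:
      organization: secureweb
      token: ((tf_cloud_token))
      workspaces:
        prefix: prefix-allan-test-
    env:
      TF_WORKSPACE: one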
Results in:
@allandegnan The resource assumes a workspace named default exists.
I had the same thought but couldn't get it to work either.

adegnan@laptop:~/Projects/blah/allan-test (ad/addingVersion)$ terraform workspace new default
default workspace not supported
You can create a new workspace with the "workspace new" command.

adegnan@laptop:~/Projects/blah/allan-test (ad/addingVersion)$ terraform workspace new prefix-allan-test-default
Created and switched to workspace "prefix-allan-test-default"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Returns:

2020/04/21 13:00:40 terraform init command failed.
Error: exit status 1
Output:
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Error loading state: default workspace not supported
You can create a new workspace with the "workspace new" command

I might have misunderstood the docs, but I think the default magic workspace only applies to local and not remote. In any event, I also added "default" via the GUI in app.terraform.io, and that didn't help the issue either.
So I forked the repo and made a small hacky change: master...secureweb:bypassInitSelection

Unfortunately, my plan action errored with the following:

▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Plan ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
Error: Saving a generated plan is currently not supported
The "remote" backend does not support saving the generated execution plan
locally at this time.
Error: Run variables are currently not supported
The "remote" backend does not support setting run variables at this time.
Currently the only to way to pass variables to the remote backend is by
creating a '*.auto.tfvars' variables file. This file will automatically be
loaded by the "remote" backend when the workspace is configured to use
Terraform v0.10.0 or later.
Additionally you can also set variables on the workspace in the web UI:
https://app.terraform.io/app/secureweb/prefix-allan-test-one/variables
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2020/04/21 13:34:43 Plan Error: Failed to run Terraform command: exit status 1
Errors are:
Manually setting the backend to local in Terraform Cloud (I don't really want to do this, because it sort of negates part of the point of using TFE and means I can't generate workspaces on the fly, but whatever, it'll work for "now"):

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.example will be created
+ resource "aws_instance" "example" {
+ ami = "ami-7ad7c21e"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
<snip>
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2020/04/21 13:58:29 Plan Error: Failed to run Terraform command: exit status 1

That's with TF_LOG=trace, which isn't super helpful. But looking in Terraform Cloud, I can see the following:

prefix-allan-test-one-plan Terraform v0.12.24
Configuring remote state backend...
Initializing Terraform configuration...
Setup failed: Failed terraform init (exit 1): <nil>
Output:
2020/04/21 13:58:26 [DEBUG] Using modified User-Agent: Terraform/0.12.24 TFC/b6160e7930
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins...
Provider "stateful" not available for installation.
A provider named "stateful" could not be found in the Terraform Registry.
This may result from mistyping the provider name, or the given provider may
be a third-party provider that cannot be installed automatically.
In the latter case, the plugin must be installed manually by locating and
downloading a suitable distribution package and placing the plugin's executable
file in the following directory:
terraform.d/plugins/linux_amd64
Terraform detects necessary plugins by inspecting the configuration and state.
To view the provider versions requested by each module, run
"terraform providers".
Error: no provider exists with the given name

Okay...? Change prefix-allan-test-one-plan to local in Terraform Cloud. The job passes, and the terraform resource get completes, but when I try to refer to the resource elsewhere it doesn't return anything.
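Side note on the "stateful" provider error above: per that init output, the third-party provider binary has to be placed manually under terraform.d/plugins/linux_amd64 relative to wherever init runs, e.g. (the file name is a placeholder):

  terraform.d/
    plugins/
      linux_amd64/
        terraform-provider-stateful_vX.Y.Z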
@allan-degnan-rft @allandegnan I don't have any experience using the Terraform Enterprise tooling, but I'd be open to a PR that fixes the issues you described. So far it sounds like you'd need to fix the following:
Understood. For the record, hacking got me to part 3 (I could have sworn I already posted it), which generates this from Cloud:
That said, according to the documentation plans are only speculative, which essentially means I'd need to lock, plan, run any tests I want against it, apply (with a new plan), and unlock, hoping that my lock did the job. :(
Guess you'd also have to implement the locking API calls in the resource. Not great. I'm happy to talk through any implementation ideas, but it does seem like a fair bit of work to support the Enterprise flow in the resource, unfortunately.
Actually, thinking about it, we don't need to implement locking, probably just documentation. Enterprise Workflow:
Hopefully I'll have time for a PR; unsure at the moment, but I figure at least discussing the problem helps anyone else driving by too. Will get back to you. :)
I tried to map the backend_config to one that works:
Then I get this response:
And if I add another line to workspaces like this:
I get this response:
This feels like a bug, but I'm not convinced.
I might have been staring at this same issue for too long.