
Make sure state blob is created before locking or snapshotting is attempted in azurerm backend #27797

Draft · wants to merge 2 commits into main
Conversation

dz-sourced

This fixes #27771

The current implementation attempts to create a new state blob in several different places:

  1. In the Backend.StateMgr() function, but only if the state name is not the default one (it is not clear why).
  2. In the RemoteClient.Lock() function, which creates the blob if the one it is trying to lock does not exist. While this works in most cases, it fails if you use the -lock=false CLI option.
  3. In the RemoteClient.Put() function: if the blob is not found, it ignores the 404 error from GetProperties and lets a new blob be created. But if you configure your azurerm backend with snapshot=true, it may try to snapshot a blob that does not exist yet.

One question I have is how to indicate a condition that should never happen in the RemoteClient.Lock() function. If the blob doesn't exist at this point, something went really wrong or someone deleted the blob manually. Right now I just return an empty lockId and the err, but maybe there is a better way to handle such conditions?

I'd like to make this solution better aligned with other backend implementations, notably S3 and PostgreSQL. Those implementations use the Backend.Workspaces() function to get a list of existing states and then create a new empty state if none exists. In the azurerm implementation, the Workspaces() function doesn't seem to return an existing state because of some strangeness in how it constructs a prefix. That function probably needs to be refactored as well, but I think that should be a separate PR, since I haven't spent any time understanding the workspaces-related code.

@hashicorp-cla

hashicorp-cla commented Feb 16, 2021

CLA assistant check
All committers have signed the CLA.

@codecov

codecov bot commented Feb 16, 2021

Codecov Report

Merging #27797 (0b20045) into master (a351053) will increase coverage by 0.01%.
The diff coverage is 0.00%.

Impacted Files Coverage Δ
backend/remote-state/azure/backend_state.go 0.00% <0.00%> (ø)
backend/remote-state/azure/client.go 0.00% <0.00%> (ø)
internal/providercache/dir.go 67.34% <0.00%> (-6.13%) ⬇️
terraform/node_resource_plan.go 96.11% <0.00%> (-1.95%) ⬇️
states/statefile/version4.go 58.19% <0.00%> (+0.23%) ⬆️

Base automatically changed from master to main February 24, 2021 18:01
Successfully merging this pull request may close these issues.

azurerm remote state backend fails to initialize properly when snapshot=true and lock=false