
[Bug]: migrating snowflake_user to snowflake_service_user forces recreation #3216

Closed
jrobison-sb opened this issue Nov 21, 2024 · 8 comments
Labels: bug (Used to mark issues with provider's incorrect behavior)

@jrobison-sb (Contributor) commented Nov 21, 2024

Terraform CLI Version

v1.9.1

Terraform Provider Version

v0.98.0

Company Name

No response

Terraform Configuration

resource "snowflake_user" "old" {
  ...
}

resource "snowflake_service_user" "new" {
  ...
}

Category

category:import

Object type(s)

resource:user

Expected Behavior

I should be able to migrate from the old snowflake_user resource to the new snowflake_service_user resource by way of terraform state rm && terraform import, with the end result being zero terraform plan diffs, zero resource recreations, and zero effect on the deployed resources. The only changes should be entirely within the HCL and the Terraform state.

Actual Behavior

  # snowflake_service_user.new must be replaced
-/+ resource "snowflake_service_user" "new" {
      + user_type                                     = "<changed externally>" # forces replacement
        ...
}

Steps to Reproduce

  1. Use HCL similar to what is seen above.
  2. terraform state rm the old resource to delete it from the state to start the migration
  3. terraform import the new resource to import it into state to finish the migration
  4. terraform plan to see that your new resource will be re-created, defeating the purpose of the migration.
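
For reference, a minimal sketch of steps 2 through 4 (SOMEUSER is a placeholder; check the provider's documentation for the exact import ID format):

terraform state rm snowflake_user.old
terraform import snowflake_service_user.new SOMEUSER
terraform plan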

How much impact is this issue causing?

Medium

Logs

No response

Additional Information

I see the same behavior when migrating from snowflake_user to snowflake_legacy_service_user too.

Using jq to inspect my state file from prior to this migration, I see "user_type": "" as one of the attributes.
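
(Roughly something along these lines; the selectors depend on your resource addresses:)

terraform state pull | jq '.resources[] | select(.type == "snowflake_user" and .name == "old") | .instances[].attributes.user_type'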

#3119

Would you like to implement a fix?

  • Yeah, I'll take it 😎
@jrobison-sb added the bug label Nov 21, 2024
@sfc-gh-jcieslak (Collaborator)

Hey 👋
I'll take a closer look at it next week. We'll try to make sure the fix is in the next provider release.

@sfc-gh-jcieslak self-assigned this Nov 22, 2024
@sfc-gh-jcieslak (Collaborator)

Hey @jrobison-sb
I may be missing some information, so correct me if I did the migration differently than in the described case. I migrated from 0.97.0 to 0.98.0 following the steps above and got the same recreation plan pointing at user_type. Doing the migration again with one extra step made it work: before importing, run alter user "<user_name>" set type = service; in a worksheet, and then import the user into the service user resource. The same applies to the legacy service user, just with the other user type. Please let me know if that resolves your case.

Note that after the import you may still get a plan showing other fields changing; that's because different user types have different default values. If you want the old values to stay, you have to set them explicitly in the configuration.
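
For example, roughly (SOMEUSER is a placeholder; run the variant matching the target resource type, before the terraform import):

alter user "SOMEUSER" set type = service;           -- for snowflake_service_user
alter user "SOMEUSER" set type = legacy_service;    -- for snowflake_legacy_service_user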

@sfc-gh-jcieslak (Collaborator)

Hey @jrobison-sb
Could you confirm that the approach from my previous comment works for you and that no provider changes are required? Thank you.

@jrobison-sb (Contributor, Author)

Hi @sfc-gh-jcieslak, I'll come back to this next week and let you know how it went. Thanks.

@jrobison-sb (Contributor, Author)

@sfc-gh-jcieslak, your suggestion of alter user ... set type = service did indeed unblock the original issue reported here. Thanks for that.

But as part of pushing ahead with this, I ran into another related problem with migrating to the new snowflake_legacy_service_user resource. I'll report it here since it's all part of the same goal for me, but if you prefer me to open a separate issue I'm happy to do so.

I need to migrate from the old resource type to the new resource type, like this:

resource "snowflake_user" "old" {
  password = "foo"
}

resource "snowflake_legacy_service_user" "new" {
  password = "foo"
}

So I do the usual resource migration dance mentioned here, and also mentioned elsewhere in the migration guide:

terraform state rm snowflake_user.old
snowsql ... --query "alter user ... set type = service"
terraform import snowflake_legacy_service_user.new ...

The above works fine and the new resource type imports successfully.

But during the import, terraform presumably has no way to import the existing password, so on a subsequent terraform plan I'll see diffs like this, informing me that it's going to reset the password and store it in the state:

+ password                                      = (sensitive value)

And if I go ahead and apply the above change, I run into this error, because the password doesn't comply with a password policy that never applied to the old resource but apparently does apply to the new one:

│ Error: 003002 (28P01): SQL execution error:
│ New password rejected by current password policy. Reason: 'MIN_UPPERCASE'

And even if the password had the desired uppercase (or whatever) characters, password re-use would also presumably be blocked.

I know I could use ignore_changes and just ignore the password for now, then come back later and reset the password to a valid value, but is that the optimal migration path here? Using ignore_changes after this migration seems like it will leave us in a worse position, with Terraform managing fewer infrastructure attributes than it managed before the migration.
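
(A minimal sketch of the ignore_changes workaround mentioned above, for anyone following along:)

resource "snowflake_legacy_service_user" "new" {
  # ... other arguments ...
  password = "foo"

  lifecycle {
    ignore_changes = [password]
  }
}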

Usually I would expect a migration like this to take place entirely within the Terraform state and without any effect on our applications, but apparently that's not how this is playing out.

Thanks for any thoughts on this.

@sfc-gh-jcieslak (Collaborator)

Yeah, importing passwords is not easy due to limitations in the Terraform SDK and in Snowflake's output (passwords are not returned by commands like SHOW and DESC). The current workaround done by the provider is to set the password again after the import.
My guess right now is that there has recently been some kind of change in password policy, either globally in Snowflake or in your Snowflake environment. Passwords that already exist are not re-checked against the policy, but when new users are created or a password is set again, it is validated against the current policy and rejected if it doesn't comply.
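
(If it helps to verify that guess, something along these lines should show the relevant policy; the names below are placeholders and the policy may live in a different database/schema:)

show password policies;
describe password policy MY_DB.MY_SCHEMA.MY_PASSWORD_POLICY;

-- which policy, if any, is attached to a given user:
select *
  from table(my_db.information_schema.policy_references(
    ref_entity_name => 'SOMEUSER',
    ref_entity_domain => 'USER'));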

@jrobison-sb (Contributor, Author)

Thanks for your thoughts on this @sfc-gh-jcieslak. You've been very helpful on this migration, and on a few others I've been doing over the past couple months or so. Thanks.

Though I still wish that Snowflake as a company wouldn't force changes like this on its users without providing a better migration path. I ended up unblocking myself by jq'ing the Terraform state to put the passwords back into the state when they couldn't be imported. Other Snowflake customers might well end up taking downtime for their applications and resetting passwords that they never wanted to reset, but Snowflake made that decision for them.

Anyway, in case anybody else finds themselves in the same spot, here is basically how I did the migration (with no warranty implied, this is potentially dangerous, YMMV):

HCL:

resource "snowflake_user" "old" {
  for_each = toset(compact([
    "SOMEUSER",
  ]))
  password = "foo"
  ...
}

resource "snowflake_legacy_service_user" "new" {
  for_each = toset(compact([
    "SOMEUSER",
  ]))
  password = "foo"
  ...
}

Migration:

# Before doing anything, pull the terraform state. Editing the state is super dangerous, so it's best to keep a backup.
# We'll also need the backup to pull the passwords out of the old state so we can jq them into the new state
# for the legacy service user, since terraform can't import passwords.

terraform state pull > tfstate_before_migration.json

# Do the migration
terraform state rm snowflake_user.old
snowsql ... --query "alter user SOMEUSER set type = legacy_service"
terraform import snowflake_legacy_service_user.new SOMEUSER

# If you have a bunch of other users to rm/import all at once, do them here.

# Okay, all the rm/imports are done. Now we modify the terraform state to restore the old passwords
terraform state pull > tfstate_after_migration.json

# Get the old password from the old state
PASSWORD="$(cat tfstate_before_migration.json | jq -r --arg index_key SOMEUSER '.resources[] | select((.mode == "managed") and (.type == "snowflake_user") and (.name == "old")) | .instances[] | select(.index_key == $index_key) | .attributes.password')"

# Inject the password into the new state for the new legacy service user
cat tfstate_after_migration.json | jq -r --arg password "$PASSWORD" --arg index_key SOMEUSER '(.resources[] | select((.mode == "managed") and (.type == "snowflake_legacy_service_user") and (.name == "new")) | .instances[] | select(.index_key == $index_key)).attributes.password |= $password' > tfstate_after_migration.json.new
mv tfstate_after_migration.json.new tfstate_after_migration.json

# Increment the state serial so we can push it back up to s3
cat tfstate_after_migration.json | jq '.serial += 1' > tfstate_after_migration.json.new
mv tfstate_after_migration.json.new tfstate_after_migration.json

# Push it back up to s3
terraform state push tfstate_after_migration.json

And in case your users use a count instead of a for_each, then you'll want something like --argjson index_key 0 (a numeric index) instead of --arg index_key SOMEUSER (a named index key).
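
For example, the password-extraction line above would become roughly:

PASSWORD="$(cat tfstate_before_migration.json | jq -r --argjson index_key 0 '.resources[] | select((.mode == "managed") and (.type == "snowflake_user") and (.name == "old")) | .instances[] | select(.index_key == $index_key) | .attributes.password')"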

If you mess up any of this in some way, you can try to roll back by way of:

terraform state push -force tfstate_before_migration.json
snowsql ... --query "alter user SOMEUSER set type = null"

@sfc-gh-jcieslak (Collaborator)

Hey, sorry to hear that.
On the bright side, we just released v1.0.0 (https://github.com/Snowflake-Labs/terraform-provider-snowflake/releases/tag/v1.0.0), which means that no breaking changes will be introduced in stable resources without bumping a major version (we are still following semver). A more detailed announcement is coming and will be published as a discussion in our repository, so stay tuned.
