Inconsistent Snowflake state on apply if errors occur #2715
Hey @AndreasHEbcont. Thanks for reaching out to us. There are several resources that behave incorrectly in such a case but, to my knowledge, database and warehouse are not among them. Please provide exact minimal steps so we can reproduce the incorrect behavior.
Hey @sfc-gh-asawicki sure:
1. main.tf (the resource bodies were collapsed in the original report):

```hcl
resource "snowflake_database" "simple" {
resource "snowflake_role" "parent_role" {
resource "snowflake_database_role" "db_role" {
resource "snowflake_grant_database_role" "g" {
resource "snowflake_grant_privileges_to_database_role" "example" {
resource "snowflake_warehouse" "warehouse" {
```

2. Execute a terraform apply with a user that has the following account-level privileges: CREATE DATABASE. This should result in an error, as the account does not have enough privileges to assign the resource_monitor. At this point the state file already differed from the provisioned resources: Terraform managed to create the "SYSADMIN" role and the database but did not add them to the state file.

3. If the resources somehow were provisioned, or the state did not end up out of sync: alter the configuration file to include just the warehouse, and change both its name and its comment in a single terraform apply:

main.tf (body collapsed in the original):

```hcl
resource "snowflake_warehouse" "warehouse" {
```

This should also result in an error and an out-of-sync state file, because Terraform will first try to rename the warehouse and afterwards try to change the comment, no longer being able to find the warehouse under its old name.

Please let me know if you need further information, or of course if the provided configuration is incorrect!
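Since the resource bodies above were lost, here is a minimal sketch of what such a configuration could look like. Every name and argument value below is a placeholder, not the reporter's original, and the argument names follow the provider documentation around v0.88 (they may differ in other versions):

```hcl
# Hypothetical stand-in for the truncated configuration above.
# All identifiers (TEST_DB, TEST_WH, ...) are placeholders.

resource "snowflake_database" "simple" {
  name = "TEST_DB"
}

resource "snowflake_role" "parent_role" {
  name = "TEST_PARENT_ROLE"
}

resource "snowflake_database_role" "db_role" {
  database = snowflake_database.simple.name
  name     = "TEST_DB_ROLE"
}

resource "snowflake_grant_database_role" "g" {
  database_role_name = "\"${snowflake_database.simple.name}\".\"${snowflake_database_role.db_role.name}\""
  parent_role_name   = snowflake_role.parent_role.name
}

resource "snowflake_grant_privileges_to_database_role" "example" {
  privileges         = ["USAGE"]
  database_role_name = "\"${snowflake_database.simple.name}\".\"${snowflake_database_role.db_role.name}\""
  on_database        = snowflake_database.simple.name
}

resource "snowflake_warehouse" "warehouse" {
  name           = "TEST_WH"
  warehouse_size = "XSMALL"

  # Assigning a resource monitor is what needs the extra privilege; with only
  # CREATE DATABASE-level rights this line makes the apply fail partway through.
  resource_monitor = "TEST_MONITOR"
}
```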
The case with the rename is already handled globally as part of #2702. We will try to reproduce the first case in the next few days. I have two questions, though:
Hey,

Regards, Andreas
Hey, I can provide you with the error messages from executing both apply statements. I can also provide the full deployment log, including plan and apply, but only through a secure connection. Is there a way we can get it to you without sharing it on GitHub?

Error on the first apply:

```
Error: 003001 (42501): SQL access control error:
  with module.workspace_RB_APMEA_IT_BUSINESS_APPS.module.warehouses["WH01"].snowflake_warehouse.warehouse,
```

Second apply, without making any changes to the configuration:

```
Error: Failed to create account role
  with module.workspace_RB_APMEA_IT_BUSINESS_APPS.snowflake_role.role_SYSADMIN,
  Account role name: RB_APMEA_IT_BUSINESS_APPS_SYSADMIN, err: 002002 (42710):

Error: Failed to create account role
  with module.workspace_RB_APMEA_IT_BUSINESS_APPS.snowflake_role.role_SECADMIN,
  Account role name: RB_APMEA_IT_BUSINESS_APPS_SECADMIN, err: 002002 (42710):

Error: error creating database RB_APMEA_IT_BUSINESS_APPS_DB: 002002 (42710): SQL compilation error:
  with module.workspace_RB_APMEA_IT_BUSINESS_APPS.module.databases["DB01"].snowflake_database.database,

Error: 003001 (42501): SQL access control error:
  with module.workspace_RB_APMEA_IT_BUSINESS_APPS.module.warehouses["WH01"].snowflake_warehouse.warehouse,
```

The error applying changes to the warehouse occurred when adding `resource_monitor = "null"` to the configuration.
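A side note on that last line: in HCL the quoted string `"null"` is not the null value, so the provider treats it as the name of a resource monitor. A minimal sketch of the distinction (the warehouse name here is a placeholder):

```hcl
resource "snowflake_warehouse" "warehouse" {
  name = "TEST_WH" # placeholder

  # The quoted string is passed to Snowflake as a monitor literally named
  # "null"; assigning any monitor requires the corresponding privilege.
  resource_monitor = "null"

  # To leave the warehouse without a monitor, omit the argument entirely
  # or assign the real HCL null value instead:
  # resource_monitor = null
}
```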
Thanks for the logs @AndreasHEbcont. You can reach out to your Snowflake account manager, share the complete logs with them, and ask them to pass them on to me internally in Snowflake.
Hey @AndreasHEbcont
and every subsequent
Hey @sfc-gh-jcieslak, thank you for looking into this issue so fast! I will be in touch with my colleague engineers to supply you with the best possible way to replicate this behaviour. In the meanwhile, please do not close the ticket. Regards, Andreas
Hey @AndreasHEbcont 👋
Hey, (un)fortunately I was not able to reproduce the behaviour with any further deployment. If I ever encounter that behaviour again, I will get in touch with our Snowflake account manager. Thank you for your support!
Alright, I'm closing this one then. If you encounter any similar issues, please create another one and link this one for context. Thank you 👍.
Terraform CLI and Provider Versions
Snowflake provider 0.88
Terraform >= 1.0.11
Terraform Configuration
Expected Behavior
Resources will be created/destroyed and the Terraform state file will be updated accordingly.
Actual Behavior
The resources are created/destroyed, but the changes are not saved to the Terraform state file.
Steps to Reproduce
1. Run terraform apply.
2. An error occurs.
3. The state is inconsistent, resulting in multiple errors.
How much impact is this issue causing?
High
Logs
No response
Additional Information
We are currently creating various resources via modules, including databases, warehouses, roles, and grants. If any errors occur during a Terraform apply, the state file becomes inconsistent.
For example:
We wanted to create a new database, warehouse, and corresponding roles. During the Terraform apply, the executing role did not have sufficient privileges, so we got an error:
```
Error: 003001 (42501): SQL access control error: Insufficient privileges to operate on account '[ACCOUNTNAME]'
```
If we then run a second Terraform apply to create the new resources, we get multiple errors indicating these resources already exist. The reason is that when the first Terraform apply failed, it had already created some of the resources but did not record them in the state file:
```
SQL compilation error: Object '[DATABASENAME]' already exists.
```
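One way to reconcile such orphaned objects without hand-editing the state is Terraform's declarative import (available since Terraform 1.5). A sketch, assuming the import ID for `snowflake_database` is just the database name (the resource address and ID below are placeholders; check the provider docs for the exact ID format):

```hcl
# Adopt the database that the failed apply created but never recorded in
# state, so the next plan manages it instead of trying to recreate it.
import {
  to = snowflake_database.simple # hypothetical resource address
  id = "DATABASENAME"            # placeholder: the existing database's name
}
```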
On the other hand, if an error occurs while destroying resources, Terraform destroys some of them but does not update the state file, resulting in an error stating that the resources to be destroyed do not exist: Terraform has already destroyed them, but they are still included in the state file.
This behavior occurs frequently, and currently the only workaround we have found is to manually remove the resources from the state file.
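On Terraform 1.7+, that destroy-side cleanup can also be done declaratively with a `removed` block instead of editing the state file by hand; a sketch, reusing the hypothetical resource address from above:

```hcl
# Drop the resource from state without running a destroy against Snowflake
# (the object is already gone after the partially failed destroy).
removed {
  from = snowflake_database.simple

  lifecycle {
    destroy = false
  }
}
```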