Unclosed connections prevent dbt from exiting on Snowflake with keepalives enabled #1271
Comments
Step 3 is my favorite.
I don't want to take credit for this since I can't tell you how it happened, but I am ~90% sure this is fixed in 0.13.0, possibly as part of the connection management work I did. I sure can't reproduce it, and I can easily reproduce it on 0.12.2.
@beckjake So we're seeing this now. We're running in GitLab CI and it hangs on "flush usage events". Setting to False fixes it. Here's some info about the runners if that's useful: https://docs.gitlab.com/ee/user/gitlab_com/#shared-runners
The command we're using is
Edit:
Thanks @tayloramurphy - I just moved this card into the LMA milestone + reopened the issue. We'll check it out!
We're also seeing this with
…s-properly force-cleanup all adapter connections before exiting handle_and_check (#1271)
We're also seeing this when a dbt run fails, or when a dbt run has "Nothing to do", on 0.14.4. We can reproduce this at will.
@krishbox are you still seeing this on 0.15.0?
I'm on 0.16.1, seeing some dbt Cloud jobs end with an error that seems related to this setting. These are jobs that run for ages (4+ hours) because there's a bit of a backlog of processing, but that's another story. The error is:
I checked the parameter setting on Snowflake by running
In my
As a side note, how do
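For reference, a minimal sketch of checking the effective CLIENT_SESSION_KEEP_ALIVE parameter from Python with snowflake-connector-python (this is not necessarily what was run above, and the connection arguments are placeholders):

```python
# Sketch: query the effective CLIENT_SESSION_KEEP_ALIVE setting for the
# current session via snowflake-connector-python. Credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
)
try:
    cur = conn.cursor()
    cur.execute("SHOW PARAMETERS LIKE 'CLIENT_SESSION_KEEP_ALIVE' IN SESSION")
    for row in cur.fetchall():
        print(row)  # columns include the parameter name, value, default, and level
finally:
    conn.close()  # always close, or a keepalive heartbeat can outlive the script
```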
hey @davehowell - dbt Cloud does not currently support the `client_session_keep_alive` config.

This issue tracked a bug where, when that config was enabled, unclosed connections kept dbt from exiting after it finished running. I'm guessing what you're after is a way to turn the config on for your long-running dbt Cloud jobs. If that's the case, then I have some good news for you -- we plan on adding this config to dbt Cloud in the near future :).

For any questions about dbt Cloud, please do feel free to write into [email protected], or get in touch with us in the application by clicking the 💬 in the top right corner of the page!

Just to add some additional details:
@drewbanin thanks for the clarification. It's confusing for me too. Ideally I would never need the 4+ hours of keepalive; I know the core issue is that these jobs should be optimized so they aren't running for so long, and I can work around my current processing backlog by running some of the models individually.
Issue
Issue description
The `client_session_keep_alive` config for snowflake-connector-python is implemented using a threaded heartbeat. During the `dbt docs generate` command (and possibly others), dbt doesn't close 100% of the connections it opens. As a result, the snowflake-connector-python threads continue to heartbeat in perpetuity, preventing dbt from exiting. See the comment here while slowly and deliberately raising the palm of your hand into the vicinity of your face.

We should, in general, ensure that dbt always closes any connections that it opens.
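This is not dbt's actual implementation, but the general shape of that fix is simply "track every connection the process opens and force-close them all before exiting"; a minimal sketch (the main block is hypothetical placeholder work):

```python
# Illustrative sketch only (not dbt's real code): register every connection
# that gets opened, then force-close them all before the process exits so the
# connector's keepalive heartbeat threads can't keep the interpreter alive.
import snowflake.connector

_open_connections = []

def open_connection(**connect_kwargs):
    """Open a Snowflake connection and remember it for cleanup."""
    conn = snowflake.connector.connect(**connect_kwargs)
    _open_connections.append(conn)
    return conn

def cleanup_all_connections():
    """Best-effort close of every tracked connection."""
    for conn in _open_connections:
        try:
            conn.close()  # stops the heartbeat for this session
        except Exception:
            pass  # don't let cleanup errors mask the original failure
    _open_connections.clear()

if __name__ == "__main__":
    try:
        # ... open connections with open_connection(...) and do the real work ...
        pass
    finally:
        cleanup_all_connections()
```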
Results
dbt finishes executing, but control is not returned to the terminal. Instead, Python hangs on a lock held by snowflake-connector-python, and the user needs to Ctrl-C to exit.
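One way to confirm that a stray connector thread (rather than dbt still doing work) is what's keeping the process alive is to dump the live threads and their stacks from inside the hung process; a small sketch using only the standard library:

```python
# Sketch: list the threads that are still alive and print each one's current
# stack. With client_session_keep_alive enabled and a connection left open, a
# snowflake-connector-python heartbeat thread typically shows up in this list.
import faulthandler
import threading

def dump_hang_state():
    for t in threading.enumerate():
        print(f"alive thread: name={t.name!r} daemon={t.daemon}")
    faulthandler.dump_traceback()  # write every thread's stack to stderr

dump_hang_state()
```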
System information

The output of `dbt --version`:

The operating system you're running on: Any
Steps to reproduce

1. Set `client_session_keep_alive` to be `True`
2. Run `dbt docs generate`
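The hang can also be reproduced without dbt, which is a handy sanity check on the connector's behavior; a minimal sketch (credentials are placeholders, and the exact behavior depends on the connector version):

```python
# Sketch: reproduce the hang directly with snowflake-connector-python.
# Credentials are placeholders. With client_session_keep_alive=True and no
# close(), the keepalive heartbeat keeps running after the script "finishes",
# and on affected versions the process never returns control to the shell.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    client_session_keep_alive=True,
)
conn.cursor().execute("select 1")
# Intentionally NOT calling conn.close(): that is the bug being described.
# Adding conn.close() (or a try/finally around the work) lets the process exit.
print("done running, but the interpreter may not exit")
```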