Infinite loop when connection unexpectedly dies #11
Comments
Going through the stack trace, I suspect the offending loop is this one: https://github.com/thda/tds/blob/master/buffer.go#L549
Indeed, that `for` loop should break when `readPkt` returns an error. Will fix by tomorrow.
Hi, I think I've fixed it in master. Could you please confirm? Cheers
I agree, that looks like it should fix the issue. We ran into this when our production load balancer was upgraded, so I'm not sure how reliably I can reproduce the issue on demand, unfortunately. I'm happy enough to close this, and I'll open a new ticket if I run into it again the next time a similar change happens on our load balancer.
Closing then. Thanks!
When the connection is interrupted unexpectedly (e.g., a stateful firewall loses track of the connection), it seems that when database/sql tries to signal the driver to close the connection, it gets stuck in a busy loop, causing 100% CPU usage.
Stack trace of the offending goroutine running under go version go1.12.9 linux/amd64:
In this instance, db.PingContext() was called which made the database/sql package realise there was a bad connection in the pool. Relevant bit of code in the database/sql package that kicks off the chain of events: https://github.com/golang/go/blob/release-branch.go1.12/src/database/sql/sql.go#L1250
Subsequent calls to db.PingContext() worked fine using a fresh connection, while the offending goroutine was still caught in the busy loop.