
Timeout reads and writes in the http server. #1917

Merged: 2 commits into master on Nov 2, 2016

Conversation

@tomwilkie (Contributor)
Fixes #1916

@paulbellamy (Contributor) left a comment:


May need to adjust the timeouts in common/xfer/websocket.go as well, probably?

@tomwilkie (Contributor, Author)

Ah yes websockets!


@rade (Member) commented Oct 12, 2016:

Won't this interact badly with persistent http connections?

@2opremio (Contributor)

@tomwilkie Out of curiosity, how have you confirmed that this fixes #1916?

```go
Handler:        handler,
ReadTimeout:    httpTimeout,
WriteTimeout:   httpTimeout,
MaxHeaderBytes: 1 << 20,
```


@rade (Member) commented Oct 17, 2016:

And I am still waiting for an answer to my question re persistent connections.

@rade (Member) commented Nov 1, 2016:

@ekimekim could you take over on this, e.g. address review comments, answer questions above, perhaps do a deploy to dev to test whether it solves #1916 (unless you can think of a better way to test this)?

@ekimekim (Contributor) commented Nov 1, 2016:

I'm attempting to reproduce locally. So far I've tried making a connection that hangs (stops sending data) partway through sending a POST body. This causes the goroutine count to rise until those connections are closed on the client side, at which point it drops again.

@ekimekim (Contributor) commented Nov 1, 2016:

OK, good news and bad news.
Good: the timeout works. I leave 1000 connections hanging and the goroutines die after 90s.
Bad: it's broken websockets completely. When I try to actually use the Scope UI I just get "re-connecting...".

@ekimekim (Contributor) commented Nov 2, 2016:

Disregard the bad news above: it's an unrelated problem with my local setup, since it's happening on master too (when accessing via /admin/scope). Connecting to scope.kube-system directly works fine with this PR.

@ekimekim (Contributor) commented Nov 2, 2016:

OK, I'm now confident in my testing:

  • works fine in normal browser usage
  • websockets aren't affected; they remain open for > 90s
  • manually testing with telnet: persistent connections remain open for > 90s
  • manually testing with telnet: connections where I stop sending halfway through the request headers are closed after 90s

I think this is good to merge.

@ekimekim ekimekim merged commit 03661f3 into master Nov 2, 2016
@ekimekim ekimekim deleted the 1916-dont-leak-goroutines branch November 2, 2016 22:25
5 participants