storage: Advice regarding behaviour when GCS is unavailable #3522
Thanks for your question. A few thoughts on this:
Thank you for your thoughts!
But if I disable HTTP/2, the dead connection seems to be recycled after it fails:
I wonder if that suggests the
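A minimal sketch of what disabling HTTP/2 for the storage client can look like, assuming the approach documented for net/http (setting Transport.TLSNextProto to a non-nil empty map) and the same htransport/option wiring used later in this thread; imports are omitted to match the snippet style below:
httpTransport := http.DefaultTransport.(*http.Transport).Clone()
// Per the net/http documentation, a non-nil empty TLSNextProto map disables HTTP/2 for this transport.
httpTransport.TLSNextProto = make(map[string]func(authority string, c *tls.Conn) http.RoundTripper)
htrans, err := htransport.NewTransport(ctx, httpTransport)
if err != nil {
	return nil, fmt.Errorf("error creating transport: %w", err)
}
client, err := storage.NewClient(ctx, option.WithHTTPClient(&http.Client{Transport: htrans}))
if err != nil {
	return nil, fmt.Errorf("error creating storage client: %w", err)
}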
This issue sounds related to what I'm seeing with cached HTTP/2 connections being reused - golang/go#36026. See also golang/go#30702.
That's interesting about disabling HTTP/2, but I guess not entirely surprising given that HTTP/1.1 does not have long-lived connections in the same way. I would definitely file an issue for net/http (or comment on one of the existing ones with your experience). If you use my code snippet above to create your storage client, you can play with some of these settings yourself via fields in Transport and see if that helps (e.g.
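The specific Transport fields were cut off above; as an illustrative sketch (these particular fields and values are assumptions, not necessarily the ones originally suggested):
httpTransport := http.DefaultTransport.(*http.Transport).Clone()
httpTransport.IdleConnTimeout = 30 * time.Second // drop idle connections sooner
httpTransport.MaxIdleConnsPerHost = 10           // limit how many connections are cached per host
httpTransport.DisableKeepAlives = true           // or give up connection reuse entirely
// Then pass httpTransport to htransport.NewTransport as in the snippet below.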
@horgh I looked at the golang issue and it looks like you used ReadIdleTimeout as a workaround; is that through golang.org/x/net/http2? Curious about how you did that.
Thanks for looking :). Yeah, through golang.org/x/net/http2:
// htransport below is google.golang.org/api/transport/http; http2 is golang.org/x/net/http2.
httpTransport := http.DefaultTransport.(*http.Transport).Clone()
http2Transport, err := http2.ConfigureTransports(httpTransport)
if err != nil {
return nil, errors.Wrap(err, "error configuring http/2 transport")
}
// Setting this enables the ping timeout functionality.
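// With ReadIdleTimeout set, the HTTP/2 transport sends a PING frame whenever no
// frames have been received on a connection for that long, and closes the
// connection if the ping is not answered within PingTimeout (15s by default).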
http2Transport.ReadIdleTimeout = 5 * time.Second
htrans, err := htransport.NewTransport(ctx, httpTransport)
if err != nil {
return nil, errors.Wrap(err, "error creating transport")
}
httpClient := &http.Client{
Transport: htrans,
}
client, err := storage.NewClient(ctx, option.WithHTTPClient(httpClient))
if err != nil {
return nil, errors.Wrap(err, "error creating storage client")
}
Awesome, it's very helpful to have this example! I don't think we can do this by default because it would require depending on the golang.org/x/net/http2 package.
By the way, the one change we do make in the default transport is to increase the value for
Just to circle back on this closed issue -- after some internal discussion, we decided to add a
That's awesome! Thank you for letting me/everyone know :-).
Hello,
I'm using v1.12.0 with go1.15.6.
I make GCS API calls from a daemon that itself serves HTTP requests. I had a case where a client of my daemon would repeatedly time out due to the GCS request taking too long.
To try to avoid this, I have a health check that verifies the daemon can contact GCS and marks the daemon offline if the check fails for too long, so requests go to other instances of the daemon. (It checks whether a bucket exists, in a loop with sleeps.)
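A minimal sketch of that kind of check, assuming a hypothetical healthCheckBucket name, hypothetical markHealthy/markUnhealthy helpers for flipping the daemon's status, and illustrative timeout and sleep values (imports: context, time, cloud.google.com/go/storage):
for {
	probeCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	// Does the bucket exist / is GCS reachable within the per-probe timeout?
	_, err := client.Bucket(healthCheckBucket).Attrs(probeCtx)
	cancel()
	if err != nil {
		markUnhealthy(err)
	} else {
		markHealthy()
	}
	time.Sleep(30 * time.Second)
}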
This check periodically fails for unknown reasons. My guess is that GCS has issues from time to time, which I probably have to expect.
The check is also flawed in that there are multiple GCS IPs (not to mention hosts, I'm sure), plus the HTTP/2 connections are cached, which is to say some IPs/connections could be fine while others are not.
In debugging this, I noticed a cached HTTP/2 connection sticks around for ~4 minutes, even if it is consistently not returning a response (I have a context timeout in my health check). This makes me wonder whether I should disable keepalives and/or add a shorter timeout and retry in my own code, to avoid these connections causing my clients to time out. Neither option seems attractive, but waiting on a dead connection when a new one might succeed doesn't seem ideal either.
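As a sketch of the shorter-timeout-plus-retry idea (not the poster's actual code; the helper name, attempt count, and timeout are illustrative, and ioutil.ReadAll is used since the thread mentions go1.15):
// Hypothetical helper: fetch an object with a short per-attempt timeout and one
// retry, so a request stuck on a dead cached connection fails fast and gets
// another chance (possibly on a fresh connection).
func readObject(ctx context.Context, client *storage.Client, bucket, object string) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		attemptCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
		r, err := client.Bucket(bucket).Object(object).NewReader(attemptCtx)
		if err != nil {
			cancel()
			lastErr = err
			continue
		}
		data, err := ioutil.ReadAll(r)
		r.Close()
		cancel()
		if err != nil {
			lastErr = err
			continue
		}
		return data, nil
	}
	return nil, lastErr
}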
Basically it seems like from the perspective of my clients there could be an outage due to a cached dead connection (and possibly other reasons), and I wonder if I could improve things somehow.
A couple questions then: