
x/net/http2: regression in Transport's reuse of connections #60818

Closed
bradfitz opened this issue Jun 15, 2023 · 5 comments
Labels
FrozenDueToAge, NeedsFix (The path to resolution is known, but the work has not been done.)
Milestone
Unreleased

@bradfitz
Contributor

The fix to #59690 (https://go-review.googlesource.com/c/net/+/486156) broke our load balancer.

That change made it so that any RoundTrip caller's cancelation ends up tainting the underlying HTTP/2 connection (setting its do-not-reuse bit), causing our load balancer to create new conns forever rather than reuse existing, perfectly fine ones.

Of note: our application only uses ClientConn.RoundTrip (not Transport.RoundTrip), so moving the connection tainting up a level (to where it arguably belongs, in Transport and its conn pool) would fix us, but it still isn't the proper fix for others.

The original issue was about dead backend connections that the Transport kept reusing forever. The fix addressed that, but a bit too aggressively: we shouldn't mark good connections (in use, recent traffic, able to reply to pings, etc.) as do-not-reuse. This is already addressed if you enable pings on your backend connections, but that's not the default, probably because it would be too network-chatty to do by default for idle connections.
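
For reference, enabling that ping-based health check today means setting the health-check fields on the HTTP/2 transport. A minimal sketch, with illustrative durations (not recommendations):

```go
package backend

import (
	"net/http"
	"time"

	"golang.org/x/net/http2"
)

// newBackendClient returns a client whose HTTP/2 connections send a PING
// after a period with no frames received, and get closed if the PING isn't
// acknowledged in time. The durations are illustrative only.
func newBackendClient() *http.Client {
	return &http.Client{
		Transport: &http2.Transport{
			// Send a health-check PING if no frame has been
			// received on the connection for this long.
			ReadIdleTimeout: 30 * time.Second,
			// Close the connection if the PING isn't
			// acknowledged within this window.
			PingTimeout: 15 * time.Second,
		},
	}
}
```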

@neild and I discussed a handful of options, but I'm not sure we've decided on any particular path(s). Some:

  • Move the tainting up a level (again, this fixes us, but not everybody). But maybe do this first anyway, because it's not ClientConn's job to blindly mark itself as broken just because the caller canceled its context after a nanosecond. (A rough sketch of the reuse decision this implies appears after this list.)
  • Send PING frames to the backend even when the periodic pinger is not enabled, under certain circumstances: when the health of the ClientConn is in question (on errors, or on a new request after N seconds have elapsed since last hearing from the backend?), but with at most one ping outstanding at a time per ClientConn.
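
As a rough illustration of the first option, the reuse decision could hinge on whether the error came from the caller's own context rather than from the connection itself. A sketch only; the helper name and its placement are hypothetical, not part of x/net/http2:

```go
package pool

import (
	"context"
	"errors"
)

// shouldTaintConn is a hypothetical predicate a conn-pool layer might use
// to decide whether an error from a round trip says anything about the
// health of the underlying connection.
func shouldTaintConn(err error) bool {
	if err == nil {
		return false
	}
	// A cancelation or deadline from the caller's own context reflects
	// the caller's impatience, not a broken connection.
	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return false
	}
	return true
}
```
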
bradfitz added a commit to tailscale/golang-x-net that referenced this issue Jun 15, 2023
@rhysh
Contributor

rhysh commented Jun 15, 2023

  • Send PING frames to the backend even when the periodic pinger is not enabled, under certain circumstances: when the health of the ClientConn is in question (on errors, or on a new request after N seconds have elapsed since last hearing from the backend?), but with at most one ping outstanding at a time per ClientConn.

An option like this sounds great. What's not to love?

One of the great strengths of the HTTP/2 protocol is that implementations can cancel a request without throwing away a TCP+TLS connection. But one of the great strengths of HTTP/1.x is that it minimizes the blast radius of a single bad connection, allowing it to harm only a single request. Reusing connections is important for application stability: when CPU starvation in a client causes it to perceive its outbound calls as timing out, an application that reacts by closing its TCP+TLS connections and dialing new ones makes its problem worse by spending whatever CPU time remains on new TLS handshakes.

I think Go's implementation can give the best of both, and through that reduce the operational risks associated with moving from HTTP/1 to HTTP/2.

Here's how I'd like to deploy HTTP/2:

  • When the caller cancels a request (either via a timeout or electively), send a PING frame on that connection and put the connection into quarantine (unavailable for new requests); a rough sketch of this step appears below.
  • If a connection sits in the quarantine pool for longer than the timeout we'd use for a fresh dial, close the connection.
  • Otherwise, if we get back a response to our PING, return the connection to the regular pool. (Maybe there are other frames that could prove a bit sooner that the peer is still responsive.)

And:

  • Allow the client to set a limit on the number of concurrent streams the transport will send on each connection. Setting to "1" gets the same level of sharing we'd see in HTTP/1.1.

Maybe those settings aren't the right choice for everyone, but today it's very hard to configure that behavior.
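
A rough sketch of the quarantine step, using the exported ClientConn.Ping; the function, its placement, and the fresh-dial budget are illustrative, not part of x/net/http2:

```go
package quarantine

import (
	"context"
	"time"

	"golang.org/x/net/http2"
)

// probeAfterCancel is an illustrative sketch: after a caller cancels a
// request on cc, send a PING and only return the connection to the regular
// pool if the peer answers within the budget we'd allow for a fresh dial.
func probeAfterCancel(cc *http2.ClientConn, dialTimeout time.Duration) bool {
	ctx, cancel := context.WithTimeout(context.Background(), dialTimeout)
	defer cancel()
	if err := cc.Ping(ctx); err != nil {
		// No PING ack within a fresh-dial budget: treat the
		// connection as dead and close it.
		cc.Close()
		return false
	}
	// The peer is still responsive; safe to return it to the pool.
	return true
}
```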

@btasker

btasker commented Jun 21, 2023

Send PING frames to the backend even when the periodic pinger is not enabled, under certain circumstances: when the health of the ClientConn is in question (on errors, or on a new request after N seconds have elapsed since last hearing from the backend?), but with at most one ping outstanding at a time per ClientConn.

That sounds like a reasonable fix to me too.

The only thing I would note (I don't know if it's changed) is that, as recently as 2021, AWS Application Load Balancers lacked support for HTTP/2 PING. So a ping-based mitigation won't work when writing into something fronted by an ALB.

My original PR had checked whether the failed stream was the only one on the connection; that (I think) would also have addressed this issue, but adding pings should be much more robust where there's upstream support.

@dmitshur
Contributor

@neild and I discussed a handful of options, but I'm not sure we've decided on any particular path(s).

Just checking, have you also considered the option of rolling back CL 486156 here in x/net, reopening #59690, and trying to fix again? Or is this issue comparatively smaller than #59690 and better to fix forward?

@neild There are currently open backport requests of that issue to Go 1.20 and 1.19 (#60301 and #60662). Should the decision to backport be revisited given the new information in this report, or are they okay to proceed as is? Thanks.

@dmitshur dmitshur added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Jun 21, 2023
@dmitshur dmitshur added this to the Unreleased milestone Jun 21, 2023
@btasker

btasker commented Jun 22, 2023

The only thing I would note (I don't know if it's changed) is that, as recently as 2021, AWS Application Load Balancers lacked support for HTTP/2 PING. So a ping-based mitigation won't work when writing into something fronted by an ALB.

I kicked together a quick Python script to test:

```python
import socket
import ssl
import h2.connection
import h2.events
import certifi
import sys


SERVER_NAME = sys.argv[1]
SERVER_PORT = 443

socket.setdefaulttimeout(15)
ctx = ssl.create_default_context(cafile=certifi.where())
ctx.set_alpn_protocols(['h2'])

s = socket.create_connection((SERVER_NAME, SERVER_PORT))
s = ctx.wrap_socket(s, server_hostname=SERVER_NAME)

c = h2.connection.H2Connection()
c.initiate_connection()
c.ping(b'ffffffff')
s.sendall(c.data_to_send())

response_stream_ended = False
while not response_stream_ended:
    data = s.recv(65536 * 1024)
    if not data:
        break
    events = c.receive_data(data)
    for event in events:
        if isinstance(event, h2.events.PingAckReceived):
            print(event)
            response_stream_ended = True
            break
```

I'm getting ACKs back from AWS ALBs, so it looks like they've added support.

A lack of upstream support would still be a potential issue, but at least it wouldn't be as widespread as it would be if ALBs didn't support it.
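
For completeness, a rough Go equivalent of the Python check above, using x/net/http2's exported Transport.NewClientConn and ClientConn.Ping (an untested sketch; the host comes from os.Args[1]):

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/net/http2"
)

func main() {
	host := os.Args[1]

	// Dial TLS and ask for HTTP/2 via ALPN.
	conn, err := tls.Dial("tcp", host+":443", &tls.Config{
		ServerName: host,
		NextProtos: []string{"h2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	if p := conn.ConnectionState().NegotiatedProtocol; p != "h2" {
		log.Fatalf("server did not negotiate h2 (got %q)", p)
	}

	// Hand the connection to an HTTP/2 ClientConn and send a PING.
	cc, err := (&http2.Transport{}).NewClientConn(conn)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	if err := cc.Ping(ctx); err != nil {
		log.Fatalf("no PING ack: %v", err)
	}
	fmt.Println("PING acknowledged")
}
```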

bradfitz added a commit to tailscale/golang-x-net that referenced this issue Jun 27, 2023
Even the part we previously kept was bad. Revert all of 82780d6 but
keep 6826f5a (which depended on bits of 82780d6). So this is a mix.

TestTransportRetryAfterRefusedStream fails with this change, but only
because it was adjusted in 82780d6 to pass with 82780d6, and this change
doesn't revert all the test changes. I just skip that test instead, because
it doesn't really affect us.

Updates tailscale/corp#12296
Updates golang/go#60818

Signed-off-by: Brad Fitzpatrick <[email protected]>
@gopherbot
Contributor

Change https://go.dev/cl/507395 mentions this issue: http2: revert Transport change from CL 486156

@dmitshur dmitshur added NeedsFix The path to resolution is known, but the work has not been done. and removed NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. labels Jun 30, 2023
maisem pushed a commit to tailscale/tailscale that referenced this issue Aug 22, 2023
Theory is that our long lived http2 connection to control would
get tainted by _something_ (unclear what) and would get closed.

This picks up the fix for golang/go#60818.

Updates tailscale/corp#5761

Signed-off-by: Maisem Ali <[email protected]>
dmitryax pushed a commit to open-telemetry/opentelemetry-collector-contrib that referenced this issue Dec 11, 2023
**Description:**
This PR enables the HTTP/2 health check to work around the issue described
in open-telemetry/opentelemetry-collector#9022.

As to why I chose 10 seconds for `HTTP2ReadIdleTimeout` and ~~5
seconds~~ 10 seconds (see review comment) for `HTTP2PingTimeout`:
those values have been tested in production, and in an active env (with
the default HTTP timeout of 10 seconds and default retry settings) they
will result in at most a single export failure before the health check
detects the corrupted TCP connection and closes it.
The only drawback is that if the connection was not used for over 10
seconds, we might end up sending unnecessary ping frames. That should not
be an issue, and if it becomes one, we can tune those settings.

The SFX exporter has multiple HTTP clients:
- Metric client, Trace client and Event client: these clients will have
the HTTP/2 health check enabled by default, as they share the same default
config.
- Correlation client and Dimension client will NOT have the HTTP/2 health
check enabled. We can revisit this if needed.

**Testing:** 
- Run OTEL with one of the exporters that uses an HTTP/2 client, for example
the `signalfx` exporter.
- For simplicity, use a single pipeline/exporter.
- In a different shell, run this to watch the TCP state of the
established connection:
```
while (true); do date; sudo netstat -anp | grep -E '<endpoint_ip_address(es)>' | sort -k 5; sleep 2; done
```
- From the netstat output, take note of the source port and the source IP address.
- Replace the <> placeholders with the values from the previous step:
`sudo iptables -A OUTPUT -s <source_IP> -p tcp --sport <source_Port> -j DROP`
- Note how the OTel exporter's exports start timing out.

Expected Result:
- A new connection should be established, similarly to HTTP/1, and exports should succeed.

Actual Result:
- The exports keep failing for ~15 minutes, or for however long the OS `tcp_retries2` setting allows.
- After ~15 minutes, a new TCP connection is created and exports start working.

**Documentation:**
Readme is updated

**Disclaimer:**
Not all HTTP/2 servers support H2 PING; however, this should not be a concern, as our ingest servers do support it.
If you are routing through something else, you can check whether H2 PING is supported using this script: golang/go#60818 (comment)

Signed-off-by: Dani Louca <[email protected]>
alexelisenko pushed a commit to Control-D-Inc/tailscale that referenced this issue Feb 15, 2024
Theory is that our long lived http2 connection to control would
get tainted by _something_ (unclear what) and would get closed.

This picks up the fix for golang/go#60818.

Updates tailscale/corp#5761

Signed-off-by: Maisem Ali <[email protected]>
Signed-off-by: Alex Paguis <[email protected]>
@golang golang locked and limited conversation to collaborators Jun 30, 2024