res.end() from a https.server() handler sends a RST packet provoking ECONNRESET #36180

Closed
mmomtchev opened this issue Nov 19, 2020 · 24 comments · May be fixed by #36205
Labels: http (Issues or PRs related to the http subsystem), https (Issues or PRs related to the https subsystem)

Comments

@mmomtchev (Contributor) commented Nov 19, 2020

  • Version: At least since Node 12
  • Platform: Windows - very common, OSX - very common, Linux - rare
  • Subsystem: TLS/HTTPS - very common, HTTP - maybe(?), very rare if it exists at all

What steps will reproduce the bug?

node test/parallel/test-https-truncate.js - then dump the network traffic
The problem has existed for quite some time, but it was masked for most clients and appeared as a seemingly random problem.

How often does it reproduce? Is there a required condition?

node test/parallel/test-https-truncate.js on Windows is a guaranteed hit
#23169 describes an almost guaranteed hit on OSX
Reproduction on Linux is quite difficult and will usually happen only after running the test in a while /bin/true loop for some time

What is the expected behavior?

res.end() leads to a proper active closing of the TCP connection

What do you see instead?

res.end() causes the server to call uv_close() directly, which immediately destroys the kernel socket and sends it into the TIME_WAIT state. When the client receives the resulting FIN packet, it responds with an ACK and its last data packet; that data packet triggers a RST from the server.

Additional information

There are two ways to properly close a TCP connection:

  • The simultaneous closing, which happens 99% of the time: the higher-level protocol, through some form of a BYE message, signals both ends to simultaneously call the kernel close() - uv_close() for us - thus exchanging two FIN/ACK sequences. Today most higher-level protocols do provide some form of a bye message.
  • The passive/active closing, a more archaic form of closing the connection, where one end unilaterally closes the connection without the other one expecting it. HTTP/1.0 with "Connection: close" is a classical example, and TLS 1.1 also has a provision for an optional unilateral closing. In this case the end originating the closing, the so-called active end, should send a FIN packet by calling the kernel shutdown() - uv_shutdown() in our case. Upon receiving the FIN packet, the passive end should ACK it, send any remaining data in a data packet, and then proceed to send its own FIN. The active end should then destroy the connection with close() / uv_close(). (A net-level sketch of this sequence follows below.)
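
A minimal sketch of the active side of that second sequence, expressed in terms of Node's public net API (an illustration of the described handshake, not the HTTPS code path from the report):

```js
const net = require('net');

// "Active close": send our FIN first, but keep reading until the peer's
// FIN (EOF) arrives, and only then destroy the kernel socket.
const server = net.createServer({ allowHalfOpen: true }, (socket) => {
  socket.write('bye');          // the higher-level BYE-equivalent
  socket.end();                 // kernel shutdown(): our FIN goes out
  socket.on('data', () => {});  // the peer's last data packets still arrive
  socket.on('end', () => {      // the peer's FIN was received and ACKed
    socket.destroy();           // only now is kernel close() safe
  });
});

server.listen(0);
```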

What currently happens is that res.end() from JS ends up in net.Socket.close(), then goes through TCPWrap, which does not overload Close(), and finally lands in HandleWrap::Close().
Here uv_close() is called - this is a direct, unscheduled code path.
The uv_shutdown(), on the other hand, lies on an indirect code path, scheduled by a Dispatch micro-task in LibuvStreamWrap::CreateShutdownWrap().
The result is that the shutdown happens one or two microseconds after the close, when in fact a proper close should wait for the shutdown to finish.
My opinion is that for TCP connections uv_close() should be called only in the uv_shutdown() callback.

Here is the full exchange with all the layers (Client JS / Client Node/libuv / Client Kernel / Remote Server):

  • Client JS: res.end()
  • Client Node/libuv: shutdown(SD_SEND)
  • Client Kernel: TCP FIN ->; the kernel socket goes into FIN_WAIT1 state
  • Client Node/libuv: close()
  • Client Kernel: the kernel socket goes into TIME_WAIT state
  • Remote Server: <- TCP ACK
  • Remote Server: <- data
  • Client Kernel: because the kernel socket is in TIME_WAIT, TCP RST ->

And it should be:

  • Client JS: res.end()
  • Client Node/libuv: shutdown(SD_SEND)
  • Client Kernel: TCP FIN ->; the kernel socket goes into FIN_WAIT1 state
  • Remote Server: <- TCP ACK
  • Client Kernel: the kernel socket goes into FIN_WAIT2 state
  • Remote Server: <- data
  • Client Node/libuv: recv()
  • Client Kernel: TCP ACK ->
  • Remote Server: <- TCP FIN
  • Client Node/libuv: recv() returns EOF
  • Client Kernel: TCP ACK ->
  • Client Node/libuv: close()
  • Client Kernel: the kernel socket goes into TIME_WAIT state
@mmomtchev (Contributor, Author)

As this is something specific to the TCP protocol, my proposal is that TCPWrap overload the whole close mechanism, triggering a shutdown and then destroying the socket once the shutdown is complete.

@lpinca (Member) commented Nov 19, 2020

Note that response.end() calls socket.destroy() via socket.destroySoon() while the other peer might still be sending data.

lpinca added the http and https labels on Nov 19, 2020
@lpinca (Member) commented Nov 19, 2020

Possibly related issue: #27916.

@mmomtchev (Contributor, Author)

Yes, that issue looks very similar.

socket.destroySoon waits for the transmitting to finish - I guess this might be why it is so difficult to reproduce the issue on Linux: it delays the close() a little bit. And if the remote end has Nagle or some other buffering, you are sure to get one last incoming data packet after the FIN. There are also cases where the remote sends an empty data packet - this is the case on Windows with test-https-truncate.

Waiting for the receiving side to finish can only be accomplished at the socket level - you can't do it through Node events alone. You need to make your OS send a FIN packet by calling shutdown(), and then wait for the socket to drain before calling close() - otherwise the kernel will RST on the next data packet, which probably won't be sent until you call shutdown().
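
To make the distinction concrete, a hedged sketch in net terms, under the assumption that socket.end() maps to uv_shutdown() and socket.destroy() to uv_close():

```js
const net = require('net');

// Racy close: the FIN goes out, but the kernel socket is torn down before
// the peer's last data packet arrives, so the kernel answers it with RST.
function racyClose(socket) {
  socket.end();     // shutdown(): send our FIN
  socket.destroy(); // close(): immediate teardown, nothing can drain
}

// Safer close: send our FIN, then let the receiving side drain to EOF
// before destroying the socket.
function saferClose(socket) {
  socket.end();
  socket.on('end', () => socket.destroy());
}
```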

Also, when directly calling uv_close() without uv_shutdown(), libuv will do a shutdown, but won't wait for the socket to drain.

The issue can be visualized by adding printf() calls with microsecond resolution in uv_shutdown() and uv_close() and running Wireshark.

@lpinca (Member) commented Nov 19, 2020

Yes, I understand. My point is that you can't expect a clean closure if you call response.end() before the 'end' event is emitted on the request.
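
A minimal sketch of that ordering (key and cert are hypothetical placeholders for TLS credentials loaded elsewhere):

```js
const https = require('https');

// Fully consume the request before ending the response, so the server is
// not tearing the socket down while the client is still sending.
// key and cert: hypothetical TLS credentials, loaded elsewhere.
const server = https.createServer({ key, cert }, (req, res) => {
  req.resume();           // drain any request body we are not using
  req.on('end', () => {   // the client has finished its half of the exchange
    res.end('hello');
  });
});

server.listen(8443);
```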

@mmomtchev (Contributor, Author)

Yes, this is correct - that would be one way to avoid it, when there is such an end event (which would always be the case with HTTPS?).

@mmomtchev (Contributor, Author)

No, the end event is not a reliable way to avoid this issue - at least with test-https-truncate I get it after the response.end().
Besides, you might still have some data in the buffer on the client side. I tried artificially delaying the end() by a whole second - I still get some data from the Windows client after it receives the server FIN - it is probably the TLS close_notify. The client does not send this data until it has received a FIN - or until it makes another request.

@mmomtchev (Contributor, Author)

@lpinca
In fact the first uv_close() in test-https-truncate.js is triggered by the server.close(). If the server.close() is delayed or removed, the first uv_shutdown() comes before the first uv_close() - but still without waiting for the incoming data to drain.

@mmomtchev (Contributor, Author) commented Nov 20, 2020

Maybe this could be solved in JS in this case, because by the time req.on('end') has been called, the shutdown has been initiated. However it is still too early to call close() - one must wait for the EOF - any ideas?

But this is still a TLS-specific solution, the general case needs solving at the TCP level.

@mmomtchev (Contributor, Author)

@lpinca Yes, I confirm

  • with server.close() - uv_close() is called before uv_shutdown(), as described, because the callbacks are not properly arranged. I am waiting for opinions on how this should work (i.e., is this an emergency close or a normal close, and should a res.end() followed by a server.close() generate an error or not?)
  • without server.close() - all the callbacks work as they should: Socket._final() initiates a shutdown, then waits for the callback

    node/lib/net.js, line 413 (at 700612f):

    const req = new ShutdownWrap();

    However this callback is called way too soon, without draining the incoming data - and in this case it is libuv that is at fault:

uv_shutdown() initiates the shutdown and earmarks the socket for endgame:

uv_want_endgame(loop, (uv_handle_t*)handle);

INLINE static void uv_want_endgame(uv_loop_t* loop, uv_handle_t* handle) {

Then at the next event loop iteration the socket is simply destroyed:

uv_tcp_endgame(loop, (uv_tcp_t*) handle);

void uv_tcp_endgame(uv_loop_t* loop, uv_tcp_t* handle) {

There is no intermediate state - waiting for an EOF.

@bnoordhuis @cjihrig

@mmomtchev (Contributor, Author)

The situation on Linux is quite different:

uv_shutdown() calls uv__stream_io()

uv__io_start(stream->loop, &stream->io_watcher, POLLOUT);

which in turn calls uv__drain():
static void uv__drain(uv_stream_t* stream) {

However, once again, it drains only the writing side - it does not wait for an incoming EOF. Unlike Windows, it also reads while writing, so if the last data packet arrives not too late, everything is fine - which is why it is so difficult to reproduce on Linux.

Maybe instead of a TCPWrap::Close, there should be a new IO state in libuv? UV_HANDLE_DRAINING_HALFCLOSED?

@mmomtchev (Contributor, Author) commented Nov 20, 2020

@lpinca I am afraid there is no solution with req.on('end') - if you remove server.close() then the request 'end' is correctly emitted, and by the time it is handled, the closing has been initiated and the FIN packet sent. However, because of the way libuv works, the fate of this socket has already been sealed - at this moment it is waiting in the loop->endgame_waiting queue. No matter what you do, it will be destroyed in the first few microseconds after the end of your request 'end' handler - while the last data packet of the remote end (the client) is still hundreds of microseconds (or a few milliseconds when not on the loopback) away. You could of course try to count from 0 to 1e8 to block the event loop, but this is not a solution.

@addaleax proposes to avoid adding a new TCPWrap::Close method by doing a JS .close() in the shutdown request. This is already the case - the shutdown request correctly calls close() in its callback. The problem is that server.close() closes the connection before that shutdown has a chance to execute (this was not immediately clear from the description because it is new information).

  • So maybe server.close() needs to go and should not be supported in this situation - or we do add a TCPWrap::Close()
  • And it seems that uv_shutdown() in libuv needs a lot of work too, because currently it will simply call the shutdown callback at the first iteration of the event loop unless there are pending writes - it is not waiting for an EOF at all

I think that uv_shutdown() in libuv should be solved first because this has no workarounds, unless I am missing something?

@vtjnash (Contributor) commented Nov 20, 2020

This sounds more like a misunderstanding than a true bug? There's a 'halfClose' flag in Node.js that I think should give the behavior you are looking for. The default is to assume a simultaneous active close, but with that flag toggled, I think you should be able to get the graceful shutdown you describe. I don't see a provision in the TLS standard for halfClose, so I think it is unsupported by that protocol?
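
(For reference, the flag in today's net API is spelled allowHalfOpen; a client-side sketch, assuming that is the flag meant:)

```js
const net = require('net');

// With allowHalfOpen the socket is not automatically ended and destroyed
// when the peer's FIN arrives, so each direction closes independently.
const socket = net.connect({ port: 8080, allowHalfOpen: true }, () => {
  socket.end('request'); // half-close: our FIN goes out, reading continues
});

socket.on('data', (chunk) => console.log('still receiving', chunk.length, 'bytes'));
socket.on('end', () => socket.destroy()); // peer's FIN received: fully close
```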

Thus TLS might not be a great example of "typical" TCP usage, since it permits intentionally causing an RST packet to be sent, and may then expect the other end to ignore certain FIN/RST/shutdown messages at the TLS level (since those aren't encrypted). So that means that this:

TLS/1.1 also has a provision for an optional unilateral closing. In this case, the end originating the closing, the so-called active end, should send a FIN packet triggered by calling the kernel shutdown() - uv_shutdown() in our case. Upon receiving the FIN packet, the passive end should ACK it, then send any remaining data in a data packet and then proceed to send his own FIN

is not quite how the TLS standard defines the closure sequence, which you can read for yourself at https://tools.ietf.org/html/rfc5246#section-7.2.1. In summary, to close a socket, the only possible sequence is that the active end initiates by sending a close_notify packet. The passive end should then be "discarding any pending writes". Afterward the TLS connection is considered closed, and the underlying TCP socket can now be destroyed as well. Either uv_close or uv_shutdown may be used, as both will send the same FIN packet at the kernel level. (As of TLS 1.1 a bare FIN is also acceptable for initiating the close sequence, as it was common practice to observe, though it may permit a truncation attack.)

Thus, it is reasonable to expect to see FIN/RST exchanged instead of FIN/FIN when using TLS.

@mmomtchev (Contributor, Author) commented Nov 20, 2020

@vtjnash Yes, I think so
Let's just forget about the server.close() race condition for now
@lpinca What is halfClose in fact? From what I see, it only works on the client side?

@mcollina @lpinca Look at this function:

function callFinal(stream, state) {

It triggers the socket shutdown by calling stream._final - something that ultimately leads to uv_shutdown.
It then waits for its callback, and then starts destroying the socket; finish will ultimately lead to uv_close.

Now from looking at this code, it seems that you both agree that libuv should do a proper shutdown, all services included.

But looking at the libuv code, I am afraid its authors think that a proper shutdown includes flushing the output side only. In fact, they will call your callback almost immediately. At that moment the FIN has already been sent, and the remote client won't send its last data packet for quite some time.

So, the million dollar question: who is responsible for waiting for the incoming data to drain? libuv doesn't do it. Writable.callFinal doesn't do it either. One of them should.

@mmomtchev (Contributor, Author)

@vtjnash There is a plan B, where the RST is considered normal.
@jasnell already said that he didn't like normal errors; I don't like them either - I think that a "normal error" is an oxymoron.
We understand this one and its consequences, so it is safe to ignore, but if we can avoid it, it is better to avoid it.
Besides, I still think this can happen with other protocols - it's just that TLS makes it very easy to trigger, because the FIN packet will always elicit one last data packet from the other side, testing the correct implementation of all possible cases 😄
And - if the client is not a Node client - it will still report the error.

@vtjnash (Contributor) commented Nov 20, 2020

Ah, right, that's for the passive side. I think 'autoClose' is the toggle for the active side?

// Should .destroy() be called after 'finish' (and potentially 'end').
this.autoDestroy = !options || options.autoDestroy !== false;
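
For reference, a small sketch of what that flag changes (the stream construction here is purely illustrative):

```js
const { Writable } = require('stream');

// With autoDestroy disabled, 'finish' no longer implies destroy(); the
// caller decides when the stream is actually torn down.
const w = new Writable({
  autoDestroy: false, // the default quoted above is true
  write(chunk, encoding, callback) { callback(); }
});

w.end('last chunk');
w.on('finish', () => {
  w.destroy(); // explicit, at a moment of the caller's choosing
});
```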

@mmomtchev (Contributor, Author)

@vtjnash, remarkably, I get different default values for autoDestroy on Windows (false) and Linux (true) and I can't find why... @ronag?
But in the end it doesn't change anything, because what triggers uv_close() is this line:

stream.emit('finish');

This is the line just before the if (state.autoDestroy) check.

@vtjnash (Contributor) commented Nov 20, 2020

That doesn't appear to directly call close, but seems like it might end up at destroy after the shutdown call fails with an error, and also seems guarded by a test of 'autoDestroy'. But you're seeing it before that point? Do you know where it actually calls destroy from?

if ((r && r.autoDestroy) || (w && w.autoDestroy))
stream.destroy(err);

Search doesn't seem to show where it would be different by platform either:
https://github.com/search?l=&p=2&q=autoDestroy+repo%3Anodejs%2Fnode+path%3A%2Flib&ref=advsearch&type=Code

@ronag (Member) commented Nov 20, 2020

Nothing in streams should be platform dependent.

@mmomtchev (Contributor, Author)

@ronag, I will try to isolate it once I finish this.

@vtjnash I have a very elegant solution to the RST problem if I accept that libuv's uv_shutdown() should drain only the writing side - so if there has been a design decision in the past, it must have been that one:

  • uv_shutdown() drains the writing side
  • the user code (user code from libuv's point - ie Node) drains the reading side
  • both work in tandem to produce a clean shutdown which happens in two parts

In this case, there is an incredibly elegant solution that solves the RST problem

@mcollina @lpinca
However it also requires changing one unit test; my opinion is that this test is currently not correct - so please tell me what you think.

mmomtchev added a commit to mmomtchev/node that referenced this issue Nov 20, 2020
Closing a TCP connection requires 2 FIN/ACK
exchanges, so we should wait for both events
before destroying a socket

Fixes: nodejs#36180
@mmomtchev (Contributor, Author) commented Nov 20, 2020

It even solves the server.close() problem, because server.close() will be delayed in Server._emitCloseIfDrained.
Everything falls into place - this must have been the way it was supposed to work.

@mmomtchev (Contributor, Author)

Ok, just for posterity:
The shutdown process for a net.Socket is composed of two separate parts that can happen in any order, and both must happen before the socket can be safely destroyed:

  • Ending the input: read until you get an EOF, then emit end - draining happens in Node
  • Finishing the output: call shutdown and in its callback emit finish - draining happens in libuv (the kernel socket shutdown happens here)

When you have both, you can destroy the socket (the kernel socket close happens here) - see the sketch below.
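
A sketch of that rule stated as code; this mirrors what the streams autoDestroy machinery is supposed to guarantee rather than adding anything new:

```js
const net = require('net');

// Destroy the socket only after both halves of the shutdown are complete:
// 'finish' (our FIN sent, writes drained) and 'end' (the peer's FIN / EOF).
function gracefulEnd(socket) {
  let finished = false;
  let ended = false;
  const maybeDestroy = () => {
    if (finished && ended) socket.destroy(); // kernel close() happens here
  };
  socket.on('finish', () => { finished = true; maybeDestroy(); });
  socket.on('end', () => { ended = true; maybeDestroy(); });
  socket.end(); // kernel shutdown() happens on this path
}
```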

@ronag (Member) commented Apr 25, 2021

@mmomtchev Any updates here? This seems to have stalled and I have a hard time figuring out where/how.

@bnoordhuis (Member)

It's been another 1.5 years with no movement so I'll take the liberty of closing this out.

bnoordhuis closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 29, 2022