wasm: fix network leak #13836
This whole change could basically be reduced to this (i.e. on downstream close, close the TCP context).
I don't think we can remove the upstream close event, since that's a breaking change, nor do we even need to remove it in order to fix the issue at hand.
Who exactly are we breaking? I don't think anyone relies on this callback, or they would have complained about it before me.
What's the difference between upstream close and downstream local close? What does upstream close mean in case of direct response?
I don't know; we don't have an exhaustive list of everybody using Proxy-Wasm extensions. Notably, this seems to be broken only when using clusters with a TLS transport socket (see: "onUpstreamData(end_stream=true) is never raised when using cluster with TLS transport socket" #13856).
I think you're confusing downstream/upstream with remote/local close events.
In any case, this bug is not unique to Wasm, and I think the proper fix should emit upstream connection close event (again, see: #13856).
OK, #13856 is not the root cause of #13806, but it results in the same leak.
For #13806, the issue is that the upstream close event won't be triggered if the connection to upstream was never established (e.g. connection timeout, connection refused). Once #13856 is fixed, this event is always triggered whenever the connection to upstream was established, so I don't see the point in removing it.
Like I said originally, this PR should be reduced to "on downstream close event, destroy context" (ideally, we should keep waiting for upstream close if we ever transmitted data in either direction), but without removing the upstream close event.
I can reproduce this without TLS.
Could you share the steps to reproduce so that I can try to debug it?
It might work by accident, but Proxy-Wasm plugins don't officially support "upstream network filters", and I suspect that some things will be broken because of flipped direction and related checks. I'm also pretty sure there are no tests for that use case. Please open an issue if you want to use them as "upstream network filters".
```
fortio server
fortio load -qps 10000 -c 100 -t 10s localhost:8001
```
Filed #13929.
Thanks for the instructions, I was able to reproduce the issue. There are mid-stream I/O errors that are silently dropped in transport sockets, see: #13939. Once the core issue is fixed (my attempt in #13941), the counters always match.
I don't have high confidence that there isn't some other issue lurking in error propagation. If this happened under a trivial config, it's very likely that a more complex config can cause the same leak. We should not assume that upstream connection events are always emitted correctly.