I'm scratching my own itch here.
I'm implementing log-file streaming over HTTP using the usual
Client - NGINX - uWSGI - Flask
stack. When a client requests the stream, it first receives the full log, and then new lines are streamed as they are appended. Those new lines may arrive several minutes apart. If the client disconnects during such an interval, the handler blocks waiting for new log messages and never detects the closed connection. With many such disconnects, this effectively exhausts the worker pool. Writing "dummy" keep-alive data to the socket periodically isn't an option, because the dummy data would appear in the streamed log contents.

To fix this, the async core can detect disconnected clients by keeping the protocol sockets in the event queue and enabling `EPOLLRDHUP` events. (I haven't checked whether other event queue implementations offer similar flags.) The Python plugin then checks a flag to detect closed connections in the response handler.
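The epoll side of this can be sketched in plain Python with `select.epoll`. The loopback server/client pair below is a hypothetical stand-in for uWSGI's protocol socket and the HTTP client; the point is that registering the socket with `EPOLLRDHUP` makes a peer close observable as a readiness event, even though the handler never reads from that socket:

```python
import select
import socket

# Hypothetical stand-in for the server-side protocol socket and the
# streaming client (in the real setup: uWSGI's socket and NGINX/browser).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
client = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

ep = select.epoll()
# Request EPOLLRDHUP: we never expect to read from this socket, but we
# still want to be woken up when the peer closes its end.
ep.register(conn.fileno(), select.EPOLLRDHUP)

# While the client is connected and silent, no event fires.
before_close = ep.poll(timeout=0.1)

client.close()  # client disconnects mid-stream

# The peer's FIN now shows up as an EPOLLRDHUP readiness event.
events = ep.poll(timeout=1.0)

ep.close()
conn.close()
srv.close()
```

A blocked log-streaming worker sitting in this event loop would see the `EPOLLRDHUP` event, set a per-connection "disconnected" flag, and let the response handler bail out instead of waiting indefinitely for the next log line.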