Subtle bugs around closure handling in _unix_pipes.py and _windows_pipes.py #661
Comments
I think this could also affect …
There's also a serious bug in … I think the root cause is back here, where I should have argued for clearing … We should use that strategy going forward.
This addresses a number of issues:

- Fixes a major issue where aclose() called notify_fd_closed() unconditionally, even if the fd was already closed; if the fd had already been recycled this could (and did) affect unrelated file descriptors: python-trio#661 (comment)
- Fixes a theoretical issue (not yet observed in the wild) where a poorly timed close could fail to be noticed by other tasks (python-triogh-661)
- Adds ConflictDetectors to catch attempts to use the same stream from multiple tasks simultaneously
- Switches from inheritance to composition (python-triogh-830)

Still todo:

- Tests for these race conditions that snuck through
- Audit _windows_pipes.py and _socket.py for related issues
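For concreteness, here is a minimal sketch of that first fix, written against today's trio.lowlevel names (notify_closing rather than a notify_fd_closed helper); FdStreamSketch and its attributes are illustrative assumptions, not the actual trio source:

```python
import os

import trio


class FdStreamSketch:
    def __init__(self, fd: int) -> None:
        self._fd = fd
        self._closed = False

    async def aclose(self) -> None:
        if not self._closed:
            self._closed = True
            # Wake tasks blocked in wait_readable/wait_writable on this fd
            # *before* closing it, while the number still refers to our pipe.
            trio.lowlevel.notify_closing(self._fd)
            os.close(self._fd)
        # aclose() should be a checkpoint even when the stream is already closed.
        await trio.lowlevel.checkpoint()
```

The key point is that the notify call only ever happens while self._fd is still a live, un-recycled descriptor, and never fires a second time for a fd number that may now belong to someone else.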
It looks like …
...But I'm not sure about … @oremanj, I guess, might have done some testing of this while implementing it? What happens if you call …?
And on a similar note, do you think we should be raising an exception if two tasks are blocked in …?
I don't have my Windows VM handy right now, but I'm pretty sure I tested at least the read side of this and it did the right thing (closing the handle caused the ongoing read to fail). Thus this code in _windows_pipes: …
Looks like on the write side we might raise BrokenResourceError instead of ClosedResourceError on a concurrent close. Should be an easy fix if the OS behavior is like I remember it.
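If the OS does behave that way, the "easy fix" might look roughly like this; PipeSendStreamSketch and _write_some are placeholder names, not the real _windows_pipes.py internals:

```python
import trio


class PipeSendStreamSketch:
    def __init__(self, handle: int) -> None:
        self._handle = handle
        self._closed = False

    async def send_all(self, data: bytes) -> None:
        if self._closed:
            raise trio.ClosedResourceError
        try:
            await self._write_some(data)
        except trio.BrokenResourceError:
            if self._closed:
                # The pipe "broke" because *we* closed the handle mid-write,
                # so report ClosedResourceError rather than a peer hangup.
                raise trio.ClosedResourceError from None
            raise

    async def _write_some(self, data: bytes) -> None:
        # Stand-in for the real overlapped WriteFile machinery.
        raise NotImplementedError
```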
It also occurs to me belatedly that the Windows version doesn't involve all the tricky retry loop stuff and the associated subtle timing issues, so it'd be pretty easy to write a test and see if it passes.
Oh heh, we actually have a test for the Windows … And we also have a test for the receive-side version, because the generic stream tests do that. (The send-side test requires a "clogged stream", and we skip the clogged stream tests on windows pipes because of missing ….)

So that's all good! But, there is still one issue. In the windows … (Lines 57 to 67 in 3bdd3fd):

I'm actually not 100% clear on whether this is necessary, because the IOCP docs are vague on this. It looks like libuv just does a check that it's not sending more than … So, I think we should either stop splitting up the writes like this, or else we need to do similar fixes to those done for unix pipes in #874.
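If we keep the splitting, the #874-style fix would presumably be to re-check the closed flag after every chunk's await. A sketch under that assumption (CHUNK_SIZE and _write_chunk are made-up names, not trio's real ones):

```python
import trio

CHUNK_SIZE = 65536  # placeholder for the real per-write limit


class ChunkedSenderSketch:
    def __init__(self) -> None:
        self._closed = False

    async def send_all(self, data: bytes) -> None:
        with memoryview(data) as view:
            sent = 0
            while sent < len(view):
                # Each chunk involves an await, so another task may have
                # closed us in the meantime; re-check before touching the handle.
                if self._closed:
                    raise trio.ClosedResourceError
                sent += await self._write_chunk(view[sent : sent + CHUNK_SIZE])

    async def _write_chunk(self, chunk: memoryview) -> int:
        # Stand-in for the real overlapped WriteFile call.
        await trio.lowlevel.checkpoint()
        return len(chunk)
```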
I opened a new issue to discuss splitting up …
This is something I just realized while thinking about a random question someone asked in #twisted:
In unix_pipes.py, suppose that the following sequence of events happens:

1. A task calls receive_some
2. The self._closed check on entry passes
3. The task blocks in wait_readable
4. Another task closes the stream, closing the underlying fd
5. The first task wakes up and calls os.read on the closed fd, which if you're lucky raises an exception (EBADF), or if you're unlucky then a new fd got opened and assigned this value in between steps (4) and (5), and we end up reading from this random fd, which probably corrupts the state of some random other connection.

send_all has analogous issues.

I guess we need to re-check self._closed every time we use the fd, not just once on entry to the function.
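To make that re-check concrete, here is a sketch of the pattern (FdReceiveStreamSketch is a made-up name, not the actual unix_pipes.py code): self._closed is consulted again after every wait_readable, not just on entry.

```python
import os

import trio


class FdReceiveStreamSketch:
    def __init__(self, fd: int) -> None:
        self._fd = fd  # assumed to be in non-blocking mode
        self._closed = False

    async def receive_some(self, max_bytes: int = 65536) -> bytes:
        if self._closed:
            raise trio.ClosedResourceError
        while True:
            try:
                data = os.read(self._fd, max_bytes)
            except BlockingIOError:
                await trio.lowlevel.wait_readable(self._fd)
                # We just slept: the fd may have been closed (and its number
                # recycled) while we were blocked, so check again before the
                # next os.read.
                if self._closed:
                    raise trio.ClosedResourceError
            else:
                return data
```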
This same issue can't happen with SocketStream, because when a socket is closed then it sets the underlying fd to -1, and so step 5 would call recv(-1, ...), which is always an EBADF, and the SocketStream code knows to convert EBADF into ClosedResourceError.

With trio.socket, the -1 thing means you can't get a wild read from a random fd, but there's no explicit handling of this case, so you will get an OSError(EBADF) instead of a proper ClosedResourceError. I guess that would probably be good to fix, though it's less urgent than the unix_pipes.py thing.
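The explicit handling being suggested might look something like this at a call site; recv_or_closed is a hypothetical helper, not an existing trio API:

```python
import errno

import trio


async def recv_or_closed(sock: trio.socket.SocketType, max_bytes: int) -> bytes:
    """Hypothetical wrapper that turns the post-close EBADF into ClosedResourceError."""
    try:
        return await sock.recv(max_bytes)
    except OSError as exc:
        # After close() the socket's fd is -1, so a failing syscall with EBADF
        # can safely be attributed to our own close rather than a stray error.
        if exc.errno == errno.EBADF and sock.fileno() == -1:
            raise trio.ClosedResourceError from exc
        raise
```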