gh-109370: Fix unexpected traceback output in test_concurrent_futures #109780

Merged 1 commit on Sep 26, 2023
3 changes: 2 additions & 1 deletion Lib/concurrent/futures/process.py
@@ -521,7 +521,8 @@ def terminate_broken(self, cause):

         # gh-107219: Close the connection writer which can unblock
         # Queue._feed() if it was stuck in send_bytes().
-        self.call_queue._writer.close()
+        if sys.platform == 'win32':
Member:

Why is it an issue on other platforms?

See also PR gh-109810, which goes further in terminate_broken(): it tries harder to close everything and to terminate the child processes.

Member Author:

Because of the scenarios described in #109397 (comment). It is not possible to safely close a file descriptor in another thread without using some kind of locking, and it would be difficult, if possible at all, to use locking here without rewriting the way the GIL or all I/O works.

It may be a lesser issue on Windows (if handles are never reused; I do not know whether that is so), but I am not sure that it is bug-free there. It may simply be that the bugs rarely manifest on Windows because the queue thread spends most of its time in WaitForMultipleObjects().
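
A minimal sketch of the hazard being described here, simulating the interleaving in a single thread for determinism; the setup is illustrative, not taken from multiprocessing:

```python
# Hypothetical simulation (not from the PR) of the race described above,
# assuming POSIX semantics: close() frees the descriptor number and the
# next open() may hand that number out again.
import os
import tempfile

r, w = os.pipe()                   # thread A intends to write to fd `w`
os.close(w)                        # thread B closes it concurrently
victim = tempfile.TemporaryFile()  # an unrelated open() reuses the number
if victim.fileno() == w:
    # A late os.write(w, ...) from thread A would now corrupt `victim`.
    print(f"fd {w} was reused by an unrelated file")
victim.close()
os.close(r)
```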

Member:

> It is not possible to safely close a file descriptor in another thread without using some kind of locking

Maybe it's time to add "some kind of locking".

There are two cases:

  • Thread B sets the file descriptor/handle attribute to None, and then closes the FD/handle. No other thread was using it, so there is no problem. Later, when thread A tries to use it, a "closed FD/handle" exception is raised.

  • Thread B closes the FD/handle while thread A is using it. That is the important use case for being able to unblock multiprocessing when it gets into trouble: a sick worker process, a sick manager process, well, when things "go wrong".

For me, the case that we should care about is terminate_broken() of the concurrent.futures ProcessPoolExecutor and the associated BrokenProcessPool exception. When everything goes wrong, we should just clean up resources: terminate processes, close the queue, close file descriptors/handles, etc. I don't think that we can still expect worker processes to handle "please stop" commands from the main process.

For example, PR #109810 covers the case where the executor cannot even create new threads. In that case, multiprocessing and concurrent.futures cannot work properly, and it's time to exit as soon as possible.
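
A rough sketch of what the first case above could look like in code; GuardedWriter and its attributes are hypothetical names, not the actual multiprocessing internals:

```python
import threading

class GuardedWriter:
    """Hypothetical "set to None, then close" wrapper around a connection."""

    def __init__(self, conn):
        self._writer = conn
        self._lock = threading.Lock()

    def close(self):
        # Thread B: detach under the lock, then close outside of it.
        with self._lock:
            writer, self._writer = self._writer, None
        if writer is not None:
            writer.close()

    def send_bytes(self, buf):
        # Thread A: check under the lock, but call outside of it, since
        # send_bytes() can block indefinitely.
        with self._lock:
            writer = self._writer
            if writer is None:
                raise OSError("writer already closed")
        writer.send_bytes(buf)   # second case: close() can race with this
```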

Member Author:

> Maybe it's time to add "some kind of locking".

Actually, I do not see how that is possible, even with cooperation from the GIL. It would need support in the kernel to allow switching to another thread only while a system call is waiting. We can reduce the window for the race condition, but it will always be possible, with some small probability, to close and replace the file descriptor just before the system call.
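
In terms of the hypothetical GuardedWriter sketch above, the unavoidable window sits between the check and the blocking call:

```python
# Hypothetical interleaving (not real output) showing the residual race:
#
#   thread A: with _lock: writer = self._writer   # check passes, still open
#   thread A: (releases _lock)
#   thread B: close()                             # fd N is closed...
#   elsewhere: open(...)                          # ...and N is reused
#   thread A: writer.send_bytes(buf)              # syscall hits the wrong fd
#
# Holding _lock across send_bytes() is no fix either: the call can block
# indefinitely, so close() would deadlock behind it.
```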

> There are two cases:

There are many other cases, depending on the definition of "using it". If the file descriptor is closed, or the attribute is set to None, between the point where you can realistically check it and the point where it is used, you have an unsolvable problem.

For now, I am worried about the regression introduced by #107219 on non-Windows platforms. Since the original issue was Windows-only, I prefer to apply the solution (even if it may be imperfect) on Windows, where it fixes a serious issue, and not to apply it on other platforms, where it seems unnecessary but causes a visible regression.

+            self.call_queue._writer.close()
 
         # clean up resources
         self.join_executor_internals()
3 changes: 1 addition & 2 deletions Lib/multiprocessing/connection.py
@@ -42,7 +42,6 @@
 BUFSIZE = 8192
 # A very generous timeout when it comes to local connections...
 CONNECTION_TIMEOUT = 20.
-WSA_OPERATION_ABORTED = 995

 _mmap_counter = itertools.count()

@@ -300,7 +299,7 @@ def _send_bytes(self, buf):
         finally:
             self._send_ov = None
         nwritten, err = ov.GetOverlappedResult(True)
-        if err == WSA_OPERATION_ABORTED:
+        if err == _winapi.ERROR_OPERATION_ABORTED:
             # close() was called by another thread while
             # WaitForMultipleObjects() was waiting for the overlapped
             # operation.
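
The rename looks behavior-preserving: per the removed line above, WSA_OPERATION_ABORTED was hardcoded as 995, which is the same value _winapi exposes as ERROR_OPERATION_ABORTED. A quick check (Windows-only, since _winapi does not exist elsewhere):

```python
import sys

if sys.platform == "win32":
    import _winapi
    # Both the old hardcoded constant and the _winapi one are 995.
    assert _winapi.ERROR_OPERATION_ABORTED == 995
```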