Unable to use concurrent.futures in atexit hook #86813

Open
jd mannequin opened this issue Dec 15, 2020 · 12 comments
Labels
3.10 (only security fixes), 3.11 (only security fixes), 3.12 (bugs and security fixes), docs (Documentation in the Doc dir), stdlib (Python modules in the Lib dir), tests (Tests in the Lib/test dir), triaged (The issue has been accepted as valid by a triager)

Comments

jd (Mannequin) commented Dec 15, 2020

BPO 42647
Nosy @pitrou, @vstinner, @jd, @aeros, @iritkatriel

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


GitHub fields:

assignee = None
closed_at = None
created_at = <Date 2020-12-15.13:59:10.547>
labels = ['3.9', 'type-crash']
title = 'Unable to use concurrent.futures in atexit hook'
updated_at = <Date 2021-02-01.13:41:43.911>
user = 'https://github.com/jd'

bugs.python.org fields:

activity = <Date 2021-02-01.13:41:43.911>
actor = 'corona10'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = []
creation = <Date 2020-12-15.13:59:10.547>
creator = 'jd'
dependencies = []
files = []
hgrepos = []
issue_num = 42647
keywords = []
message_count = 4.0
messages = ['383058', '385584', '385972', '386067']
nosy_count = 5.0
nosy_names = ['pitrou', 'vstinner', 'jd', 'aeros', 'iritkatriel']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'crash'
url = 'https://bugs.python.org/issue42647'
versions = ['Python 3.9']

jd (Mannequin, Author) commented Dec 15, 2020

Python 3.9 introduced a regression with concurrent.futures. The following program works fine on Python 3.8 and earlier but raises an error on 3.9:

import atexit
import concurrent.futures

def spawn():
    with concurrent.futures.ThreadPoolExecutor() as t:
        pass

atexit.register(spawn)

$ python3.9 rep.py
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/Cellar/[email protected]/3.9.0_3/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 37, in <module>
    threading._register_atexit(_python_exit)
  File "/usr/local/Cellar/[email protected]/3.9.0_3/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 1370, in _register_atexit
    raise RuntimeError("can't register atexit after shutdown")
RuntimeError: can't register atexit after shutdown

jd (Mannequin) added labels on Dec 15, 2020: 3.9 (only security fixes), type-crash (A hard crash of the interpreter, possibly with a core dump).
iritkatriel (Member) commented:

b61b818 is the first bad commit
commit b61b818
Author: Kyle Stanley <[email protected]>
Date: Fri Mar 27 15:31:22 2020 -0400

bpo-39812: Remove daemon threads in concurrent.futures (GH-19149)

Remove daemon threads from :mod:`concurrent.futures` by adding
an internal `threading._register_atexit()`, which calls registered functions
prior to joining all non-daemon threads. This allows for compatibility
with subinterpreters, which don't support daemon threads.

aeros (Contributor) commented Jan 30, 2021

Thanks for bringing attention to this, Julien.

While the regression is definitely unfortunate, I'm uncertain about whether the behavior is *correct* to allow in the first place. Specifically, allowing the registration of an atexit hook which uses a ThreadPoolExecutor within it means that the finalization of the executor will be done *after* thread finalization occurs, leaving dangling threads which will have to be abruptly killed upon interpreter exit instead of being safely joined. From my perspective at least, this doesn't seem like something to be encouraged.

Is there a real-world situation where it's specifically necessary or even beneficial to utilize ThreadPoolExecutor at this point after thread finalization rather than earlier in the program? Not that it doesn't exist, but to me it intuitively seems very odd to utilize an executor within an atexit hook; atexit hooks are intended to just be resource finalization/cleanup functions called at interpreter shutdown. Assuming there is a genuine use case I'm not seeing, it may be worth weighing against the decision to convert the executors to not use daemon threads, as I presently don't think there's a way to (safely) allow that behavior without reverting back to using daemon threads.

That said, I'll admit that I'm more than a bit biased as the author of the commit which introduced the regression, so I'll CC Antoine Pitrou (active expert for threading and concurrent.futures) to help make the final decision.

jd (Mannequin, Author) commented Feb 1, 2021

> Is there a real-world situation where it's specifically necessary or even beneficial to utilize ThreadPoolExecutor at this point after thread finalization rather than earlier in the program? Not that it doesn't exist, but to me it intuitively seems very odd to utilize an executor within an atexit hook; atexit hooks are intended to just be resource finalization/cleanup functions called at interpreter shutdown. Assuming there is a genuine use case I'm not seeing, it may be worth weighing against the decision to convert the executors to not use daemon threads, as I presently don't think there's a way to (safely) allow that behavior without reverting back to using daemon threads.

To put that in perspective, here is the original issue that triggered this bug for me:

Traceback (most recent call last):
  File "/root/project/ddtrace/profiling/scheduler.py", line 50, in flush
    exp.export(events, start, self._last_export)
  File "/root/project/ddtrace/profiling/exporter/http.py", line 186, in export
    self._upload(client, self.endpoint_path, body, headers)
  File "/root/project/ddtrace/profiling/exporter/http.py", line 189, in _upload
    self._retry_upload(self._upload_once, client, path, body, headers)
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
    do = self.iter(retry_state=retry_state)
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
    return fut.result()
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
    result = fn(*args, **kwargs)
  File "/root/project/ddtrace/profiling/exporter/http.py", line 193, in _upload_once
    client.request("POST", path, body=body, headers=headers)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 1301, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 1010, in _send_output
    self.send(msg)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 950, in send
    self.connect()
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/http/client.py", line 921, in connect
    self.sock = self._create_connection(
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/socket.py", line 88, in create_connection
    addrs = list(getaddrinfo(host, port, 0, SOCK_STREAM))
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_socketcommon.py", line 247, in getaddrinfo
    addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags)
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/hub.py", line 841, in _get_resolver
    self._resolver = self.resolver_class(hub=self) # pylint:disable=not-callable
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/resolver/thread.py", line 39, in __init__
    self.pool = hub.threadpool
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/hub.py", line 865, in _get_threadpool
    self._threadpool = self.threadpool_class(self.threadpool_size, hub=self)
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/hub.py", line 860, in threadpool_class
    return GEVENT_CONFIG.threadpool
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_config.py", line 50, in getter
    return self.settings[setting_name].get()
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_config.py", line 146, in get
    self.value = self.validate(self._default())
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_config.py", line 248, in validate
    return self._import_one_of([self.shortname_map.get(x, x) for x in value])
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_config.py", line 223, in _import_one_of
    return self._import_one(candidates[-1])
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/_config.py", line 237, in _import_one
    module = importlib.import_module(module)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/root/project/.tox/py39-profile-gevent/lib/python3.9/site-packages/gevent/threadpool.py", line 748, in <module>
    class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor):
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/__init__.py", line 49, in __getattr__
    from .thread import ThreadPoolExecutor as te
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/thread.py", line 37, in <module>
    threading._register_atexit(_python_exit)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/threading.py", line 1370, in _register_atexit
    raise RuntimeError("can't register atexit after shutdown")
RuntimeError: can't register atexit after shutdown

What's happening is that the ddtrace library registers an atexit hook that does an HTTP call. Because the application runs under gevent, some gevent modules that had not yet been loaded get imported during the atexit() hook, so the import of concurrent.futures.thread happens very late, at the point where the interpreter is already shutting down.

I'm totally fine blaming gevent here if you prefer. The problem is that there's nothing preventing any library call from being made in an atexit() hook, and any library could decide to use concurrent.futures without the library user being able to do anything about it, except maybe making sure concurrent.futures is loaded very early in the program. However, having to load this library even when you don't use it, just to be sure it does not break, would violate separation of concerns.

At the very least, at this stage, it might be Python's responsibility to make sure all threading._register_atexit calls are made whether or not the library is used (i.e., that would mean loading concurrent.futures together with threading unconditionally).

jamesra commented Jul 1, 2022

I'm running into the same issue after attempting to migrate my code to Python 3.10. My use case: there is a temporary folder full of files, and I have an atexit hook which deletes the temporary folder recursively in parallel. The temporary folder is on a network mount and shutil.rmtree runs too slowly. I'm far from a Python expert, but it seems very odd to remove API functionality simply because a notification has been triggered. I'm not really sure how to work around this other than registering a separate "atexit safe" rmtree implementation. How is a user supposed to know which calls are safe to use in atexit and which are not?

In case it is informative, this is the implementation for my use case. I register this function to be called by atexit. (Credit goes to a Stack Overflow post I read while trying to figure out why shutil.rmtree was so slow.)

import atexit
import concurrent.futures
import os

def rmtree(directory):
    # Walk bottom-up: delete the files in each directory in parallel, then the
    # (now empty) subdirectories in parallel.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        for root, dirs, files in os.walk(directory, topdown=False):
            try:
                files_futures = executor.map(os.remove, [os.path.join(root, f) for f in files])
                list(files_futures)  # Force the map operation to complete
            except Exception as e:
                prettyoutput.error(f'{e}')  # prettyoutput is our project's logging helper

            try:
                dir_futures = executor.map(os.rmdir, [os.path.join(root, d) for d in dirs])
                list(dir_futures)  # Force the map operation to complete
            except Exception as e:
                prettyoutput.error(f'{e}')

    os.rmdir(directory)

atexit.register(rmtree, _sharedTempRoot)

robertfasano added a commit to robertfasano/labAPI that referenced this issue Sep 8, 2022
This is necessary for Python >= 3.9 due to an issue referenced in python/cpython#86813
gpshead added labels on Sep 13, 2022: type-bug (An unexpected behavior, bug, or error), stdlib (Python modules in the Lib dir).
gpshead (Member) commented Sep 13, 2022

> How is a user supposed to know which calls are safe to use in atexit and which are not?

We lack a written rule for this. It would be difficult to write such a rule and have it bear an accurate resemblance to reality. In general, things like atexit and __del__ are finalizers, called when an interpreter may be in the final stages of shutting down. So doing "complicated" things from them is not a good idea.

It may be possible to make some atexit-using libraries like concurrent.futures detect and avoid this specific problem. If we do that, we definitely need a regression test for the scenario.

jamesra commented Sep 13, 2022

If the behavior cannot be documented, something is wrong. Nothing in the documentation of atexit communicates that any interpreter or language capabilities will be missing. It is reasonable for a programmer to expect atexit callbacks to execute prior to any shutdown procedure, to ensure the interpreter and built-in libraries are fully capable.

There seem to be two atexit consumers. One is the programmer performing cleanup; the second is the interpreter and built-in packages performing shutdown. It really seems those need to be kept separate. I'm a bit confused about execution order too, since atexit claims to call registered functions in reverse order. How is my atexit call being invoked after ThreadPoolExecutor if atexit registrations are last-in-first-out? Even so, perhaps a fix could be to allow registering an atexit hook inside an atexit call: if the user's atexit function invokes a library which also requires atexit, the interpreter would call the library's atexit function in the documented order after the user function exits. Or have an internal atexit system for built-ins and the interpreter that runs afterward.

gpshead (Member) commented Sep 14, 2022

Workaround for the original problem: import concurrent.futures.thread at the top level in your program. Your code depends on that module being fully initialized, because it wants to use it in a context where it is too late to import it. In 3.9+ the library does not support being imported during shutdown from within an atexit handler. Yes, that broke in 3.9. There was no testing to ensure that it works, because testing everything in an atexit shutdown context is not something our test suite does.
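
Applied to the original reproducer, a minimal sketch of that workaround might look like this (assuming the same rep.py structure as in the first comment; the eager submodule import registers the internal threading atexit hook at startup, so nothing has to be imported during shutdown):

import atexit
import concurrent.futures.thread  # eager import: the module-level threading._register_atexit() call happens now

def spawn():
    with concurrent.futures.ThreadPoolExecutor() as t:
        pass

atexit.register(spawn)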

Test and documentation improvement contributions welcome if you want to see that changed.

> If the behavior cannot be documented, something is wrong. Nothing in the documentation of atexit communicates that any interpreter or language capabilities will be missing.

Best not to assume that silence on a topic means anything goes. concurrent.futures is not an interpreter or language capability. It is a complicated library built on top of primitives of the language, no different from anything you'd get from PyPI. Unless something explicitly says it works during interpreter shutdown, do not assume that it will. Try it and find out (which you did, in <= 3.8). But even when it works, if there is no explicit test for it, it cannot be guaranteed to continue working in such situations across releases, nor can the change in behavior be expected to be noticed or documented ahead of time.

Assumptions are made by libraries all over the place about the program being in a fully usable state. Those assumptions are often no longer true by the time any form of finalizer is running. So when you call into any library, expecting it to be usable in that context is not always reasonable. Code defensively in finalizers. When unsure whether something works, include a test of the capability in your own test suite to notice regressions early, so new workarounds can be adopted.

> I'm a bit confused about execution order too, since atexit claims to call registered functions in reverse order. How is my atexit call being invoked after ThreadPoolExecutor if atexit registrations are last-in-first-out?

It isn't. concurrent.futures.thread had not yet been imported, so it hadn't registered a handler yet; your atexit handler triggered the import. Thus the error.

[old comment:]

> At the very least, at this stage, it might be Python's responsibility to make sure all threading._register_atexit calls are made whether or not the library is used (i.e., that would mean loading concurrent.futures together with threading unconditionally).

We can't do that. This would require us to import every library up front. Interpreter startup time and resource consumption would skyrocket. Unconditional imports of things that won't be used are highly undesirable. That is one reason why concurrent.futures switched to on-demand imports of needed functionality in Python 3.7.
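
A small sketch of what that on-demand import looks like in practice (my understanding of CPython 3.7+ behavior; the sys.modules checks are purely illustrative):

import sys
import concurrent.futures

# Importing the package alone does not load the executor submodules.
print("concurrent.futures.thread" in sys.modules)   # expected: False

# The first attribute access triggers the lazy import through the package's
# module-level __getattr__, and on 3.9+ that import is what calls
# threading._register_atexit(_python_exit).
concurrent.futures.ThreadPoolExecutor
print("concurrent.futures.thread" in sys.modules)   # expected: True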

gpshead (Member) commented Sep 14, 2022

To close out this issue, it's just a documentation update and an added unittest:

  1. [Docs] Add notes to the concurrent.futures docs about the limitation and the need to manually import concurrent.futures.thread or import concurrent.futures.process if their respective concurrent.futures Executor classes are to be used from an atexit handler.
  2. [Test] Add an explicit test that ThreadPoolExecutor and ProcessPoolExecutor both work from an atexit handler when their respective modules have been pre-imported.
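
A rough sketch of what such a regression test might look like (hypothetical test and class names, not the actual CPython test suite; it reruns the original reproducer in a subprocess with the pre-import in place and checks that shutdown stays clean):

import subprocess
import sys
import textwrap
import unittest

class AtexitExecutorPreImportTest(unittest.TestCase):
    def test_threadpool_executor_in_atexit_after_preimport(self):
        # The child pre-imports concurrent.futures.thread at top level, then
        # uses a ThreadPoolExecutor from an atexit handler, as in the original report.
        code = textwrap.dedent("""
            import atexit
            import concurrent.futures.thread  # pre-import registers the hook early

            def spawn():
                with concurrent.futures.ThreadPoolExecutor():
                    pass

            atexit.register(spawn)
        """)
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True)
        self.assertNotIn("can't register atexit after shutdown", proc.stderr)
        self.assertEqual(proc.returncode, 0, proc.stderr)

if __name__ == "__main__":
    unittest.main()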

gpshead added labels on Sep 14, 2022: tests (Tests in the Lib/test dir), docs (Documentation in the Doc dir), 3.11 (only security fixes), 3.10 (only security fixes), 3.12 (bugs and security fixes), triaged (The issue has been accepted as valid by a triager); and removed labels: type-crash (A hard crash of the interpreter, possibly with a core dump), topic-multiprocessing.
roychowdhuryrohit-dev commented:

Is this issue fixed in the latest versions of Python?

gpshead (Member) commented May 26, 2023

Given the direction things are going over in #104690, we're quite likely to block the ability to create new threads or forked processes from within an atexit handler.

Tobotimus (Contributor) commented Aug 14, 2023

For what it's worth, I thought I'd add my use-case here since I've been affected by the new behaviour after upgrading to Python 3.10.

We use atexit to upload a logfile to S3 upon termination of the program. The library we use to upload to S3 (boto3) uses a ThreadPoolExecutor for file uploads.

In general, I imagine it's a fairly common usage pattern to have exit handlers to do some communication over a network in order to save diagnostics or some such. And such communication is often done in threads, depending on the library being used.

I'm personally happy to change/reconfigure the library we're using to upload our file so that it's done in the main thread. But I would humbly request caution over any further limitations placed on exit handlers, especially any which limit network communication in any way.

My workaround if you're interested

Just in case anyone else is doing the same thing as me (uploading a file to S3 using boto3 from within an exit handler), I've got you covered.

To force boto3 to upload the file from the main thread, you can pass a TransferConfig which disables the use of secondary threads:

import boto3.s3.transfer

client = boto3.client("s3")
client.upload_file(
  Filename="path/to/file",
  Bucket="my-bucket",
  Key="my-file",
  Config=boto3.s3.transfer.TransferConfig(use_threads=False),
)
