Inconsistent cross platform behavior when using multiprocessing in spawn mode #746
Hey @TCatshoek, glad you like Loguru, and thanks for the fully reproducible example! So, I haven't investigated much yet, but a wild guess would be that the internal queue Loguru creates when `enqueue=True` is used belongs to the default multiprocessing context, which is "fork" on Linux, and therefore doesn't match the "spawn" context in which your workers are started.

A quick workaround is to make sure the "spawn" start method is selected before `logger.add()` is called, so that the queue is created in the same context as the subprocesses. For now, I can't think of any better solution. I'll think about it, but I'm not sure it's even fixable from within Loguru.
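In concrete terms, a minimal sketch of that ordering (the sink and options are taken from the reproduction below, not mandated by Loguru):

```python
import multiprocessing
import sys

from loguru import logger

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")  # Select the context first...
    logger.remove()
    logger.add(sys.stderr, enqueue=True)       # ...so the internal queue matches the "spawn" context.
```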
Awesome! Thanks for getting back to me so quickly. It didn't occur to me that I was setting things up in a different context than the one being used by the subprocesses. Changing main.py to:

```python
import multiprocessing
import sys

from loguru import logger

import workers


def main():
    logger.remove()
    logger.add(sys.stderr, enqueue=True)
    with multiprocessing.Pool(4, initializer=workers.set_logger,
                              initargs=(logger,)) as pool:
        pool.map(
            workers.log_test,
            list(range(10))
        )


if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
    main()
```

does indeed fix the problem!
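Two details here seem to matter: the "spawn" start method is set before `logger.add()` creates its internal queue, and the pool initializer hands the fully configured logger from the parent to each freshly spawned worker before any tasks run.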
Yeah, adding a note in the documentation is the very least we can do. PR is welcome for sure! :)
Added the note in #750!
For information, I implemented a new `context` argument for `logger.add()`. It will be available when I release the next Loguru version, and should be used this way (a start method name is also accepted as the `context` value):

```python
import multiprocessing

from loguru import logger

context = multiprocessing.get_context("spawn")
logger.add("file.log", enqueue=True, context=context)  # Use "spawn" instead of the default "fork" on Linux.

with context.Pool(4) as pool:
    ...
```
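As the parenthetical above suggests, the start method name should also work directly, so the explicit context object is only needed if you want to reuse it for the pool:

```python
logger.add("file.log", enqueue=True, context="spawn")
```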
Hi Delgan! First of all, thanks for the awesome library.
I'm currently developing a tool that uses multiprocessing and is supposed to run on Windows, Linux, and macOS; to guarantee consistent behavior, we use multiprocessing in "spawn" mode on all platforms.
However, when using multiprocessing in "spawn" mode on Linux and setting up logging as suggested here, logging anything in the subprocesses seems to freeze or kill them.
The following minimal example should reproduce the issue on Linux:
main.py:
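A plausible sketch of this file, inferred from the corrected version quoted earlier in the thread (the logger is configured before the "spawn" start method takes effect; exact details are assumptions):

```python
import multiprocessing
import sys

from loguru import logger

import workers

# Hypothetical reconstruction: the handler (and its internal queue) is
# created while the default "fork" context is still in effect.
logger.remove()
logger.add(sys.stderr, enqueue=True)


def main():
    with multiprocessing.Pool(4, initializer=workers.set_logger,
                              initargs=(logger,)) as pool:
        pool.map(
            workers.log_test,
            list(range(10))
        )


if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
    main()
```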
workers.py:
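Similarly, workers.py must define the `set_logger` initializer and `log_test` function used above; a minimal sketch consistent with the reported output (exact messages are assumptions):

```python
# Hypothetical reconstruction of workers.py.
logger = None


def set_logger(logger_):
    # Pool initializer: runs once in each worker process and installs
    # the logger object passed from the parent.
    global logger
    logger = logger_


def log_test(n):
    print("before log")           # The only line that appears in the failing run.
    logger.info("Logging {}", n)  # Freezes / crashes under the mismatched context.
    print("after log")
```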
Only "before log" is printed, the log message is not and neither is the print statement after. I have also ran the same code on a windows machine, and it seems to work as expected there. Running in "fork" mode on linux also works as expected.
The subprocesses spawned by python seem to actually segfault, if it helps here's what coredumpctl says:
Version info: