Fix deadlock caused by generational GC and object finalizers #713
Conversation
tests/test_locks.py
```python
def test_no_deadlock_on_generational_garbage_collection():
    """Regression test for https://github.com/Delgan/loguru/issues/712

    Assert that deadlocks do not occur when a cyclic isolate containing log output in
    finalizers is collected by generational GC, during the output of another log message.
    """
```
I'm not particularly happy with the test: when it fails, it deadlocks rather than reporting a failure, and the test process then needs a SIGTERM to exit.
This could be worked around by invoking a subprocess and applying a timeout, but that requires a fair bit of scaffolding and makes the code harder to grok. Happy to do this if it's preferred.
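For reference, a rough sketch of the subprocess-and-timeout approach mentioned above. The function name, timeout value, and placeholder body are all illustrative, not part of this PR.

```python
import subprocess
import sys
import textwrap


def test_no_deadlock_in_subprocess():
    # Run the reproduction in a child process so a deadlock surfaces as a
    # TimeoutExpired error instead of hanging the whole test run.
    repro = textwrap.dedent(
        """
        # ... the reproduction from the test body would go here ...
        print("done")
        """
    )
    result = subprocess.run(
        [sys.executable, "-c", repro],
        timeout=30,  # seconds; subprocess.run kills the child and raises on timeout
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0
    assert "done" in result.stdout
```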
tests/test_locks.py
```python
    # WHEN there are cyclic isolates in memory which log on GC
    # AND logs are produced long enough to trigger generational GC
    for _ in range(1000):
```
This number is somewhat arbitrary, unfortunately. The generational GC will trigger after a certain amount of memory is allocated. Possibly this depends on the amount of memory available? On my machine `range(10)` did not reproduce the issue, but `range(100)` did. I went for `range(1000)` to be safe. It's still quite quick: about 130ms on my machine for the entire test.
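For what it's worth, CPython's generational collector is driven by allocation counts rather than memory size: a generation-0 collection runs once net allocations exceed the first threshold (700 by default). If the loop count ever proves flaky, one possible alternative (purely a sketch, not what this PR does) would be to lower the threshold for the duration of the test:

```python
import gc

print(gc.get_threshold())  # typically (700, 10, 10)

old_thresholds = gc.get_threshold()
gc.set_threshold(1)  # make generational collection trigger almost immediately
try:
    for _ in range(10):  # a small loop now suffices to exercise the GC path
        ...  # allocate cyclic isolates and emit log messages here
finally:
    gc.set_threshold(*old_thresholds)
```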
I believe the failing tests all relate to an implicit upgrade of mypy. I will push a commit to ensure mypy runs with the same version as it did for the most recent commit.
The remaining failing test appears to be a network failure uploading the coverage report. I can't re-run it, and I don't want to push a commit that re-runs all of the tests unnecessarily.
Closing due to issues with re-entrant locks as explained here: #712 (comment)
This PR aims to solve #712 by using re-entrant locks in the logging handler. This allows the same thread to acquire the lock multiple times, which appears to resolve deadlocks when the generational (or cyclic) garbage collector runs a finalizer that emits a log during the handling of another log message.
I'm not clear on whether using a re-entrant lock here allows unsafe situations in other cases, but the existing threading and multiprocessing tests all pass.
Resolves #712
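To illustrate the re-entrancy behaviour relied on here, a standalone sketch (not loguru's internal handler code):

```python
import threading

# A plain Lock deadlocks if the thread that holds it tries to acquire it again,
# which is what happens when a GC-triggered finalizer logs while the same
# thread is already inside a logging call holding the handler lock.
lock = threading.Lock()
# lock.acquire(); lock.acquire()  # second acquire would block forever

# An RLock (re-entrant lock) allows the owning thread to acquire it again, so
# the nested log call can proceed; it must be released once per acquisition.
rlock = threading.RLock()
rlock.acquire()
rlock.acquire()  # succeeds because the same thread already owns the lock
rlock.release()
rlock.release()
```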