Fix deadlock caused by generational GC and object finalizers #713

Closed

Conversation

@jacksmith15 (Contributor) commented Sep 27, 2022

This PR aims to solve #712 by using re-entrant locks in the logging handler. This allows the same thread to acquire the lock multiple times, which appears to resolve the deadlock that occurs when the generational (or cyclic) garbage collector runs and a finalizer emits a log during the handling of another log message.

I'm not clear on whether using a re-entrant lock here opens up unsafe situations in other cases, but the existing threading and multiprocessing tests all pass.

Resolves #712
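
For context, a minimal sketch of why a re-entrant lock avoids the deadlock. This is illustrative only; SketchHandler and its emit method are invented here and are not loguru's actual handler code:

import sys
import threading

class SketchHandler:
    # Hypothetical stand-in for a log handler, not loguru's implementation.
    def __init__(self):
        # With a plain threading.Lock(), a thread that re-enters emit()
        # (e.g. via an object's __del__ run by the cyclic GC mid-emit)
        # would block on a lock it already holds, i.e. deadlock.
        # threading.RLock() lets the same thread acquire it again.
        self._lock = threading.RLock()

    def emit(self, message):
        with self._lock:
            # Any allocation here can trigger generational GC; if a collected
            # object's finalizer logs, emit() is re-entered on the same thread
            # while the lock is still held.
            sys.stderr.write(f"{message}\n")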

Comment on lines 22 to 27
def test_no_deadlock_on_generational_garbage_collection():
    """Regression test for https://github.com/Delgan/loguru/issues/712

    Assert that deadlocks do not occur when a cyclic isolate containing log output in
    finalizers is collected by generational GC, during the output of another log message.
    """
@jacksmith15 (Contributor Author) commented:
I'm not particularly happy with the test: when it fails it deadlocks rather than reporting a failure, and the test process then requires a SIGTERM to exit.

This could be worked around by running the reproduction in a subprocess and applying a timeout, but that requires a fair bit of scaffolding and makes the code harder to grok. Happy to do this if it's preferred; a rough sketch of that approach is below.
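
Rough sketch of the subprocess-with-timeout approach; the script body, the test name test_no_deadlock_in_subprocess and the 30-second timeout are placeholders, not code from this PR:

import subprocess
import sys

# Hypothetical reproduction script: cyclic isolates whose finalizers log,
# created while regular log messages are being emitted.
REPRO_SCRIPT = """
import sys
from loguru import logger

logger.remove()
logger.add(sys.stderr)

class Cyclic:
    def __init__(self):
        self.self_ref = self  # cyclic isolate, only collectable by the cyclic GC

    def __del__(self):
        logger.info("finalizer log")

for _ in range(1000):
    Cyclic()
    logger.info("regular log")
"""

def test_no_deadlock_in_subprocess():
    # If the child deadlocks, subprocess.run kills it when the timeout expires
    # and raises TimeoutExpired, failing the test instead of hanging the session.
    subprocess.run([sys.executable, "-c", REPRO_SCRIPT], timeout=30, check=True)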


    # WHEN there are cyclic isolates in memory which log on GC
    # AND logs are produced long enough to trigger generational GC
    for _ in range(1000):
@jacksmith15 (Contributor Author) commented:

This number is somewhat arbitrary, unfortunately. The generational GC will trigger after a certain amount of memory is allocated, and possibly this depends on the amount of memory available? On my machine range(10) did not reproduce the issue, but range(100) did, so I went for range(1000) to be safe. It's still quite quick - 130ms on my machine for the entire test.
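
As an aside, CPython's cyclic collector is driven by allocation/deallocation counts (gc.get_threshold(), by default (700, 10, 10)) rather than by total memory, and a test could lower those thresholds to make collection fire more predictably. A small sketch, with the body of the try block left as a placeholder:

import gc

print(gc.get_threshold())  # typically (700, 10, 10) allocations per generation

# Lowering the generation-0 threshold makes collection near-deterministic for
# a test, at the cost of collecting far more often inside the block.
old_thresholds = gc.get_threshold()
gc.set_threshold(1)
try:
    pass  # ... allocate cyclic isolates and emit logs here ...
finally:
    gc.set_threshold(*old_thresholds)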

@jacksmith15 (Contributor Author) commented:

I believe the failing tests all relate to an implicit upgrade of mypy from 0.971 to 0.981 (the version is unpinned). They haven't made release notes available for v0.981 yet, but the diff is here.

I will push a commit to ensure mypy runs with the same version as it did for the most recent commit on master.

@jacksmith15 (Contributor Author) commented:

The remaining failing test appears to be a network failure uploading the coverage report. I can't re-run it, and I don't want to push a commit that re-runs all of the tests unnecessarily.

@jacksmith15 (Contributor Author) commented:

Closing due to issues with re-entrant locks as explained here: #712 (comment)

Linked issue: Deadlock when objects emit messages at garbage collection (#712)