bpo-46070: _PyGC_Fini() untracks objects #30577

Merged · 1 commit merged into python:main from vstinner:gc_fini_untrack on Jan 13, 2022
Conversation

@vstinner vstinner commented Jan 13, 2022

Py_EndInterpreter() now explicitly untracks all objects currently
tracked by the GC. Previously, if an object was used later by another
interpreter, calling PyObject_GC_UnTrack() on the object crashed if
the previous or the next object of the PyGC_Head structure became a
dangling pointer.

https://bugs.python.org/issue46070
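
For context, a rough sketch of what the new cleanup could look like, based on the description above and the hunks quoted in the review below. This is not the verbatim patch from commit 1a4d1c1: the helper name gc_fini_untrack (taken from the branch name), GEN_HEAD(), NUM_GENERATIONS, GCState and _Py_IsMainInterpreter() are assumed from gcmodule.c conventions.

/* Sketch only: when a sub-interpreter shuts down, walk every GC generation
   and untrack each object still tracked, so that a later
   PyObject_GC_UnTrack() call cannot follow a dangling PyGC_Head pointer. */
static void
gc_fini_untrack(PyGC_Head *list)
{
    PyGC_Head *gc;
    for (gc = GC_NEXT(list); gc != list; gc = GC_NEXT(list)) {
        PyObject *op = FROM_GC(gc);
        _PyObject_GC_UNTRACK(op);   /* unlinks op from the list */
    }
}

void
_PyGC_Fini(PyInterpreterState *interp)
{
    GCState *gcstate = &interp->gc;
    Py_CLEAR(gcstate->garbage);
    Py_CLEAR(gcstate->callbacks);

    if (!_Py_IsMainInterpreter(interp)) {
        for (int i = 0; i < NUM_GENERATIONS; i++) {
            gc_fini_untrack(GEN_HEAD(gcstate, i));
        }
    }
}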

@erlend-aasland (Contributor) commented:

I do not know the GC, so I'm not the right person to review this change. It sounds good to me, though, FWIW 😄

@vstinner merged commit 1a4d1c1 into python:main on Jan 13, 2022

@miss-islington (Contributor) commented:
Thanks @vstinner for the PR 🌮🎉. I'm working now to backport this PR to 3.9 and 3.10.
🐍🍒⛏🤖

@vstinner deleted the gc_fini_untrack branch on January 13, 2022 at 18:28
miss-islington pushed a commit to miss-islington/cpython that referenced this pull request Jan 13, 2022
(cherry picked from commit 1a4d1c1)

Co-authored-by: Victor Stinner <[email protected]>
@bedevere-bot removed the "needs backport to 3.10" label on Jan 13, 2022
@bedevere-bot commented:

GH-30578 is a backport of this pull request to the 3.10 branch.

miss-islington pushed a commit to miss-islington/cpython that referenced this pull request Jan 13, 2022
(cherry picked from commit 1a4d1c1)

Co-authored-by: Victor Stinner <[email protected]>
@bedevere-bot commented:

GH-30579 is a backport of this pull request to the 3.9 branch.

@ericsnowcurrently (Member) left a comment:


The approach seems good to me, but I'd want to hear what @pablogsal, @pitrou, @iritkatriel, etc. have to say.

Comment on lines +2184 to +2188
// bpo-46070: Explicitly untrack all objects currently tracked by the
// GC. Otherwise, if an object is used later by another interpreter,
// calling PyObject_GC_UnTrack() on the object crashes if the previous
// or the next object of the PyGC_Head structure became a dangling
// pointer.
@ericsnowcurrently (Member) commented:

FWIW, this change would be unnecessary if no objects were ever shared. It may be worth adding such a note to this comment, so it's clear that this extra cleanup code could be removed at that point.

@vstinner (Member, Author) replied:

Once we are sure that it is really impossible to share any object, we can just remove this code.

PyGC_Head *gc;
for (gc = GC_NEXT(list); gc != list; gc = GC_NEXT(list)) {
    PyObject *op = FROM_GC(gc);
    // _PyObject_GC_UNTRACK() unlinks op, so the next live node is
    // re-read from the list head on every iteration.
    _PyObject_GC_UNTRACK(op);
}
@ericsnowcurrently (Member) commented:

So one or more objects "owned" by another interpreter (AKA OTHER) could still hold a reference to one of these untracked objects (AKA UNTRACKED). The bug we're fixing demonstrates that's a possibility.

Could it make it harder to break cycles? Could it lead to memory leaks?

How will the UNTRACKED object impact GC for any OTHER object holding a reference to it?

What happens if one of these UNTRACKED objects is involved in a cycle with one or more OTHER objects? Can we still clean them all up?

What happens if several of these UNTRACKED objects are involved in a cycle, but one or more OTHER objects holds a reference to one of them? Can we still clean them all up?

@vstinner (Member, Author) replied:

If an object is not tracked by the GC and is part of a ref cycle, the GC is unable to break the reference cycle, and so yes, memory leaks. Previously, Python was crashing. Well, the situation is less bad :-)

This change doesn't introduce the memory leak. An object cannot be tracked in two GC lists at the same time: _PyObject_GC_TRACK() has an assertion for that. The leak was already there.

If an object is created in interpreter 1, it's tracked by the GC of interpreter 1. If it's copied to interpreter 2 and interpreter 1 is destroyed, the interpreter 2 GC is not going to automatically track the object. Moreover, interpreter 1 cannot guess whether another interpreter is using the object or not.

IMO untracking all objects is the least bad solution.

IMO the only way to ensure that no memory is leaked is to prevent sharing objects between interpreters, and rely on existing mechanisms (GC and finalizers) to release memory. So: continue to convert static types to heap types, continue to update C extensions to multi-phase initialization, continue moving globals into module state and per-interpreter structures, etc.
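
For reference, the assertion mentioned above is conceptually something like the following simplified sketch; it is not the exact CPython source of _PyObject_GC_TRACK().

/* Simplified sketch of the guard in _PyObject_GC_TRACK(): tracking an
   object that is already linked into a GC list is treated as a bug. */
#include <assert.h>

static inline void
_PyObject_GC_TRACK(PyObject *op)
{
    assert(!_PyObject_GC_IS_TRACKED(op)
           && "object already tracked by the garbage collector");
    /* ... link op into generation 0 of the current interpreter's GC ... */
}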

@ericsnowcurrently (Member) replied:

so yes, memory leaks. Previously, Python was crashing. Well, the situation is less bad :-)

🙂

@vstinner (Member, Author) commented Jan 14, 2022:

In Python 3.8, when the GC state was shared, it seems like any interpreter could run a GC collection. It couldn't happen in parallel, thanks to the "collecting" flag. The GC is not re-entrant and simply does nothing (returns immediately) if a collection is already in progress.

I cannot say if Python 3.8 was able to break the reference cycles that Python 3.9 and newer can no longer break: when an object is created in one interpreter and then "migrates" to another interpreter.
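
A minimal illustration of the guard described above. The names (gc_state, collecting, collect_if_idle) are illustrative; the Python 3.8 gcmodule.c code is structured differently but follows the same idea.

/* Non-re-entrant collection guard: if a collection is already running
   (possibly triggered while the GC state is shared), do nothing. */
struct gc_state {
    int collecting;              /* non-zero while a collection is running */
};

static long
collect_if_idle(struct gc_state *state)
{
    if (state->collecting) {
        return 0;                /* re-entrant call: bail out */
    }
    state->collecting = 1;
    long n = 0;                  /* ... run the actual collection here ... */
    state->collecting = 0;
    return n;
}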

miss-islington added a commit that referenced this pull request Jan 13, 2022
(cherry picked from commit 1a4d1c1)

Co-authored-by: Victor Stinner <[email protected]>
vstinner added a commit that referenced this pull request Jan 13, 2022

(cherry picked from commit 1a4d1c1)
@pablogsal (Member) commented:

This approach looks sensible, but another solution to consider (it may have problems) is to move all these objects into the main interpreter GC state. This way, more reference cycles can be broken there and the memory can be reclaimed when the main interpreter does a GC pass.

Thoughts on this approach?
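
A rough sketch of what that alternative could look like, assuming gcmodule.c-style helpers such as gc_list_merge(), GEN_HEAD() and NUM_GENERATIONS; it illustrates the idea, it is not a proposed patch.

/* Hypothetical alternative to untracking: splice every object still
   tracked by the dying interpreter into the main interpreter's oldest
   generation, so a later main-interpreter GC pass can break the cycles. */
static void
gc_fini_migrate_to_main(GCState *dying, GCState *main_gc)
{
    for (int i = 0; i < NUM_GENERATIONS; i++) {
        gc_list_merge(GEN_HEAD(dying, i),
                      GEN_HEAD(main_gc, NUM_GENERATIONS - 1));
    }
}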

@vstinner (Member, Author) replied:

This approach looks sensible but another solution to consider (it may have problems) is to move all these objects into the main interpreter GC state. This way, more reference cycles can be broken there and the memory can be reclaimed when the main interpreter does a GC pass. Thoughts on this approach?

For a simple object, it should be fine. But I don't understand how it would work for more complex structures with finalizers. Which interpreter is supposed to call the finalizer? Which interpreter "owns" the object? There is the "strong reference" ownership, but also the GC list ownership.

IMO we would avoid many not-fun-at-all problems by preventing objects from traveling between interpreters ;-)

@pitrou (Member) commented Jan 15, 2022:

I agree with @vstinner. Another related issue is: if objects can travel between interpreters, how would you make the object allocator per-interpreter (if you want to have a per-interpreter GIL)?

@pablogsal (Member) commented:

I am convinced by these arguments 👍

@vstinner (Member, Author) commented Jan 16, 2022:

I understand the motivation to "clean up everything" by breaking the reference cycles. But for me, the problem is way bigger than that. If you have complex objects with finalizers running complex code, the interpreter in which they are executed matters: for example, if it's a database connection, you may care in which interpreter it's run.

It reminds me of a fork() problem, when an application creates many objects before calling fork(). The consistency of a database connection, a socket, or a file becomes much harder to control if two processes can each access it through a "memory copy" of these objects. For example, a common problem is when a process opens a file and then forks: the file is not fully closed until all processes have closed the "shared" file descriptor (technically, there are multiple file descriptor copies all pointing to the same open file in the kernel).
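
A small, self-contained illustration of the file-descriptor point; the path /tmp/example.txt and the program itself are made up for the example.

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork(), parent and child each hold a descriptor referring to the
   same open file description: the file is only fully released once both
   processes have closed their copy. */
int main(void)
{
    int fd = open("/tmp/example.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {
        close(fd);      /* child closes its copy... */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    close(fd);          /* ...but the kernel keeps the file open until the
                           parent closes its copy too */
    return 0;
}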

@vstinner (Member, Author) replied:

I am convinced by these arguments +1

I'm not 100% convinced, so it's good to have such a discussion :-)
