
store: call RecordWrite() before graphDriver Cleanup() #1724

Merged 1 commit on Oct 5, 2023

Conversation

giuseppe
Member

@giuseppe giuseppe commented Oct 4, 2023

Move the execution of RecordWrite() before the graphDriver Cleanup(). This addresses a longstanding issue that occurs when the Podman cleanup process is forcibly terminated and, on some occasions, the termination happens after the Cleanup() but before the change is recorded. As a result, the next user is not notified about the change and mounts the container without the home directory below it (the infamous /var/lib/containers/storage/overlay mount). Then, the next time the graphDriver is initialized, the home directory is mounted on top of the existing mounts, causing some containers to fail with ENOENT since all their files are hidden, while others cannot be cleaned up since their mount directory is covered by the home directory mount.

Closes: containers/podman#18831
Closes: containers/podman#17216
Closes: containers/podman#17042


Signed-off-by: Giuseppe Scrivano <[email protected]>
@giuseppe
Member Author

giuseppe commented Oct 4, 2023

@edsantiago I am quite sure this solves the issue we've seen.

It is marked as Draft as I am still testing it to gain more confidence.

@giuseppe
Member Author

giuseppe commented Oct 4, 2023

@mtrmac @rhatdan @nalind PTAL

@rhatdan
Member

rhatdan commented Oct 4, 2023

LGTM

@giuseppe
Member Author

giuseppe commented Oct 5, 2023

@vrothberg @flouthoc PTAL

Collaborator

@flouthoc flouthoc left a comment

LGTM

@flouthoc flouthoc merged commit 44418ab into containers:main Oct 5, 2023
18 checks passed
@openshift-ci
Contributor

openshift-ci bot commented Oct 5, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: flouthoc, giuseppe

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Member

@vrothberg vrothberg left a comment

Amazing job tracking this down, @giuseppe !

@rhatdan
Member

rhatdan commented Oct 5, 2023

Let's open a PR to get this into Podman.

@giuseppe
Member Author

giuseppe commented Oct 5, 2023

here it is: containers/podman#20273
