
Creating .lock files in /tmp folder during backup #528

Closed
v3n1kk opened this issue Sep 29, 2022 · 4 comments
Labels: done (Issues in the state 'done'), new (Issues requiring triage)

Comments

v3n1kk commented Sep 29, 2022

Hello, folks!
I noticed an issue on 2 of my 12 nodes while creating backups.
During the backup, a lot of files are created in the /tmp folder:

-rw-r--r-- 1 root        root         0 Sep 28 02:11 ffefd10391fbe8b8feb77656c582d3175a0135a63706bdb40fd45ca36e1bdfb7.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:10 fff0600e2ed9312f2a2c2e5105a7d59954bc7f3879659cf8cb0654e0270c77e0.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:56 fff1308a15f7fe8e3e520681f2d080233884ec3a080f8bff2653bc734d318447.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:47 fff13f18e2b3fbc01fa02af91b9da9c4238147d88a50f7a78e710d13544b3d95.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:33 fff84d7a44d81e2ac67dca5ece7076db9642c3345848253d735dab7f418bf50f.lock
-rw-r--r-- 1 root        root         0 Sep 29 02:38 fffae393486cc3d2e1e6f784948ba771067d2d062cf965a2ecedb9be8647eb20.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:05 fffb2fa3ef45676f79ab9e399641dcbc15dc2b39e48825fae172f83c086dcae3.lock
-rw-r--r-- 1 root        root         0 Sep 29 03:00 fffdab6d1b83675e03501ac06e28875f13b6cfa48eb3686237b48eeda3609b97.lock
-rw-r--r-- 1 root        root         0 Sep 28 02:34 ffff24c8e8a1bea11addcff64dfb06511f0e940f0e95bdfda15ddef9579d0b0a.lock

There can be thousands of these zero-size .lock files, and I don't see any other effects.
Only 2 nodes show this behaviour, even though all nodes run the same configs and the same versions of Medusa and Cassandra: Medusa 0.5.1, Cassandra 3.11.12.
Any ideas what could be causing this?
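As an interim workaround (a hedged sketch, not a fix from this thread; the `-maxdepth` and age threshold are assumptions to adjust for your setup), the stale zero-byte .lock files can be cleaned up periodically:

```shell
# Delete zero-byte *.lock files in /tmp older than one day.
# Run only when no backup is in progress, so an active lock file
# is never removed out from under a running process.
find /tmp -maxdepth 1 -type f -name '*.lock' -size 0 -mtime +1 -delete
```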

┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1808
┆priority: Medium

@adejanovski adejanovski added the new Issues requiring triage label Sep 29, 2022
@adejanovski adejanovski moved this to To Groom in K8ssandra Nov 8, 2022
somnoynadno commented Dec 3, 2023

Same issue encountered with medusa v0.14.0 and cassandra v4.0.5.

It also only happens with the local storage provider; I cannot reproduce it with the s3_compatible one.

This definitely seems like a bug somewhere in the LocalStorageDriver from the libcloud.storage.drivers.local package: https://github.com/apache/libcloud/blob/trunk/libcloud/storage/drivers/local.py#L66
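For illustration, here is a minimal stdlib sketch of that locking pattern (this is not libcloud's actual code, which the comment below identifies as using InterProcessLock): acquiring the lock creates the file on demand, but releasing it only drops the OS-level lock and closes the descriptor, so the empty file stays behind in /tmp:

```python
import fcntl
import os
import tempfile


class FileLock:
    """Toy file-based interprocess lock in the style that leaks lock files."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        # Creates the lock file if it does not exist yet.
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def release(self):
        # Drops the flock and closes the fd, but never unlinks the file.
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None


lock_path = os.path.join(tempfile.gettempdir(), "demo-backup.lock")
lock = FileLock(lock_path)
lock.acquire()
lock.release()
print(os.path.exists(lock_path))  # True: the zero-byte .lock file is left behind
os.remove(lock_path)              # manual cleanup is required
```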

@somnoynadno

Yeah, it seems like the InterProcessLock instance creates the lockfile at the provided path if it doesn't exist, but does not remove it from the filesystem when the lock is released.

So, the preferable solution for me is to patch the `__exit__` method in the libcloud provider and manually remove the lockfile for proper cleanup.


somnoynadno commented Dec 3, 2023

Also, this bug has already been fixed in v0.16, where the libcloud dependency was completely removed: #640

This issue may be closed.

@adejanovski (Contributor)

Indeed, thanks for the heads up @somnoynadno 👍

@github-project-automation github-project-automation bot moved this to Done in K8ssandra Dec 4, 2023
@adejanovski adejanovski added done Issues in the state 'done' and removed to-groom labels Dec 4, 2023