state.db-wal grown too large #1575
I had the same issue (16 GB file, leading to pod eviction).
If the vacuum has to be implemented, a dedicated issue should IMO go there: https://github.com/rancher/kine
I had a similar situation where the file got to an astonishing 177 GB in size. Running those commands got rid of that file. I will follow up if there are any issues from running the commands.
We should probably evaluate either enabling auto-vacuum, or manually vacuuming at intervals or at startup, since auto-vacuum cannot be enabled on existing databases.
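A minimal sketch of the constraint described above, using Python's stdlib `sqlite3` module (the table name and row sizes are invented for illustration): setting `PRAGMA auto_vacuum` on an existing, non-empty database has no effect until the file is rewritten by a full `VACUUM`.

```python
import os
import sqlite3
import tempfile

# Illustrative database; "kv" and the blob sizes are made up for this sketch.
path = os.path.join(tempfile.mkdtemp(), "state.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (k TEXT, v BLOB)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [(str(i), b"x" * 1024) for i in range(1000)])
con.commit()

# On an existing database the pragma reports 0 (NONE) by default.
print(con.execute("PRAGMA auto_vacuum").fetchone()[0])  # 0

con.execute("PRAGMA auto_vacuum = FULL")  # requested...
con.execute("VACUUM")                     # ...but only applied once VACUUM rewrites the file

print(con.execute("PRAGMA auto_vacuum").fetchone()[0])  # 1 (FULL)
con.close()
```

This is why enabling auto-vacuum retroactively implies a one-time full rewrite of the database, which can be expensive on a large state.db.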
For anyone encountering this in the future: could you comment here? We'd like to know more about the workload and collect some statistics from your database. In particular, using the Precompiled Binaries for Linux from https://www.sqlite.org/download.html, run:
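The exact statements requested above were not preserved in this capture of the thread. As a hedged stand-in, the figures most relevant to a bloated SQLite file can be gathered with a few pragmas; `db_stats` below is a hypothetical helper for illustration, not the maintainers' actual script:

```python
import sqlite3

def db_stats(path):
    """Report the size-related figures for a SQLite database file.

    Hypothetical helper: collects total pages, page size, free (unused)
    pages, and the journal mode via standard pragmas.
    """
    con = sqlite3.connect(path)
    stats = {
        name: con.execute(f"PRAGMA {name}").fetchone()[0]
        for name in ("page_count", "page_size", "freelist_count", "journal_mode")
    }
    con.close()
    stats["file_bytes"] = stats["page_count"] * stats["page_size"]
    return stats
```

A `freelist_count` close to `page_count` indicates a file that is mostly dead space and would shrink dramatically under `VACUUM`.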
Closing this issue as it does not appear to be affecting anyone any longer. If someone does run into this, please collect information on the sqlite database as described above and open a new issue.
Unfortunately I am experiencing this issue with K3OS on AWS. Due to the minimal OS, it is difficult to execute sqlite3 on the host. I have made the following workaround for anyone on K3OS or a minimal / cloud OS:
Any idea what is in state.db-wal?
It has grown to 8+ GB within a month or so... Is it something that can be cleaned up, and how can I find out why it is filling up so much?
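Background that answers the question above, with a runnable sketch using Python's stdlib `sqlite3` module (paths and row counts are invented for illustration): state.db-wal is SQLite's write-ahead log. Committed pages accumulate there and are only folded back into state.db at a checkpoint, so if checkpoints cannot keep up, or a long-lived reader pins the log, the -wal file keeps growing.

```python
import os
import sqlite3
import tempfile

# Illustrative database in WAL mode; the workload is made up for this sketch.
path = os.path.join(tempfile.mkdtemp(), "state.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE kv (k INTEGER, v BLOB)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [(i, b"x" * 4096) for i in range(500)])
con.commit()

wal = path + "-wal"
before = os.path.getsize(wal)  # committed pages sit in the -wal file

# A TRUNCATE checkpoint folds the pages back into state.db and
# truncates the log file to zero bytes.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
after = os.path.getsize(wal)

print(before, after)  # before > 0, after == 0
con.close()
```

With only one connection the checkpoint always succeeds; in a live server, a stuck reader can prevent the checkpoint from ever truncating the log, which matches the runaway growth reported in this thread.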