[common] Cleanup ReplicationDestinations #29824
Comments
That's not true AFAIK; the cache is 10 GB and wiped after the job finishes. But in general we shouldn't be setting these flags to enabled by default.
Then explain the 50 GB used on my cluster by "dest" PVCs. They just sit there, are never deleted or removed, and are never used again. The only way to retrigger a restore would be to manually modify the ReplicationSource CRDs, and even then it works, since it just recreates the PVCs and downloads the data from S3 storage.
The dest PVCs contain the full PVC data and all snapshots, and they aren't cleared after the restore; they just stay for eternity.
**Description**

Flags introduced in this release. (We should also update the CRDs on the volsync chart)

⚒️ Fixes #29824

**⚙️ Type of change**

- [x] ⚙️ Feature/App addition
- [ ] 🪛 Bugfix
- [ ] ⚠️ Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] 🔃 Refactor of current code

**🧪 How Has This Been Tested?**

**📃 Notes:**

**✔️ Checklist:**

- [x] ⚖️ My code follows the style guidelines of this project
- [x] 👀 I have performed a self-review of my own code
- [ ] #️⃣ I have commented my code, particularly in hard-to-understand areas
- [ ] 📄 I have made corresponding changes to the documentation
- [x] ⚠️ My changes generate no new warnings
- [x] 🧪 I have added tests to this description that prove my fix is effective or that my feature works
- [x] ⬆️ I increased versions for any altered app according to semantic versioning
- [x] I made sure the title starts with `feat(chart-name):`, `fix(chart-name):` or `chore(chart-name):`

**➕ App addition**

If this PR is an app addition please make sure you have done the following.

- [ ] 🖼️ I have added an icon in the Chart's root directory called `icon.png`

---

_Please don't blindly check all the boxes. Read them and only check those that apply. Those checkboxes are there for the reviewer to see what this is all about and the status of this PR at a quick glance._

Signed-off-by: Stavros Kois <[email protected]>
This issue is locked to prevent necro-posting on closed issues. Please create a new issue or contact staff on Discord if the problem persists.
Is your feature request related to a problem?
A VolSync restore is done once with a manual trigger, but the PVC and PV corresponding to the restore aren't cleaned up afterwards. They remain forever and take up roughly as much space as the running persistence itself.
Describe the solution you'd like
Set the corresponding ReplicationDestination flags to automatically delete the PVCs/PVs after a successful restore.
See: backube/volsync#1388
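For illustration, a minimal sketch of what the requested configuration could look like on a restic-based ReplicationDestination. The `cleanupCachePVC` and `cleanupTempPVC` field names follow the feature discussed in backube/volsync#1388; the exact spec (and whether a temporary destination PVC is provisioned at all) depends on the installed VolSync version and `copyMethod`, so verify against your cluster's CRDs before relying on this.

```yaml
# Hypothetical example; field names assume the cleanup flags from
# backube/volsync#1388 are available in your VolSync CRD version.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: restore-dest
spec:
  trigger:
    manual: restore-once        # one-shot restore trigger
  restic:
    repository: restic-secret   # Secret holding the restic repo config
    copyMethod: Snapshot
    capacity: 10Gi
    accessModes:
      - ReadWriteOnce
    cleanupCachePVC: true       # drop the restic cache PVC when done
    cleanupTempPVC: true        # drop the temporary destination PVC when done
```

With flags like these enabled, the "dest" and cache PVCs described in this issue would be removed after the restore completes instead of persisting indefinitely.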
Describe alternatives you've considered
Manually deleting the ReplicationDestinations with kubectl, but they will be recreated by GitOps (Flux CD).
Additional context
No response