Ever growing collection of stack and ssh directories in /tmp #104
Comments
- WIP fixes for #102
- Update CRD and avoid clobbering last successful commit
- Record actual git commit instead of using spec value
- Delete working directory after reconciliation - should help with #104
- Address PR comments
- Improve tests
- Adding more tests to cover lifecycle of stack with success and failure
Yes it was - closing.
This is still an issue for me as of v0.0.19, primarily with go-build and pulumi_auto directories. The disk fills up and then the pod is evicted; the directories never seem to get cleaned up. Should this issue be reopened? @viveklak
@jsravn Looks like this can indeed happen if the workspace fails to be initialized (e.g. you have the wrong token or config). But these are exactly the situations that are retried aggressively. I have reopened the issue and will address it.
There is still a problem with work directories under repos. The current cleanup only cleans the workdir and leaks the rest of the repo. The ideal fix is #78, but we should make every effort to clean up any additional files generated.
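To illustrate the leak described above, here is a minimal sketch in Go; the directory layout and names are hypothetical, not the operator's actual code. Removing only the nested workspace directory leaves the rest of the checkout behind, whereas removing the repo root cleans up everything under it.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical layout: the repo is cloned into a temp directory and the
	// Pulumi workspace lives in a subdirectory of it.
	repoDir, err := os.MkdirTemp("", "repo")
	if err != nil {
		panic(err)
	}
	workDir := filepath.Join(repoDir, "project", "dev")
	if err := os.MkdirAll(workDir, 0o755); err != nil {
		panic(err)
	}

	// Cleaning only the workdir leaks the rest of the checkout:
	//   os.RemoveAll(workDir) // leaves repoDir behind in /tmp
	// Removing the repo root cleans up everything under it instead.
	if err := os.RemoveAll(repoDir); err != nil {
		fmt.Fprintln(os.Stderr, "cleanup failed:", err)
	}
}
```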
We end up having to use volumes anyway (I think due to our use of a monorepo). The volume fills up over time as pods are killed mid-reconcile. We could work around this by increasing the graceful shutdown period, but it would be better if the operator looked for old auto_workdirs on startup and cleaned them up.
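A minimal sketch of the startup housekeeping suggested above, assuming leftover directories carry recognizable prefixes (the `pulumi_auto` and `auto_workdir` prefixes are taken from this thread, and the one-hour cutoff is an arbitrary choice, not the operator's behavior):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cleanStaleWorkdirs removes leftover working directories from earlier runs.
// Anything modified within maxAge is skipped, to avoid racing a reconcile
// that is still in flight.
func cleanStaleWorkdirs(maxAge time.Duration) {
	tmp := os.TempDir()
	entries, err := os.ReadDir(tmp)
	if err != nil {
		log.Printf("unable to read %s: %v", tmp, err)
		return
	}
	cutoff := time.Now().Add(-maxAge)
	for _, e := range entries {
		name := e.Name()
		// Prefixes taken from the directories named in this thread; adjust
		// to whatever the operator actually creates.
		if !e.IsDir() ||
			(!strings.HasPrefix(name, "pulumi_auto") && !strings.HasPrefix(name, "auto_workdir")) {
			continue
		}
		info, err := e.Info()
		if err != nil || info.ModTime().After(cutoff) {
			continue
		}
		if err := os.RemoveAll(filepath.Join(tmp, name)); err != nil {
			log.Printf("failed to remove %s: %v", name, err)
		}
	}
}

func main() {
	// Sweep anything untouched for over an hour before starting the manager.
	cleanStaleWorkdirs(time.Hour)
}
```

Sweeping by age rather than deleting everything blindly keeps the startup sweep safe even when several replicas share the same volume.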
I have several options for performing this housekeeping:
Good news, everyone: we just released a preview of Pulumi Kubernetes Operator v2. This release has a whole new architecture that uses pods as the execution environment, and I believe the garbage problem is now under control. Please read the announcement blog post for more information. We would love to hear your feedback! Feel free to engage with us on the #kubernetes channel of the Pulumi Slack workspace.
Problem description
Each run of the reconciliation loop for a stack creates two directories in `/tmp`. When in exponential back-off, each error produces the directories again, quickly filling the disk if the program is large. The `/tmp` directory does not appear to be cleaned up afterward.