Using CronJobs to automatically clean up completed Runs #479
Comments
We have the same requirement to delete completed resources automatically. (Status may need to be considered: for example, a Failed PipelineRun should probably be kept longer than a Succeeded one before it is deleted.) Today we delete these resources from our own apiServer that sits above Tekton: it monitors PipelineRun-related events and, once a run finishes, deletes succeeded PipelineRuns after a short time and failed PipelineRuns after a longer time.
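A rough sketch of that kind of status-aware sweep is below; the jq dependency, the 1-hour/24-hour thresholds, and the namespace-scoped behaviour are assumptions for illustration, not the actual tooling described above:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: prune completed PipelineRuns in the current namespace with
# status-aware retention -- Succeeded runs older than 1 hour, anything else
# (Failed, timed out, cancelled) older than 24 hours.
# Assumes kubectl and jq are available; the thresholds are examples only.
set -euo pipefail

NOW=$(date +%s)

kubectl get pipelinerun -o json \
  | jq -r --argjson now "${NOW}" '
      .items[]
      | select(.status.completionTime != null)
      | {
          name: .metadata.name,
          # Tekton reports completion through the "Succeeded" condition.
          reason: ((.status.conditions // [])[] | select(.type == "Succeeded") | .reason),
          age: ($now - (.status.completionTime | fromdateiso8601))
        }
      | select((.reason == "Succeeded" and .age > 3600)
            or (.reason != "Succeeded" and .age > 86400))
      | .name' \
  | while read -r name; do
      kubectl delete pipelinerun "${name}"
    done
```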
I'll have a look into this. /assign
@jlpettersson any update on this? Right now I have a pretty janky kubectl command to delete old pipeline runs lol
It should not be much more work. Give me a few days.
🤔 Seems like following the idioms established for K8s Jobs would be somewhat prudent, if we were to establish a "ttl" duck type (e.g. for how this is embedded into specs) combined with the use of a … cc @n3wscott (this would benefit from the ideas in our last Kubecon talk)
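As a purely hypothetical sketch of that duck-type idea (the `ttlSecondsAfterFinished` field below does not exist in Tekton's PipelineRun API; it only mirrors the Kubernetes Job field to show where such a field might live):

```yaml
# Purely hypothetical: ttlSecondsAfterFinished is not part of Tekton's API; this
# only sketches how a Job-style TTL duck type could be embedded in a Run spec.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run
spec:
  pipelineRef:
    name: example-pipeline
  ttlSecondsAfterFinished: 3600   # hypothetical duck-typed field
```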
@afrittoli has implemented this kind of pruning behaviour in our dogfooding cluster now 🎉 🎉 . That work could help inform this issue. Here's the PR where his changes were added: tektoncd/plumbing#442
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
/reopen
@sbwsg: Reopened this issue.
I'm keeping this issue open as it's a feature area that is still seeing semi-regular community requests. /lifecycle frozen
@sbwsg you're thinking that this would just be a sample cronjob (and related resources) that users could apply to their own clusters, correct (similar to what @afrittoli did on the dogfooding cluster)? Perhaps under the … Assuming that's the case...
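The "related resources" would presumably include RBAC so the CronJob's ServiceAccount can list and delete runs. A minimal sketch, with illustrative names ("tekton-cleaner" is not from this thread):

```yaml
# Illustrative names only: a ServiceAccount for the cleanup CronJob, plus a
# namespaced Role/RoleBinding allowing it to read and delete runs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-cleaner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-cleaner
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["pipelineruns", "taskruns"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-cleaner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tekton-cleaner
subjects:
  - kind: ServiceAccount
    name: tekton-cleaner
```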
Great! Documenting this as part of Pipelines would also be really useful.
This CronJob was shared by the Tekton Twitter account: https://gist.github.com/ctron/4764c0c4c4ea0b22353f2a23941928ad
An evolution of that ...

```yaml
- name: kubectl
  image: docker.io/alpine/k8s:1.20.7
  env:
    - name: NUM_TO_KEEP
      value: "3"
  # For each Pipeline (grouped by the tekton.dev/pipeline label), keep the
  # newest NUM_TO_KEEP PipelineRuns and delete the rest, oldest first.
  command:
    - /bin/bash
    - -c
    - >
      while read -r PIPELINE; do
        while read -r PIPELINE_TO_REMOVE; do
          test -n "${PIPELINE_TO_REMOVE}" || continue;
          kubectl delete ${PIPELINE_TO_REMOVE} \
            && echo "$(date -Is) PipelineRun ${PIPELINE_TO_REMOVE} deleted." \
            || echo "$(date -Is) Unable to delete PipelineRun ${PIPELINE_TO_REMOVE}.";
        done < <(kubectl get pipelinerun -l tekton.dev/pipeline=${PIPELINE} --sort-by=.metadata.creationTimestamp -o name | head -n -${NUM_TO_KEEP});
      done < <(kubectl get pipelinerun -o go-template='{{range .items}}{{index .metadata.labels "tekton.dev/pipeline"}}{{"\n"}}{{end}}' | uniq);
```

Full example with …
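For completeness, a sketch of the CronJob that a container spec like the one above could sit in. The name, schedule, and the "tekton-cleaner" ServiceAccount are assumptions, not part of the snippet:

```yaml
# Sketch only: the name, schedule, and ServiceAccount are illustrative. On
# clusters older than Kubernetes 1.21, use apiVersion: batch/v1beta1 for CronJob.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-pipelineruns
spec:
  schedule: "0 * * * *"        # run hourly
  concurrencyPolicy: Forbid    # don't start a new sweep while one is running
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: tekton-cleaner   # needs list/delete on pipelineruns
          restartPolicy: Never
          containers:
            - name: kubectl
              image: docker.io/alpine/k8s:1.20.7
              # env and command as in the container snippet above
```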
Expected Behavior
Create a new tool that uses CronJob objects to schedule the cleanup of completed TaskRuns and PipelineRuns.
This would be useful in our own dogfooding and would also help the community manage their own completed runs.
In a prior PR we explored the idea of using a TTL on runs and leveraging the Kubernetes TTL Controller to help clean them up. During that review process a user suggested CronJobs as an alternative to baking this TTL support directly into the Tekton Pipelines controller.
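For reference, the Kubernetes mechanism mentioned there deletes finished Jobs via `spec.ttlSecondsAfterFinished`; Tekton runs have no equivalent field today, which is what that prior PR explored:

```yaml
# Kubernetes' TTL-after-finished controller: a completed Job with this field set
# is garbage collected automatically once the TTL expires.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  ttlSecondsAfterFinished: 300   # delete this Job 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: ["echo", "done"]
```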
Actual Behavior
We don't currently have any way to automatically clean up completed TaskRuns and PipelineRuns, but we definitely hear feedback that some kind of tooling or guidance would be very useful.