Pruning resources #88
Hi, I like this idea! We might not even need a CRD for this purpose, because Kubernetes labels should provide enough functionality to do what we need:
It might make sense to leave this optional using a flag such as
This is such a delightful simplification. I'll see if I can get it done in the next week or so.
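To make the labels idea above concrete, here is a minimal sketch, assuming client-go/apimachinery is available; the label key `tanka.dev/environment` and the helper name are illustrative assumptions, not anything specified in this thread:

```go
// Minimal sketch of the labels idea: stamp every rendered object with an
// identifying label before applying it, so it can be rediscovered later.
// The label key "tanka.dev/environment" is an illustrative assumption.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// tagWithEnvironment adds the environment label to a rendered manifest.
func tagWithEnvironment(obj *unstructured.Unstructured, env string) {
	labels := obj.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels["tanka.dev/environment"] = env
	obj.SetLabels(labels)
}

func main() {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps/v1",
		"kind":       "Deployment",
		"metadata":   map[string]interface{}{"name": "grafana"},
	}}
	tagWithEnvironment(obj, "default")
	fmt.Println(obj.GetLabels()) // map[tanka.dev/environment:default]
}
```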
Can you draft a design doc first? This is one of the bigger features, and I would like some additional discussion around it so we can make sure we get it right on the first take before we write anything to the users' clusters.
I considered a design doc, but needed to get my hands dirty to prove it could work. Totally happy to rewrite the PR I just submitted based upon better ideas.
+1 for the labels. Docker Compose does it the same way:
"Labels": {
"com.docker.compose.config-hash": "32e9d85cbe1711281c7a3b57905307c3a204af30a7b0e4d92f614ec9b307d3e6",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "dev",
"com.docker.compose.service": "dev",
"com.docker.compose.version": "1.16.1"
}
Using labels on resources is difficult, as it requires checking for resources across every apiVersion/kind that the kube API supports; with CRDs this becomes harder still. I think a simple "state" file approach would work best here, e.g. a ConfigMap that lists all of the resources created, with some simple metadata about each. The ConfigMap should be named after the environment name defined in the spec.json metadata.
When new resources are deployed, they are added to the state file. Entries are only removed from the state file after the corresponding resources have been removed from Kubernetes.

As an alternative to storing the state in ConfigMaps, we could take the Terraform approach and have pluggable support for state storage, e.g. S3/GCS. Though I would prefer to just start with the ConfigMap approach; we can always add support for other storage options in the future.
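A rough sketch of what such a state ConfigMap could look like in code, assuming client-go; the ConfigMap name, namespace argument, and key format are illustrative assumptions only:

```go
// Rough sketch of the "state ConfigMap" idea, assuming client-go. The name
// "tanka-state-<env>" and the key format are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recordState writes the set of created resources into a ConfigMap named
// after the environment, so a later run can compare it with the Jsonnet
// output and find resources to delete.
func recordState(ctx context.Context, client kubernetes.Interface, namespace, env string, resources map[string]string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("tanka-state-%s", env)},
		// Keys identify resources, e.g. "apps.v1.Deployment.default.grafana";
		// values could hold whatever metadata is useful (hash, timestamp, ...).
		Data: resources,
	}
	// A real implementation would create-or-update; Create alone is enough
	// to show the shape of the idea.
	_, err := client.CoreV1().ConfigMaps(namespace).Create(ctx, cm, metav1.CreateOptions{})
	return err
}
```

Note that ConfigMap keys only allow alphanumerics, `-`, `_` and `.`, so a slash-separated identifier would need to be encoded.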
I worry the ConfigMap option has a lot of edge cases that need to be considered when actually implementing it. If it gets out of sync through asynchronous runs, inconsistent tk versions, or manual changes, it becomes difficult to feel confident it is still accurate. The labels are much simpler, and I like to hope the discovery problem is solvable.

I have a PoC for scanning a cluster for all resources of all types, including CRDs. Repository here. It uses the discovery and dynamic APIs to enumerate all types in your cluster and scan them all with a label filter. It usually takes under 10 seconds to scan everything and list the objects. So my proposed implementation would be something like:
It does require bringing in the client-go packages, which I hear we have been hesitant to do so far. I would also like to discuss that, but probably in another issue.
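For reference, a sketch of the kind of discovery + dynamic scan described above, assuming client-go; this is not the linked PoC itself, and the label selector value is an illustrative assumption:

```go
// Sketch of a discovery + dynamic scan over every resource type the cluster
// serves, CRDs included, filtered by a label selector.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	disco := discovery.NewDiscoveryClientForConfigOrDie(config)
	dyn := dynamic.NewForConfigOrDie(config)

	// Enumerate every API group/version and its resources.
	_, lists, err := disco.ServerGroupsAndResources()
	if err != nil {
		panic(err)
	}
	for _, list := range lists {
		gv, err := schema.ParseGroupVersion(list.GroupVersion)
		if err != nil {
			continue
		}
		for _, res := range list.APIResources {
			// Skip subresources such as "pods/status".
			if strings.Contains(res.Name, "/") {
				continue
			}
			gvr := gv.WithResource(res.Name)
			// List objects of this kind carrying the (assumed) environment label.
			objs, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{
				LabelSelector: "tanka.dev/environment=default",
			})
			if err != nil {
				continue // not everything supports list; skip it
			}
			for _, o := range objs.Items {
				fmt.Println(gvr.String(), o.GetNamespace(), o.GetName())
			}
		}
	}
}
```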
@captncraig some thoughts on this:
Yes!
I don't think we need that. We already query
While my #131 did that, it became obvious that it has drawbacks, such as accidentally deleting resources when applying because you forgot to rebase against master first. Instead, I'd prefer to have this as a separate action.
I like an explicit command.
At present, Tanka will not remove Kubernetes resources that have been removed from the respective Jsonnet configuration, and such removals need to be done manually.
Terraform (for example) handles this by maintaining a record of 'intended state'. If an object is present in the state file, but not in the Jsonnet configuration, then we can deduce that it needs to be removed.
Given that Kubernetes already maintains its own state, and that that state is consumed for `ks diff` etc., we could likely achieve deletions if we simply stored a list of the resources created by this Tanka environment within, for example, a single CRD. This CRD could be named within the `spec.json` file.

Thus, when either `tk diff` or `tk apply` is executed, Tanka can look for any resources that are present in this CRD but are not present in the Jsonnet output. Such resources are candidates for deletion: `tk diff` could include them in its output, and `tk apply` could use `kubectl delete` to remove them.

We should consider whether deletions should be a standard feature of Tanka, or whether they would need enabling via a command line switch.
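To spell out the comparison step, here is a tiny sketch of the deletion-candidate logic, assuming resources are identified by plain strings whose format is purely illustrative:

```go
// Minimal sketch: anything recorded as previously created but absent from
// the current Jsonnet output is a candidate for deletion.
package main

import "fmt"

// deletionCandidates returns the recorded identifiers that no longer appear
// in the rendered output.
func deletionCandidates(recorded, rendered []string) []string {
	current := make(map[string]bool, len(rendered))
	for _, id := range rendered {
		current[id] = true
	}
	var candidates []string
	for _, id := range recorded {
		if !current[id] {
			candidates = append(candidates, id)
		}
	}
	return candidates
}

func main() {
	recorded := []string{"apps.v1.Deployment.default.grafana", "v1.Service.default.grafana"}
	rendered := []string{"apps.v1.Deployment.default.grafana"}
	fmt.Println(deletionCandidates(recorded, rendered)) // [v1.Service.default.grafana]
}
```

The same set-difference logic applies whether the "recorded" side comes from a CRD, a ConfigMap, or a cluster scan by label.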