Allow restoration of the `status` metadata field #1687
Comments
Thanks for filing the issue upstream with Rancher. I do think that it's odd for Rancher to store state in status. That said, I don't think it would hurt to add an annotation or flag (my preference would be to annotate items with something like …
I believe there's another issue here, which is that …
We probably need to discuss as a team and see if this is something we want to invest further time in investigating/prototyping.
Hello,
Please let us know if and when you reach a decision.
SAP Webide dev team
Thanks for adding your use case here @i300543. Originally this code was designed under the assumption that …
For various reasons, our application was developed this way, and changing it is not easy at this point. Why does Velero add a restriction on Kubernetes objects that is explicitly not enforced by Kubernetes itself? In Kubernetes, a CRD that doesn't explicitly define a /status subresource is able to use the status section the way our application does. Velero's assumption breaks our flow, while removing this assumption would not hurt any use case as far as we can see, no?
@rayanebel @i300543 could you provide a couple of specific examples of the types of information that are stored in status? Also, in terms of UX, would you want to enable/disable restoring …
hello @skriss As Rancher works with custom resource definitions (xxxxxx.cattle.io) for all its objects, I think it would be great to restore status at the CRD level (for all items in a CRD, with a wildcard for example). With Rancher, for example, when we create a cluster we have an object of kind …
And, for example, Rancher needs to retrieve data like …
Somewhat related to this, there are discussions just starting in SIG Apps to standardise the …
Related: #1272
I have a similar issue where a Custom Resource, when restored, doesn't have its status fields populated, and thus the operator relating to this CR isn't able to reconcile. (I understand that the operator should not be using the status fields for this purpose, but I am in a situation where I have to live with what I have.) Thus I would like the status fields not to be dropped.

I understand this is possible if the Custom Resource Definition does not use the /status subresource, in which case a simple RestoreItemAction plugin can ensure that the status fields are restored. However, because in my case the Custom Resource Definition does use the /status subresource, I believe we can only update the Custom Resource status through the /status endpoint. This becomes impossible from a RestoreItemAction plugin, because the creation of the resource only happens during a velero restore after the plugin has run, and updating the status via the /status endpoint can only happen after resource creation.

I had been wondering whether I could use two RestoreItemAction plugins: the first to recreate the Custom Resource, and the second to update the status via the /status endpoint. However, I noticed that the creation of the normal restored resource occurs after the plugins have been called; this will fail if a resource of the same name already exists, and it results in the metadata being reset and the status cleared.

So I would still like some mechanism for maintaining the status of a Custom Resource whose CRD uses a status subresource, and if this could be done using a RestoreItemAction plugin then that would be great.
We intend to make the experience better by at least documenting this issue. See: #3654 (comment). Maybe @dsu-igeek has some further insights.
I think that @wcochran53's proposed solution with a RestoreItemAction plugin would be how I'd approach this. If someone feels strongly about implementing one that would inspect CRs for a status subresource, then we could definitely review the proposal.
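To make the plugin route above concrete, here is a minimal sketch of a RestoreItemAction, assuming Velero's v1 plugin interface (`AppliesTo`/`Execute`) and the helpers in `github.com/vmware-tanzu/velero/pkg/plugin/velero`. It copies `.status` from the backed-up item onto the item being restored; as discussed above, this can only take effect for CRDs that do not enable the /status subresource, and whether the copied status survives depends on where the restore resets status relative to plugin execution in the Velero version you run. Treat it as an outline, not a tested plugin.

```go
package example

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// StatusRestorePlugin is a sketch of a RestoreItemAction that re-attaches
// the backed-up status to the item Velero is about to create.
type StatusRestorePlugin struct{}

// AppliesTo selects which items the plugin runs on; an empty selector
// means "all items". A real plugin would likely narrow this down.
func (p *StatusRestorePlugin) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{}, nil
}

// Execute copies .status from the backup copy into the item to be restored.
func (p *StatusRestorePlugin) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	item := input.Item.UnstructuredContent()
	backup := input.ItemFromBackup.UnstructuredContent()

	if status, found, err := unstructured.NestedFieldNoCopy(backup, "status"); err == nil && found {
		if err := unstructured.SetNestedField(item, status, "status"); err != nil {
			return nil, err
		}
	}
	return velero.NewRestoreItemActionExecuteOutput(&unstructured.Unstructured{Object: item}), nil
}
```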
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hey, I'm suffering from the same issue. I'd love to help with some initiative to support this.
Well, I was taking a look at the code and reading previous answers, and it seems that something like this could work:

```yaml
kind: Restore
metadata:
  ...
spec:
  restoreStatus:
    includedResources:
    - webhooks
    - someothercrds
    - deployments # no reason to restrict this to CRDs I guess
    excludedResources: []
```

The default (…
This seems a valid assumption to me: …
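For illustration, the include/exclude semantics proposed above could look roughly like this in Go; the function name and the default-off behavior are my reading of the proposal, not anything Velero implements today.

```go
package example

// shouldRestoreStatus is a hypothetical helper mirroring the proposed
// spec.restoreStatus semantics: excludes win over includes, "*" acts as
// a wildcard, and the default (nothing listed) keeps today's behavior
// of dropping status for every resource.
func shouldRestoreStatus(resource string, included, excluded []string) bool {
	for _, ex := range excluded {
		if ex == resource || ex == "*" {
			return false
		}
	}
	for _, in := range included {
		if in == resource || in == "*" {
			return true
		}
	}
	return false
}
```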
I think the simplest example I can give is a CRD we have called Webhook, which represents a GitHub Webhook. On status, we have the GitHub Webhook ID, so we can know if it's configured correctly (with the correct events, secret and URL).

```yaml
apiVersion: workflows.dev/v1alpha1
kind: Webhook
metadata:
  name: example-webhook
spec:
  baseURL: https://listener.example.url/hook
  events:
  - push
  - pull_request
  repository:
    name: velero
    owner: vmware-tanzu
status:
  externalID: 123456789
```

If we don't apply the …
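To spell out why the dropped status breaks reconciliation here, a toy sketch (illustrative names only, not the actual controller code): once `externalID` is lost, the controller has no record that the webhook already exists upstream and will register a duplicate.

```go
package main

import "fmt"

// WebhookStatus mimics the status block of the Webhook CR above.
type WebhookStatus struct {
	ExternalID int64
}

// reconcile shows the decision a controller like this typically makes:
// no recorded external ID means "create", a present one means "verify".
func reconcile(status WebhookStatus) {
	if status.ExternalID == 0 {
		fmt.Println("no externalID in status; registering a new GitHub webhook (a duplicate after restore)")
		return
	}
	fmt.Printf("verifying existing GitHub webhook %d against spec\n", status.ExternalID)
}

func main() {
	reconcile(WebhookStatus{})                      // status stripped by restore
	reconcile(WebhookStatus{ExternalID: 123456789}) // status preserved
}
```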
@RafaeLeal …
I think that can solve my issue, but this is a hack and not a general solution... It only works for me because I'm also the developer of the controller, but that's not the case for a lot of other CRDs... I think it makes sense to develop a more general solution, don't you think?
I am facing the same issue with my CI/CD system. We are using Argo Workflows, which stores information about when the workflow was run, where the logs are, etc. in the status. After a restore, that information is missing. However, if I manually apply the Kubernetes manifests (that velero had backed up), then the status field is restored.
Hey, I just proposed a solution in this PR. I built, deployed, and tested it on a cluster: it works. We might need to tweak some tests, though. Let me know what you think, and whether there is anything we should discuss more deeply.
Hi …
It's possible, but if my memory serves me, the client-go …
@blackpiglet I believe one could follow up with a patch call to add status.
@kaovilai that's exactly how we do it (see #4785). The question was asked above about the possibility of some other controller processing the restored item between the create and patch calls. And yes, this is a possibility, but we haven't been successful so far in coming up with a way to avoid it, since you can't create the resource with status included.
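For reference, the create-then-patch sequence described here looks roughly like the following with the dynamic client; the GVR and object are placeholders, and the gap between the two calls is exactly the window in which another controller could observe the object without its status.

```go
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// restoreWithStatus creates the object (the API server drops .status for
// CRDs with the /status subresource enabled) and then patches the
// backed-up status back in through the status endpoint.
func restoreWithStatus(ctx context.Context, client dynamic.Interface, obj *unstructured.Unstructured) error {
	// Placeholder GVR; a real restore would derive this from the object.
	gvr := schema.GroupVersionResource{Group: "workflows.dev", Version: "v1alpha1", Resource: "webhooks"}

	created, err := client.Resource(gvr).Namespace(obj.GetNamespace()).Create(ctx, obj, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	status, found, err := unstructured.NestedFieldNoCopy(obj.Object, "status")
	if err != nil || !found {
		return err // nothing to patch back
	}
	patch, err := json.Marshal(map[string]interface{}{"status": status})
	if err != nil {
		return err
	}
	// Note the "status" subresource argument: a patch against the main
	// resource ignores status changes when the subresource is enabled.
	_, err = client.Resource(gvr).Namespace(created.GetNamespace()).Patch(
		ctx, created.GetName(), types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}
```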
Describe the problem/challenge you have
To manage our clusters, we are using Rancher. Rancher runs on top of Kubernetes and saves all its data in the same etcd server (it's like a big operator). We tried to use `velero` to be able to restore our Rancher server in case of disaster recovery. Backup runs perfectly, but we are facing an issue during the restore phase, because Rancher saves some state information inside the `status` part and velero cleans all status and metadata before restoring the objects. So currently, we use velero for the backup and restore the files with multiple `kubectl` command lines.

Describe the solution you'd like

I don't have the perfect solution, but I saw in the velero code the function `resetMetadataAndStatus`: https://github.com/heptio/velero/blob/e371ba78b0844b28359f902076d470ae5a5de0b9/pkg/restore/restore.go#L1117

Maybe as a first step we can add a simple flag `--include-status` or something else to give the user the choice of whether to execute this function. Maybe you have another solution or another idea?
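For context, here is a rough sketch of what a reset helper like `resetMetadataAndStatus` does to each object before restore, with the hypothetical `--include-status` flag wired in as a boolean; this is my paraphrase of the linked code, not a copy of it.

```go
package example

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// resetMetadataAndStatus sketches the reset step: keep only identifying
// metadata (cluster-assigned fields such as uid and resourceVersion must
// not be restored verbatim) and, unless the proposed flag is set, drop
// the status block entirely.
func resetMetadataAndStatus(obj *unstructured.Unstructured, includeStatus bool) {
	name, namespace := obj.GetName(), obj.GetNamespace()
	labels, annotations := obj.GetLabels(), obj.GetAnnotations()

	unstructured.RemoveNestedField(obj.Object, "metadata")
	obj.SetName(name)
	obj.SetNamespace(namespace)
	obj.SetLabels(labels)
	obj.SetAnnotations(annotations)

	if !includeStatus {
		unstructured.RemoveNestedField(obj.Object, "status")
	}
}
```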
Anything else you would like to add:
We also opened a ticket on the Rancher side: rancher/rancher#21647
Environment: