Cross-cluster restore failed because backup only backs up v1beta1 CRD #6796
I haven't checked this in detail, but see if this helps:
Thanks, APIGroupVersionsFeatureFlag is already added here.
Some other approaches:
Just saw the same issue: #5146
qiuming-best added the "Needs triage" label (We need discussion to understand the problem and decide the priority) on Sep 12, 2023.
reasonerjt removed the "Needs triage" label on Sep 13, 2023.
Closing as this has been fixed.
What steps did you take and what happened:
In the setup below:
- Source cluster: k8s 1.18, Istio 1.5.9
- Target cluster: k8s 1.23, Istio 1.14
Using an application which has a virtualservices.networking.istio.io CR (e.g., the Istio sample app bookinfo).
What did you expect to happen:
The restore should not fail.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use
velero debug --backup <backupname> --restore <restorename>
to generate the support bundle and attach it to this issue. For more options, refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname>
or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename>
or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
backup spec:
restore spec:
backup logs:
restore logs:
backup tar:
bookinfo-0-resource-r8ndq-4mwgp.tar.gz
Anything else you would like to add:
By looking at both the backup and restore logs, and the resources backed up, I think the problem is in the backup phase: while the virtualservices CRD is being backed up, v1beta1 is chosen instead of the v1 version. So in the restore phase, the only version available to restore is v1beta1, which does not exist in the target cluster.
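The failure mode described above can be sketched as a small illustrative model (not Velero code; the function name and version sets are hypothetical): if the backup stored only v1beta1 and the target cluster no longer serves that version, there is nothing restorable.

```python
# Illustrative sketch only, not Velero's implementation: model why the
# restore fails when the backup stored a single API version of the CRD.

def restorable_versions(backed_up, target_served):
    """Versions present in the backup that the target cluster still serves."""
    served = set(target_served)
    return [v for v in backed_up if v in served]

# Hypothetical version sets matching this issue: the backup stored only
# v1beta1 of virtualservices.networking.istio.io, while the target cluster
# serves only v1 -- the intersection is empty, so the restore fails.
print(restorable_versions(["v1beta1"], ["v1"]))            # []
print(restorable_versions(["v1beta1", "v1"], ["v1"]))      # ['v1']
```

Had the backup also stored v1 (or stored the target's preferred version), the second case shows the restore would have a usable version.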
Looking at the remap logic:
The logic only checks whether discovery APIGroups() has v1beta1. Do we also need to check the cluster's preferred version? Or is there any other clue we can use here?
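One such clue is Kubernetes' own version-priority ordering (GA before beta before alpha, higher numbers first), which the selection step could apply instead of assuming v1beta1. Below is a hedged sketch of that ordering; it follows the documented Kubernetes version-priority rules but is illustrative, not the actual Velero or apiserver implementation.

```python
import re

# Kubernetes-style API version priority: GA (vN) > beta (vNbetaM) > alpha
# (vNalphaM); within the same stability tier, higher version numbers win.
_VERSION_RE = re.compile(r"^v(\d+)(alpha|beta)?(\d+)?$")

def priority_key(version):
    """Higher tuple = higher priority; non-conforming versions sort last."""
    m = _VERSION_RE.match(version)
    if not m:
        return (-1, 0, 0)
    major = int(m.group(1))
    stability = {None: 2, "beta": 1, "alpha": 0}[m.group(2)]
    minor = int(m.group(3) or 0)
    return (stability, major, minor)

def preferred(versions):
    """Pick the highest-priority version among those a cluster serves."""
    return max(versions, key=priority_key)

print(preferred(["v1beta1", "v1"]))        # v1
print(preferred(["v1alpha3", "v1beta1"]))  # v1beta1
```

With an ordering like this, a backup that sees both v1beta1 and v1 served would prefer v1, which is exactly the version the target cluster in this issue still serves.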
In summary, there are two approaches to fix this, which might need more thought:
Environment:
- Velero version (use velero version): 1.7 base with some backports
- Velero features (use velero client config get features): APIGroupVersionsFeatureFlag, CSI
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.