Unable to perform restore via CSI driver: Failed to get volumesnapshots.snapshot.storage.k8s.io not found #7444
FYI: I tried to use
@jkroepke First, from the backup description information, I think Velero's CRDs are not up to date.
Please follow the Velero upgrade procedure document: https://velero.io/docs/v1.13/upgrade-to-1.13/. Second, please do not limit the resources in the restore process. Just PVC and PV are not enough for Velero to work properly, because, in your scenario, the VolumeSnapshot and VolumeSnapshotContent are generated as intermediate resources. They are also needed to make the CSI volume restore work.
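For reference, the CRD step from the upgrade doc can be applied with the new client binary; a minimal sketch, assuming the v1.13 velero CLI is already installed locally:

```sh
# Regenerate the CRD manifests from the v1.13 client and apply them,
# so the cluster's Velero CRDs match the upgraded server version.
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```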
Thanks, I fixed that. Looking at crashd, I guess crashd will collect logs from all pods in the namespace. Due to some restrictions, I have to bundle Velero with other pods in the same namespace.
Is there a recommended list if I want to restore only a PVC, e.g.:
I also found other issues like:
I would expect that Velero automatically adds the VolumeSnapshot and VolumeSnapshotContent, or not? Or do I have to set
Got it. Thanks for the information.
No need to add the annotation manually.
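As an illustration of the advice above, a hedged sketch of a restore that filters resources but still carries the CSI intermediates; the backup and restore names are placeholders:

```sh
# Restore only volume-related resources; the VolumeSnapshot and
# VolumeSnapshotContent objects must be included for a CSI restore to work.
velero restore create my-pvc-restore \
  --from-backup my-backup \
  --include-resources persistentvolumeclaims,persistentvolumes,volumesnapshots.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io
```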
@jkroepke
It's fine for me. I understand it now. Thanks.
What steps did you take and what happened:
I have 2 Storage Classes in my cluster. Both Storage Classes have separate providers, and both VolumeSnapshotClasses are annotated:
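(Presumably the standard Velero CSI selector annotation; a sketch with placeholder class names:

```sh
# Mark each VolumeSnapshotClass so Velero's CSI support selects it for
# the corresponding driver; the class names here are placeholders.
kubectl annotate volumesnapshotclass csi-snapclass-a velero.io/csi-volumesnapshot-class="true"
kubectl annotate volumesnapshotclass csi-snapclass-b velero.io/csi-volumesnapshot-class="true"
```
)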
If I execute
I'm getting this error:
What did you expect to happen: PVC restored successfully
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use
velero debug --backup <backupname> --restore <restorename>
to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
- kubectl logs deployment/velero -n velero: https://gist.githubusercontent.com/jkroepke/a6c0877f1aa339e18d8f03774b8b0c35/raw/836aa29ceb8aabd6c125e302844d5b970f4fb662/gistfile1.txt
- velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml: (output in collapsed Details block)
- velero backup logs <backupname>: https://gist.githubusercontent.com/jkroepke/363895325e0e49cfded6059bf29e60b1/raw/d625a6ed2b144b315a3ee083631f63b2410c537f/gistfile1.txt
- velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml: (output in collapsed Details block)
- velero restore logs <restorename>: (output in collapsed Details block)
Anything else you would like to add:
Environment:
- Velero version (use velero version): 1.13.0
- Velero features (use velero client config get features): NOT SET
- Kubernetes version (use kubectl version): 1.27.7
- OS (e.g. from /etc/os-release):

Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.