Unable to restore persistent volumes and persistent volume claims using Velero on AKS #5741
As per the error message in the restore log file, when executing the restore item action, Velero fails to get the VolumeSnapshot needed to restore the corresponding PVC.
Hi @aloknagaonkar, I think the failure is due to the restore setting.
When using the CSI plugin to do the B/R, some CSI-specific k8s resources are also created, e.g. VolumeSnapshot and VolumeSnapshotContent, and Velero needs the VolumeSnapshot and VolumeSnapshotContent as the data source for creating the PVC and PV. Since the restore command only lists PV and PVC as the resources to restore, the VolumeSnapshot and VolumeSnapshotContent were not restored, and that triggered the error. I suggest removing the --include-resources pvc,pv filter from the restore command.
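As an illustration (the restore names below are placeholders, not from this thread), the restore could either drop the resource filter entirely or explicitly include the CSI snapshot resources:

# Restore everything from the backup, letting the CSI plugin pull in what it needs
velero restore create mysql-restore-full --from-backup mysql1

# Or keep a filter, but also include the CSI snapshot resources
velero restore create mysql-restore-filtered --from-backup mysql1 \
  --include-resources persistentvolumeclaims,persistentvolumes,volumesnapshots.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io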
@blackpiglet We recently added a feature to plugins that allows them to force inclusion of other resource types via an annotation. Maybe we should update the CSI plugin to use this new feature.
@sseago
Hi team, can you help us with a standard/working configuration to take a volume snapshot of MongoDB and restore it in the same namespace or another namespace?
Taking snapshots of volumes is the default configuration of Velero.
If MongoDB is installed in its own namespace, back up that namespace. If you prefer to restore into the same namespace, the suggestion is to delete the existing namespace first so the restore does not skip resources that already exist.
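A minimal sketch of that flow, assuming MongoDB lives in a namespace named mongodb (a placeholder, not confirmed in the thread):

# Back up the MongoDB namespace; volume snapshots are taken by default
velero backup create mongodb-backup --include-namespaces mongodb

# Restore into the same namespace (after the existing resources have been removed)
velero restore create mongodb-restore --from-backup mongodb-backup

# Or restore into a different namespace
velero restore create mongodb-restore-copy --from-backup mongodb-backup \
  --namespace-mappings mongodb:mongodb-restored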
What are the Azure disk types supported? We have MongoDB provisioned with Provisioner: kubernetes.io/azure-disk (Parameters: cachingmode=None,kind=Managed,location=eastus,storageaccounttype=StandardSSD_LRS,zoned=true).
I don't think you need the CSI plugin to do that. The Azure plugin can take snapshots of the disk.
What is the best practice: should we use the Azure plugin or the CSI plugin? What is the need for Restic during volume snapshots, and do we need it? I want to understand the options and what the best practice would be for using Velero from a reliability and support point of view.
IMO, for your case, the Azure plugin is good enough. You can also try the CSI plugin. If you don't have a special need, Restic is not recommended.
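For completeness, if file-system backup with Restic were wanted for particular volumes, the usual opt-in is an annotation on the pod; the namespace, pod, and volume names below are placeholders:

# Opt a pod's volume into Restic (file-system) backup instead of, or in addition to, snapshots
kubectl -n mongodb annotate pod mongodb-0 backup.velero.io/backup-volumes=data-volume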
I am getting this error when restoring into a different namespace:
@aloknagaonkar This is normal for some cluster-scoped resources, e.g. CustomResourceDefinition "mongodb.mongodb.com", PersistentVolume "pvc-85be4a60-a0ff-478a-bc00-5cf7ae377aea", ClusterRoleBinding "mongodb-enterprise-operator-mongodb-webhook-binding" and ClusterRoleBinding "mongodb-enterprise-operator-mongodb-certs-binding".
@blackpiglet I have a similar scenario but would like to have a restoration option.
schedules:
  daily:
    schedule: "*/30 * * * *"
    template:
      ttl: "72h"
      snapshotVolumes: false
  snapshot:
    schedule: "*/30 * * * *"
    template:
      ttl: "72h"
      snapshotVolumes: true
      labelSelector:
        matchLabels:
          backup.velero.io/my-backup-volume: "true"
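For reference, roughly equivalent CLI invocations for the two schedules above (a sketch, not part of the original comment) would be:

# Schedule without volume snapshots
velero schedule create daily --schedule="*/30 * * * *" --ttl 72h --snapshot-volumes=false

# Schedule with volume snapshots, limited to labeled resources
velero schedule create snapshot --schedule="*/30 * * * *" --ttl 72h --snapshot-volumes=true \
  --selector backup.velero.io/my-backup-volume=true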
Using CSI features seems to work for case [1], but it fails to restore the PVC (azure-managed-disk) in case [2]. Log:
Is there any way to restore a PVC to an existing PV (one in the Released state)?
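One workaround that is sometimes used outside of Velero (not confirmed in this thread) is to clear the stale claimRef on the Released PV so a new PVC can bind to it; the PV name below is just an example taken from earlier in the thread:

# Turn a Released PV back to Available by removing its old claim reference
kubectl patch pv pvc-85be4a60-a0ff-478a-bc00-5cf7ae377aea --type=merge -p '{"spec":{"claimRef":null}}'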
@iusergii
Closing for now.
What steps did you take and what happened:
velero install
helm upgrade --install velero vmware-tanzu/velero --namespace velero \
  --set-file credentials.secretContents.cloud=./credentials-velero \
  --set configuration.provider=azure \
  --set configuration.backupStorageLocation.name=azure \
  --set configuration.backupStorageLocation.bucket=backup \
  --set configuration.backupStorageLocation.config.resourceGroup=xxxx \
  --set configuration.volumeSnapshotLocation.config.subscriptionId=xxxx \
  --set configuration.backupStorageLocation.config.subscriptionId=xxx \
  --set configuration.backupStorageLocation.config.storageAccount=xxxxxx \
  --set snapshotsEnabled=true \
  --set deployNodeAgent=true \
  --set configuration.volumeSnapshotLocation.name=azure \
  --set configuration.features=EnableCSI \
  --set image.repository=velero/velero \
  --set image.pullPolicy=Always \
  --set configuration.volumeSnapshotLocation.config.resourceGroup=app-network \
  --set configuration.volumeSnapshotLocation.config.snapshotLocation="East US" \
  -f custom-values.yaml
custom-values.yaml:
initContainers:
  - image: velero/velero-plugin-for-microsoft-azure:v1.6.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: plugins
  - image: velero/velero-plugin-for-csi:v0.3.2
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: plugins
# Extra K8s manifests to deploy
extraObjects:
  - kind: VolumeSnapshotClass
    metadata:
      name: csi-azuredisk-vsc
      labels:
        velero.io/csi-volumesnapshot-class: "true"
    driver: disk.csi.azure.com
    deletionPolicy: Retain
    parameters:
      resourcegroup: app-network
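A quick sanity check (not part of the original report) is to confirm the VolumeSnapshotClass was actually created in the cluster and carries the label the Velero CSI plugin looks for:

# The class should exist and show velero.io/csi-volumesnapshot-class=true
kubectl get volumesnapshotclass csi-azuredisk-vsc --show-labels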
We are taking a backup using Velero:
./velero backup create mysql1 --include-namespaces velero-test --volume-snapshot-locations azure --storage-location azure
and restoring it as:
./velero restore create mysql-restore2 --from-backup mysql1 --include-resources pvc,pv
While restoring, we get the below error:
time="2023-01-05T13:39:39Z" level=error msg="Namespace velero-test, resource restore error: error preparing persistentvolumeclaims/velero-test/pvc-mysql:
rpc error: code = Unknown desc = Failed to get Volumesnapshot velero-test/velero-pvc-mysql-mfffx to restore PVC velero-test/pvc-mysql: volumesnapshots.s
bundle-2023-01-05-20-01-05.zip
napshot.storage.k8s.io "velero-pvc-mysql-mfffx" not found" logSource="pkg/controller/restore_controller.go:531" restore=velero/mysql-restore2
time="2023-01-05T13:39:39Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:545" restore=velero/mysql-restore2
What did you expect to happen:
I expect the restore to work and the PV and PVC to get created.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+: updated
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
Velero version (use velero version):
W0105 14:32:14.816783 4226 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.25+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
Client:
Version: v1.9.5
Git commit: 2b5281f
Server:
Version: v1.10.0
WARNING: the client version does not match the server version. Please update client
Velero features (use velero client config get features):
features:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.12", GitCommit:"f941a31f4515c5ac03f5fc7ccf9a330e3510b80d", GitTreeState:"clean", BuildDate:"2022-11-09T17:12:33Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes installer & version:
Cloud provider or hardware configuration: AKS
OS (e.g. from /etc/os-release):