What happened:
When an XFS volume and its restored snapshot are NodeStaged to the same node, mounting the second volume fails with:
Warning FailedMount 3s (x4 over 9s) kubelet MountVolume.MountDevice failed for volume "pvc-95794d96-69b5-4fa8-8db9-51db152fa1aa" : rpc error: code = Internal desc = could not format "/dev/disk/azure/scsi1/lun1"(lun: "1"), and mount it at "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-95794d96-69b5-4fa8-8db9-51db152fa1aa/globalmount"
What you expected to happen:
Both volumes mount successfully.
How to reproduce it:
Create an XFS volume and use it in a Pod.
Take its snapshot.
Restore the snapshot as a separate PV/PVC.
Use the snapshot-restored PVC in another Pod and make sure it's scheduled to the same node as the Pod from step 1 (which is still running).
-> See the mount error. (Example manifests for the snapshot/restore objects are sketched below.)
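For reference, a minimal sketch of the snapshot and restore objects used in steps 2-3. The resource names, StorageClass, and VolumeSnapshotClass below are hypothetical placeholders, not the driver's defaults:

kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: xfs-pvc-snapshot                       # hypothetical name
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: xfs-pvc         # the XFS PVC from step 1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-pvc-restored
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi-xfs            # hypothetical StorageClass with fsType: xfs
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: xfs-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF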
Anything else we need to know?:
XFS does not allow mounting two block devices with the same filesystem UUID, which is exactly what the original volume and its restored snapshot are in this case.
Ext4 works without any issues.
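To illustrate the XFS behaviour outside Kubernetes, a rough sketch with hypothetical device names follows; the nouuid mount option and xfs_admin are standard xfsprogs-level tooling, not something the driver currently applies:

mkfs.xfs /dev/sdc
dd if=/dev/sdc of=/dev/sdd bs=1M    # /dev/sdd now carries the same filesystem UUID
mount /dev/sdc /mnt/a               # first mount succeeds
mount /dev/sdd /mnt/b               # fails: "Filesystem has duplicate UUID ... - can't mount"

# Possible XFS-level workarounds:
mount -o nouuid /dev/sdd /mnt/b     # skip the UUID check for this mount
xfs_admin -U generate /dev/sdd      # or regenerate the UUID before mounting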
Environment:
CSI Driver version: 1.3.0 (it should be reproducible on master too)
Detailed mount error from the driver log on the node:
E0730 13:24:37.991215 1 mount_linux.go:175] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o defaults /dev/disk/azure/scsi1/lun1 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-95794d96-69b5-4fa8-8db9-51db152fa1aa/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-95794d96-69b5-4fa8-8db9-51db152fa1aa/globalmount: wrong fs type, bad option, bad superblock on /dev/sdd, missing codepage or helper program, or other error.
Kernel log on the node:
[ 1107.755237] XFS (sdd): Filesystem has duplicate UUID 28eeacec-3fe2-49a0-bd60-be367dd6829c - can't mount