Restart csi-blobfuse-node daemonset would make current blobfuse mount unavailable #115
When the staged volume is broken, the CSI driver cannot recover.
It seems there is a similar problem in azuredisk-csi-driver: after deleting the csi-node pod, an error appears when creating a new nginx-azuredisk pod.
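To surface that error, describing the stuck pod shows the failed mount events reported by kubelet (a minimal diagnostic, using the pod name from the comment above):
kubectl describe pod nginx-azuredisk
# the Events section at the end lists the mount/attach failures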
The fuse driver issue is related to kubernetes/kubernetes#70013.
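For background: a blobfuse mount is backed by a user-space fuse daemon running inside the driver pod, so restarting the pod kills the daemon and orphans every mount it served. A rough way to check from the node whether a fuse mount is still alive (the kubelet volume path here is illustrative):
# list fuse mounts on the node (the fstype may show as fuse or fuse.blobfuse)
findmnt -t fuse,fuse.blobfuse
# stat through the mount point; a dead fuse mount typically fails with
# "Transport endpoint is not connected"
stat /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount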
Reopening this issue; it now depends on kubernetes/kubernetes#88569.
We also need to investigate the other two CSI drivers: when the driver daemonset is restarted, does the original mount point still work? The same fix may apply; a quick test is sketched below.
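A simple form of that test, using the azuredisk driver as the example (csi-azuredisk-node is the daemonset name deployed by azuredisk-csi-driver; the nginx-azuredisk pod and the /mnt/azuredisk path are assumptions):
# restart the node daemonset and wait for it to come back
kubectl rollout restart daemonset/csi-azuredisk-node -n kube-system
kubectl rollout status daemonset/csi-azuredisk-node -n kube-system
# write through the pre-existing mount from the application pod;
# an I/O error here means the restart broke the mount
kubectl exec nginx-azuredisk -- touch /mnt/azuredisk/probe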
@ZeroMagic could you repro this issue? I have run the azure disk CSI driver daemonset restart test and did not find any issue.
I tried it again, but this time it was the same as for you: everything was normal. Maybe there was some kind of illegal operation last time.
Update:
@ZeroMagic I think it could be due to this commit: there is a field name change from
kubernetes/kubernetes#88569 was merged into k8s v1.18.0 and is also being cherry-picked to k8s v1.15, 1.16, and 1.17.
Update: this issue is actually not fixed; restarting the blob driver daemonset still makes the current blobfuse mount unavailable. The workaround is to delete the pod with the blobfuse mount; the remount then works with the fix in kubernetes/kubernetes#88569. To permanently fix this, a new proxy (running as a host process) should mount blobfuse outside of the driver daemonset, like csi-proxy on Windows. Another workaround is to avoid blobfuse mounts and use the NFS protocol instead; in the long term we may recommend users use NFS on Linux, so we don't need to implement blobfuse-proxy.
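The point of such a proxy is to decouple the fuse daemon's lifetime from the driver pod: the node plugin hands the mount request to a long-lived host service, which launches blobfuse in a host-level unit. A conceptual sketch of that idea using systemd-run; the unit name, paths, and config file are illustrative and not the actual blobfuse-proxy implementation:
# launch blobfuse under systemd on the host rather than inside the driver pod,
# so the fuse process (and the mount) survives a driver daemonset restart;
# storage account credentials come from the config file
systemd-run --unit=blobfuse-<volume-id> \
  blobfuse <target-path> --tmp-path=/mnt/blobfusetmp-<volume-id> \
  --container-name=<container> --config-file=/etc/blobfuse-<volume-id>.cfg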
I get this issue when upgrading a cluster. Rebooting the nodes afterwards seems to resolve the issue.
Too soon: it stopped working again. I think it probably didn't successfully mount.
Using blobfuse-proxy could mitigate this issue:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/blob-csi-driver/master/deploy/blobfuse-proxy/blobfuse-proxy.yaml
Please try blobfuse-proxy: https://github.com/kubernetes-sigs/blob-csi-driver/tree/master/deploy/blobfuse-proxy. It is the default setting from v1.6.0; blobfuse-proxy keeps existing blobfuse mounts available after a driver restart.
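With the proxy enabled, the restart behavior can be verified directly (daemonset and pod names taken from this report; the /mnt/blobfuse path is an assumption):
# restart the node driver and wait for the rollout to finish
kubectl rollout restart daemonset/csi-blobfuse-node -n kube-system
kubectl rollout status daemonset/csi-blobfuse-node -n kube-system
# with blobfuse-proxy, the existing mount should still answer reads
kubectl exec nginx-blobfuse -- ls /mnt/blobfuse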
btw, restart
What happened:
kubectl delete po csi-blobfuse-node-8ttf5 -n kube-system
would make the current blobfuse mount inaccessible; deleting the current nginx-blobfuse pod and creating a new nginx-blobfuse pod remounts successfully (the workaround noted above).
What you expected to happen:
How to reproduce it:
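A condensed reproduction sketch based on this report (the nginx-blobfuse pod and its /mnt/blobfuse mount path are assumptions):
# 1. run a pod with a blobfuse-backed volume and confirm the mount works
kubectl exec nginx-blobfuse -- ls /mnt/blobfuse
# 2. delete the node driver pod serving that node
kubectl delete po csi-blobfuse-node-8ttf5 -n kube-system
# 3. access the mount again; without blobfuse-proxy this now fails
kubectl exec nginx-blobfuse -- ls /mnt/blobfuse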
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):
Kernel (e.g. uname -a):