Not able to mount volume to csi-app pod #152

Closed
Kartik494 opened this issue Feb 12, 2020 · 8 comments
@Kartik494 (Member) commented Feb 12, 2020

While running the example application to check and validate the deployment, the csi-app pod gets stuck in the ContainerCreating state. Here is the description:

Name: my-csi-app
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Wed, 12 Feb 2020 14:59:44 +0530
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-csi-app","namespace":"default"},"spec":{"containers":[{"command":[...
Status: Pending
IP:
IPs:
Containers:
my-frontend:
Container ID:
Image: busybox
Image ID:
Port:
Host Port:
Command:
sleep
1000000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
Mounts:
/data from my-csi-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-thlml (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
my-csi-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: csi-pvc
ReadOnly: false
default-token-thlml:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-thlml
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 21m default-scheduler Successfully assigned default/my-csi-app to minikube
Normal SuccessfulAttachVolume 21m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8"
Warning FailedMount 17m (x10 over 21m) kubelet, minikube MountVolume.MountDevice failed for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name hostpath.csi.k8s.io not found in the list of registered CSI drivers
Warning FailedMount 3m47s (x5 over 17m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[my-csi-volume default-token-thlml]: timed out waiting for the condition
Warning FailedMount 93s (x4 over 19m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[default-token-thlml my-csi-volume]: timed out waiting for the condition
Warning FailedMount 68s (x8 over 15m) kubelet, minikube MountVolume.SetUp failed for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = NotFound desc = volume id 344eeca7-4d7a-11ea-b921-0242ac110005 does not exit in the volumes list

Here is the description of the PV:
Name: pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8
Labels:
Annotations: pv.kubernetes.io/provisioned-by: hostpath.csi.k8s.io
Finalizers: [kubernetes.io/pv-protection]
StorageClass: csi-hostpath-sc
Status: Bound
Claim: default/csi-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: hostpath.csi.k8s.io
VolumeHandle: 344eeca7-4d7a-11ea-b921-0242ac110005
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1581499491410-8081-hostpath.csi.k8s.io
Events:

Here is the description of the PVC:
Name: csi-pvc
Namespace: default
StorageClass: csi-hostpath-sc
Status: Bound
Volume: pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"csi-pvc","namespace":"default"},"spec":{"accessMode...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: hostpath.csi.k8s.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: my-csi-app
Events:
Type Reason Age From Message


Normal ExternalProvisioning 24m persistentvolume-controller waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator
Normal Provisioning 24m hostpath.csi.k8s.io_csi-snapshotter-0_56c59124-742c-49a0-9b43-7433d37f0584 External provisioner is provisioning volume for claim "default/csi-pvc"
Normal ProvisioningSucceeded 24m hostpath.csi.k8s.io_csi-snapshotter-0_56c59124-742c-49a0-9b43-7433d37f0584 Successfully provisioned volume pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8

Can anyone suggest why the csi-app pod's volume mount failed? I am deploying the csi-hostpath driver on Kubernetes 1.17.
Thanks

@pohly (Contributor) commented Feb 12, 2020

This seems to be the relevant error:

Warning FailedMount 17m (x10 over 21m) kubelet, minikube MountVolume.MountDevice failed for volume "pvc-efb22d61-2728-4e6b-8ed3-cc7554adaaa8" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name hostpath.csi.k8s.io not found in the list of registered CSI drivers

You are deploying on minikube, right? Which version of it? See also #53
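Since the error says the driver is "not found in the list of registered CSI drivers", a quick diagnostic sketch (standard kubectl commands; the node name `minikube` is taken from the pod description above) would be to compare what the cluster and the kubelet each know about:

```shell
# CSIDriver objects known to the cluster
kubectl get csidrivers

# Drivers actually registered on the node; hostpath.csi.k8s.io
# should appear under spec.drivers if node registration worked
kubectl get csinode minikube -o yaml

# The hostpath plugin pods themselves, in case one is not Running
kubectl get pods -o wide | grep csi-hostpath
```

If `hostpath.csi.k8s.io` is missing from the CSINode object, the node-driver-registrar sidecar never completed registration, which matches the MountDevice failure.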

@Kartik494 (Author)

@pohly Yes, I am using minikube version v1.7.2.

@Kartik494 (Author)

Hi @pohly, I was facing this issue with v1.7, but when I later deployed the csi-hostpath driver with minikube 1.16, the csi-app pod mounted the volume successfully. While doing so I realized there is a missing step regarding the VolumeSnapshotClass: when deploying the hostpath driver on Kubernetes 1.16, the VolumeSnapshotClass is not created, and it has to be applied explicitly from csi-driver-host-path/deploy/kubernetes-1.16/snapshotter. So, if you don't mind, should I modify the docs and add this step?
Thanks
https://github.com/kubernetes-csi/csi-driver-host-path

@Madhu-1 (Contributor) commented Feb 13, 2020

@Kartik494 I believe you are using 1.17, not 1.7.

On Kubernetes 1.16 the snapshot CRDs are created by the snapshot sidecar container, which is why you are not seeing the issue there; on Kubernetes 1.17 this is no longer the case.

As per the CSI snapshotter docs, distributors have to bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process:

https://github.com/kubernetes-csi/external-snapshotter/blob/release-2.0/README.md#usage

The current hostpath code here

while [ $(kubectl get pods 2>/dev/null | grep '^csi-hostpath.* Running ' | wc -l) -lt $expected_running_pods ] || ! kubectl describe volumesnapshotclasses.snapshot.storage.k8s.io 2>/dev/null >/dev/null; do

only checks for the snapshot class CRD; it does not create it.
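A standalone version of that check, as a sketch (the CRD names are the snapshot.storage.k8s.io group names used by external-snapshotter around this release):

```shell
# Report any snapshot CRDs that are missing from the cluster;
# the deploy script waits on these but does not install them
for crd in volumesnapshotclasses volumesnapshots volumesnapshotcontents; do
  kubectl get crd "${crd}.snapshot.storage.k8s.io" >/dev/null 2>&1 \
    || echo "missing CRD: ${crd}.snapshot.storage.k8s.io"
done
```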

I think we need to check whether the CRDs and the controller exist, and if not, do we need to create them as part of the hostpath driver deployment?

@pohly any suggestions?

@Kartik494 (Author) commented Feb 13, 2020

@Madhu-1 Thanks for the suggestion. My concern is that when I deployed on Kubernetes 1.16, all the CRDs were created, but the VolumeSnapshotClass (volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass) was not created in my scenario. For the VolumeSnapshotClass we have to explicitly apply the YAML under csi-driver-host-path/deploy/kubernetes-1.16/snapshotter, and this is not mentioned in the docs. So I think I should add this step, since it has to be applied explicitly.

@pohly (Contributor) commented Feb 13, 2020

I think we need to check the CRD's and controller exists or not, if not do we need to create it as part of hostpath driver deployment?

That would only be a stop-gap solution. As you pointed out, the CRD installation is no longer owned by the driver.

@xing-yang: what do you think?

@Kartik494 (Author)

@Madhu-1 @pohly I will try to clarify my situation. The problem has two parts:

  1. While using k8s version v1.16, as per the README, I only have to run deploy/kubernetes-1.16/deploy-hostpath.sh to create the hostpath driver and get the expected output as mentioned. But the command did not create the VolumeSnapshotClass; I had to apply the VolumeSnapshotClass file explicitly to get the desired results.
    I am not sure why this is happening. It must be addressed either in the script or in the main README.

  2. While using k8s version v1.17, when I ran deploy/kubernetes-1.17/deploy-hostpath.sh I got all the resources (including all the beta CRDs) installed, but I am unable to create the volume object (I got the error mentioned initially) while applying my-csi-app. I would like some help with this.
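For part (1), a sketch of the explicit step that appears to be missing from the docs (paths are from this repo, run from a csi-driver-host-path checkout; `kubectl apply -f` on a directory applies every manifest in it):

```shell
# After deploy/kubernetes-1.16/deploy-hostpath.sh, apply the
# VolumeSnapshotClass manifests the script does not create itself
kubectl apply -f deploy/kubernetes-1.16/snapshotter/

# Confirm the snapshot class now exists
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
```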

As always, please tell me if something does not make sense. I would be happy to explain further. Thanks

@Kartik494 (Author)

Hi, I am closing this issue as the volume now mounts successfully.
Thanks

TerryHowe pushed a commit to TerryHowe/csi-driver-host-path that referenced this issue Oct 17, 2024