
Unable to provision VolumeMode: Block PVC #83

Closed
humblec opened this issue Aug 2, 2019 · 8 comments

humblec (Contributor) commented Aug 2, 2019

While creating a block PVC with the hostpath driver, the PVC stays in Pending state forever. The plugin log shows the following:

I0802 07:57:56.276473       1 server.go:117] GRPC call: /csi.v1.Controller/CreateVolume
I0802 07:57:56.276489       1 server.go:118] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-3a3698ee-b4fb-11e9-b29c-02160dfd70a0","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]}
I0802 07:57:56.281536       1 volume_path_handler_linux.go:41] Creating device for path: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830
I0802 07:57:56.709467       1 volume_path_handler_linux.go:75] Failed device create command for path: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830 exit status 1 losetup: /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830: failed to set up loop device: No such file or directory
E0802 07:57:56.709513       1 controllerserver.go:160] failed to attach device: exit status 1
E0802 07:57:56.709634       1 controllerserver.go:163] failed to cleanup block file /csi-data-dir/3ce27dde-b4fb-11e9-927d-02c40ff6a830: <nil>
E0802 07:57:56.709647       1 server.go:121] GRPC error: rpc error: code = Internal desc = failed to attach device: exit status 1

The hostpath driver was deployed via deploy-hostpath.sh from the deploy/kubernetes-1.14 directory.
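
For reference, a reproduction along these lines triggers the same CreateVolume failure. This is a sketch only: the storage class name csi-hostpath-sc is an assumption based on the example manifests shipped with this driver, not the exact manifest used here.

# Hypothetical reproduction; storageClassName assumed from the example manifests.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-block
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 1Gi           # matches required_bytes 1073741824 in the log above
  storageClassName: csi-hostpath-sc
EOF
kubectl get pvc csi-pvc-block   # stays Pending while CreateVolume keeps failing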

Env Details:

[root@localhost kubernetes-1.14]# uname -a
Linux localhost.localdomain 5.1.6-200.fc29.x86_64 #1 SMP Mon Jun 3 17:20:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost kubernetes-1.14]# cat /etc/redhat-release 
Fedora release 29 (Twenty Nine)
[root@localhost kubernetes-1.14]# lsmod |grep loop
loop                   36864  4
[root@localhost kubernetes-1.14]# ll /dev/loop*
loop0         loop1         loop2         loop8         loop-control  
[root@localhost kubernetes-1.14]# 
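
For context, volume_path_handler_linux.go essentially creates a backing file and attaches it to a loop device via losetup. A rough manual equivalent is sketched below; the backing-file path is illustrative, not the driver's exact invocation.

# Sketch of the loop-device setup the driver attempts; path is illustrative only.
truncate -s 1G /csi-data-dir/test-backing-file          # create a sparse backing file
losetup --find --show /csi-data-dir/test-backing-file   # attach it to a free loop device
losetup -a                                               # list current loop attachments
# losetup reporting "No such file or directory" usually means the backing file
# is missing at attach time or no free /dev/loopN node is available.
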
humblec (Contributor, Author) commented Aug 2, 2019

/assign @darkowlzz
/assign @pohly

k8s-ci-robot (Contributor) commented:

@humblec: GitHub didn't allow me to assign the following users: darkowlzz.

Note that only kubernetes-csi members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @darkowlzz
/assign @pohly

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

humblec (Contributor, Author) commented Aug 2, 2019

@bswartz have you faced a similar issue in your setup? :)

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 31, 2019.
msau42 (Collaborator) commented Nov 1, 2019

This might be fixed by #109

fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 1, 2019.
msau42 (Collaborator) commented Dec 6, 2019

/close

k8s-ci-robot (Contributor) commented:

@msau42: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
