
Perform user acceptance test #107

Closed
leakingtapan opened this issue Nov 13, 2018 · 9 comments


leakingtapan commented Nov 13, 2018

  • Go through README
  • Try out example CSI driver
  • Report any issue found
leakingtapan added this to the alpha milestone Nov 13, 2018
leakingtapan commented

/cc @Jeffwan


Jeffwan commented Nov 14, 2018

Found two typos:

kubectl create -f deploy/kubernetes/v.12+ should be kubectl create -f deploy/kubernetes/v1.12+

kubectl create -f deploy/kubernetes/v.12+/sample_app should be kubectl create -f deploy/kubernetes/v1.12+/sample_app


Jeffwan commented Nov 14, 2018

I am trying to resize a volume and got the following errors. Is this expected?

$:~/aws-ebs-csi-driver# kubectl apply -f deploy/kubernetes/v1.12+/sample_app

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/app configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
storageclass.storage.k8s.io/slow configured


Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"claim1\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"7Gi\"}},\"storageClassName\":\"slow\"}}\n"}},"spec":{"resources":{"requests":{"storage":"7Gi"}}}}
to:

Resource: "/v1, Resource=persistentvolumeclaims", GroupVersionKind: "/v1, Kind=PersistentVolumeClaim"
Name: "claim1", Namespace: "default"
Object: &{map["kind":"PersistentVolumeClaim" "apiVersion":"v1" "metadata":map["resourceVersion":"2890" "creationTimestamp":"2018-11-14T21:59:06Z" "annotations":map["pv.kubernetes.io/bind-completed":"yes" "pv.kubernetes.io/bound-by-controller":"yes" "volume.beta.kubernetes.io/storage-provisioner":"com.amazon.aws.csi.ebs"] "finalizers":["kubernetes.io/pvc-protection"] "name":"claim1" "namespace":"default" "selfLink":"/api/v1/namespaces/default/persistentvolumeclaims/claim1" "uid":"81de681b-e858-11e8-9641-061447ae7e4e"] "spec":map["accessModes":["ReadWriteOnce"] "resources":map["requests":map["storage":"4Gi"]] "volumeName":"pvc-81de681b-e858-11e8-9641-061447ae7e4e" "storageClassName":"slow" "dataSource":<nil>] "status":map["phase":"Bound" "accessModes":["ReadWriteOnce"] "capacity":map["storage":"4Gi"]]]}


for: "deploy/kubernetes/v1.12+/sample_app/claim.yaml": persistentvolumeclaims "claim1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize


Jeffwan commented Nov 14, 2018

Cannot find the target file; the link in README.md should be updated to the latest location.

$:~/aws-ebs-csi-driver# kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csinodeinfo.yaml --validate=false

error: unable to read URL "https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csinodeinfo.yaml", server reported 404 Not Found, status code=404
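In the meantime, a way to fail fast on a moved manifest before handing it to kubectl (curl -f exits non-zero on HTTP errors such as this 404, so kubectl only runs if the fetch succeeded):

curl -fsSL https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csinodeinfo.yaml -o csinodeinfo.yaml \
  && kubectl apply -f csinodeinfo.yaml --validate=false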


Jeffwan commented Nov 14, 2018

Another question about ACCESS_KEY_ID and SECRET_ACCESS_KEY: should the doc specify the right permissions for the IAM user? I tried revoking permissions and got the following error when claiming storage.

$:~/aws-ebs-csi-driver# kubectl describe pvc claim2
Name:          claim2
Namespace:     default
StorageClass:  fast
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: com.amazon.aws.csi.ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type       Reason                Age                From                                                                           Message
  ----       ------                ----               ----                                                                           -------
  Warning    ProvisioningFailed    20s                persistentvolume-controller                                                    storageclass.storage.k8s.io "fast" not found
  Normal     ExternalProvisioning  15s (x2 over 15s)  persistentvolume-controller                                                    waiting for a volume to be created, either by external provisioner "com.amazon.aws.csi.ebs" or manually created by system administrator
  Normal     Provisioning          15s                com.amazon.aws.csi.ebs_csi-provisioner-0_5535714b-e858-11e8-bfc3-0a580af40004  External provisioner is provisioning volume for claim "default/claim2"
  Warning    ProvisioningFailed    15s                com.amazon.aws.csi.ebs_csi-provisioner-0_5535714b-e858-11e8-bfc3-0a580af40004  failed to provision volume with StorageClass "fast": rpc error: code = Internal desc = UnauthorizedOperation: You are not authorized to perform this operation.
             status code: 403, request id: 6b8f82f6-3253-412c-b268-1dbe4afcdaeb
Mounted By:  app2


Jeffwan commented Nov 14, 2018

cluster version v1.10.3
aws-ebs-csi-driver manifest version v1.12+

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  19m (x3 over 19m)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 2 times)
  Warning  FailedScheduling  8m (x35 over 18m)  default-scheduler  0/2 nodes are available: 2 node(s) had volume node affinity conflict.
  Warning  FailedScheduling  3m                 default-scheduler  0/5 nodes are available: 2 node(s) had volume node affinity conflict, 3 node(s) were not ready.

I tried both 2- and 5-node clusters but it still failed; it looks like it violates some volume node affinity policy.

cluster version v1.10.3
aws-ebs-csi-driver manifest version v1.10

I get a few warnings about InvalidVolume.ZoneMismatch. Even though the volume eventually mounts on the pod, the pod is still stuck in initialization.

  Events:
  Type     Reason                 Age              From                                                   Message
  ----     ------                 ----             ----                                                   -------
  Warning  FailedScheduling       1m (x4 over 1m)  default-scheduler                                      pod has unbound PersistentVolumeClaims (repeated 5 times)
  Normal   Scheduled              1m               default-scheduler                                      Successfully assigned app to ip-192-168-14-144.us-west-2.compute.internal
  Normal   SuccessfulMountVolume  1m               kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.SetUp succeeded for volume "default-token-95ddr"
  Warning  FailedMount            1m               kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: 30a69e5e-1d46-4b71-818b-b383318ff75b
  Warning  FailedMount  1m  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: 9a380795-2994-43d3-a6ee-935b6aea4803
  Warning  FailedMount  1m  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: b87e0d53-fd6a-4aa7-bda5-a68c72d2c792
  Warning  FailedMount  1m  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: 8e02058a-e12b-4b7a-90aa-aa64d3315354
  Warning  FailedMount  1m  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: e98e8af7-44b7-4ea6-9ee1-c8147c887598
  Warning  FailedMount  1m  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: fef4ef9b-c307-4157-adaa-66d7328a0eb3
  Warning  FailedMount  57s  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: 3fdb417d-35ed-449e-baae-6142d151cdc1
  Warning  FailedMount  25s  kubelet, ip-192-168-14-144.us-west-2.compute.internal  MountVolume.WaitForAttach failed for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca" : rpc error: code = Internal desc = Could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": could not attach volume "vol-064eb8e390d2af354" to node "i-00088a2102e1917c7": InvalidVolume.ZoneMismatch: The volume 'vol-064eb8e390d2af354' is not in the same availability zone as instance 'i-00088a2102e1917c7'
           status code: 400, request id: f09e1adc-99ab-44a9-838e-cf250efd7f86
  Normal   SuccessfulAttachVolume  14s (x3 over 1m)  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-fae2375c-e866-11e8-891f-0250f9347cca"

but the pod is still not up for an unknown reason.

➜  v1.[10,11] git:(master) ✗ kubectl get pods
NAME                READY     STATUS              RESTARTS   AGE
app                 0/1       ContainerCreating   0          1m
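Standard next steps for a pod stuck in ContainerCreating (nothing driver-specific, just the usual kubectl digging):

kubectl describe pod app    # the Events section at the bottom names the failing attach/mount step
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl logs app            # only useful once the container has actually started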


Jeffwan commented Nov 15, 2018

Looks like the connection is unavailable due to an incorrect driver name. @leakingtapan reminded me there's a WIP branch to address this issue. I tested the manifests in that branch and they work perfectly.

  Warning  FailedMount             0s (x4 over 4s)    kubelet, ip-192-168-13-99.us-west-2.compute.internal  MountVolume.MountDevice failed for volume "pvc-1054a730-e86b-11e8-b149-0a22c3a2d8b8" : rpc error: code = Unavailable desc = grpc: the connection is unavailable
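A quick way to cross-check a driver-name mismatch like this, assuming the node-plugin pod and container names from this repo's manifests (they may differ on the WIP branch; both placeholders below are hypothetical):

kubectl -n kube-system logs <csi-node-pod> -c ebs-plugin     # node plugin logs show the name it registered with
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.driver}'    # name the PV expects; the two must match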


leakingtapan commented Nov 15, 2018

Thanks for proofreading the README and testing out the driver!

Found two typos:

kubectl create -f deploy/kubernetes/v.12+ should be kubectl create -f deploy/kubernetes/v1.12+

kubectl create -f deploy/kubernetes/v.12+/sample_app should be kubectl create -f deploy/kubernetes/v1.12+/sample_app

Will fix

I am trying to resize a volume and got the following errors. Is this expected?

Resize is not supported yet.

Cannot find the target file; the link in README.md should be updated to the latest location.

The link has moved. Will create an issue for the upstream doc and point to it.
Ref: kubernetes-csi/docs#80

Another question about ACCESS_KEY_ID and SECRET_ACCESS_KEY: should the doc specify the right permissions for the IAM user? I tried revoking permissions and got the following error when claiming storage.

Will update the README with suggestions.
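For illustration only, the shape of the IAM policy document involved; the exact EC2 action list will come from the README update, so treat this as a sketch rather than the authoritative set:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}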

I get a few warnings about InvalidVolume.ZoneMismatch. Even though the volume eventually mounts on the pod, the pod is still stuck in initialization.

This is because Kubernetes v1.10 does not have the volume scheduling feature yet.
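For what it's worth, on clusters where volume scheduling is available (the v1.12+ manifests here), the StorageClass can delay binding until a pod is scheduled, so the volume gets created in that node's availability zone. A minimal sketch, reusing the sample app's class name and the driver's provisioner name:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: com.amazon.aws.csi.ebs
volumeBindingMode: WaitForFirstConsumer   # provision in the zone of the node the pod lands on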

leakingtapan commented

I tried both 2- and 5-node clusters but it still failed; it looks like it violates some volume node affinity policy.

This is related to a known behavior: #61 (comment)
