
EFS Dynamic provisioning: gid & uid parameters are ignored #726

Closed
AlexandreBrown opened this issue Jun 23, 2022 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@AlexandreBrown

/kind bug

What happened?
The access point created from a storage class with uid & gid specified ends up with an arbitrary uid and gid that do not reflect the specified parameters.
What you expected to happen?
I expected the files/folders created to be owned by the uid & gid specified in the storage class parameters.
How to reproduce it (as minimally and precisely as possible)?
Storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
parameters:
  directoryPerms: "700"
  fileSystemId: fs-xxxx
  gid: "100"
  uid: "1000"
  provisioningMode: efs-ap
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
  1. Set up an EFS file system with dynamic provisioning
  2. Apply the storage class
  3. Create a PVC
  4. Notice how the access point that was automatically created has a different gid and uid
    (Screenshots: the automatically created access point shows a POSIX uid/gid different from the uid=1000/gid=100 set in the storage class.)
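For step 3, a minimal PVC like the following (the claim name and size here are illustrative, not from my actual setup) is enough to trigger dynamic provisioning against the efs-sc storage class:

```yaml
# Hypothetical PVC for reproduction step 3; name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany          # EFS supports concurrent multi-node access
  storageClassName: efs-sc   # must match the storage class above
  resources:
    requests:
      storage: 5Gi           # size is required by the API but not enforced by EFS
```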
kubectl describe sc efs-sc
Name:            efs-sc
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"efs-sc"},"parameters":{"directoryPerms":"700","fileSystemId":"fs-xxxx","gid":"100","provisioningMode":"efs-ap","uid":"1000"},"provisioner":"efs.csi.aws.com","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           efs.csi.aws.com
Parameters:            directoryPerms=700,fileSystemId=fs-xxxx,gid=100,provisioningMode=efs-ap,uid=1000
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>  

Anything else we need to know?:
I am not manually creating a PVC; I am using the PVC from a Kubeflow Notebook set up to use AWS EFS (see https://awslabs.github.io/kubeflow-manifests/docs/deployment/add-ons/storage/efs/guide/).

Environment

  • Kubernetes version (use kubectl version): 1.21
  • Driver version: 1.36
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 23, 2022
@jonathanrainer
Contributor

Hi @AlexandreBrown, I've had a look into this and I think you just need to use a slightly later version of the driver. I know there were some issues around exactly when this fix was included, but I just spun up a cluster running Helm chart 2.2.7 / app version 1.4.0 and the GID/UID got passed through without a problem.
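For anyone else hitting this, upgrading the driver via Helm might look roughly like the following (the chart repository URL and release name follow the aws-efs-csi-driver project's published Helm instructions; adjust the namespace to wherever your driver is installed):

```sh
# Add the driver's Helm repository and pull the fixed chart version
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system --version 2.2.7
```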

My storage class looked like the following:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2022-06-27T10:25:28Z"
  labels:
    provisioning-type: dynamic
  name: efs-dynamic
  resourceVersion: "6369"
  uid: 7b402743-a767-49c2-90e4-1364efc8ee25
parameters:
  basePath: /dynamic
  directoryPerms: "777"
  fileSystemId: fs-03f5b419ffc9a65f1
  gid: "5678"
  provisioningMode: efs-ap
  uid: "1234"
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
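A PVC along these lines (claim name and size are illustrative, not taken from my actual manifest) is what triggers provisioning against that class:

```yaml
# Hypothetical PVC referencing the efs-dynamic storage class above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-dynamic-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-dynamic  # volumeBindingMode: Immediate, so binding happens right away
  resources:
    requests:
      storage: 1Gi
```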

Then, when I created a PVC, the following access point appeared in AWS:
(Screenshot: the access point in the AWS console shows POSIX user uid 1234, gid 5678, matching the storage class parameters.)

This is also reflected on the mounted file system:

/ # ls -al
total 20
drwxr-xr-x    1 root     root            40 Jun 27 10:26 .
drwxr-xr-x    1 root     root            40 Jun 27 10:26 ..
-rwxr-xr-x    1 root     root             0 Jun 27 10:26 .dockerenv
drwxr-xr-x    2 root     root         12288 Jun  6 22:13 bin
drwxr-xr-x    5 root     root           360 Jun 27 10:26 dev
drwxrwxrwx    2 1234     5678          6144 Jun 27 10:26 efs
drwxr-xr-x    1 root     root            66 Jun 27 10:26 etc
drwxr-xr-x    2 nobody   nobody           6 Jun  6 22:13 home
dr-xr-xr-x  184 root     root             0 Jun 27 10:26 proc
drwx------    1 root     root            26 Jun 27 10:27 root
dr-xr-xr-x   13 root     root             0 Jun 27 10:26 sys
drwxrwxrwt    2 root     root             6 Jun  6 22:13 tmp
drwxr-xr-x    3 root     root            18 Jun  6 22:13 usr
drwxr-xr-x    1 root     root            17 Jun 27 10:26 var
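For completeness, a pod along these lines (image, pod name, and claim name are illustrative) mounts the claim at /efs, which is where the ls output above comes from:

```yaml
# Hypothetical test pod mounting the dynamically provisioned volume at /efs.
apiVersion: v1
kind: Pod
metadata:
  name: efs-check
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sh", "-c", "ls -al / && sleep 3600"]  # print ownership, then idle
      volumeMounts:
        - name: efs-vol
          mountPath: /efs
  volumes:
    - name: efs-vol
      persistentVolumeClaim:
        claimName: efs-dynamic-claim  # hypothetical claim name
```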

@AlexandreBrown
Author

@jonathanrainer Thanks a lot for your help, you were right, using v1.4.0 fixed my issue!
