
Run livenessprobe as non-root + Run all possible sidecars as nonRootGroup #65

Merged · 8 commits · Aug 27, 2024

Conversation

@prankulmahajan (Member) commented Jan 8, 2024

Detailed analysis and test results are in GHE: https://github.ibm.com/alchemy-containers/armada-storage/issues/5301

This PR does the following:

  1. In the deployment, run all possible sidecars with a non-root group. Currently, although the users are non-root, the group is still root.
  2. In the deployment, run the livenessprobe sidecar as a non-root user and group.

To achieve the second point, the file permissions of the CSI socket created by the node server had to be changed.
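For point 1, the change lives in the deployment manifests. A sketch of the kind of securityContext involved (the image tag and the exact field selection are illustrative assumptions, not this repo's actual manifest; only the 2121 user/group value comes from this PR):

```yaml
# Illustrative sidecar securityContext; image tag and field set are assumptions.
containers:
  - name: livenessprobe
    image: registry.k8s.io/sig-storage/livenessprobe:v2.12.0  # tag illustrative
    securityContext:
      runAsNonRoot: true
      runAsUser: 2121    # non-root user from the deployment configMap
      runAsGroup: 2121   # non-root group, also used as the socket's group
      allowPrivilegeEscalation: false
```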

# Default permissions of the CSI socket inside the node server -- (User 0, Group 0, Permissions 755) -- only root can write to (i.e. connect to) the socket
[root@ibm-vpc-file-csi-node-hxm8w /]# id
uid=0(root) gid=0(root) groups=0(root)
[root@ibm-vpc-file-csi-node-hxm8w /]# 
[root@ibm-vpc-file-csi-node-hxm8w /]# ls -la /csi/
total 8
drwxr-xr-x 2 root root 4096 Jan 23 07:29 .
drwxr-xr-x 1 root root 4096 Jan 23 07:29 ..
srwxr-xr-x 1 root root    0 Jan 23 07:29 csi-vpc-file.sock

Connecting to the socket requires read and write permission, i.e. the minimum mode is 600 or 660, depending on who needs to access the socket file.

This PR changes the group of the CSI socket from root to non-root (GID 2121, set in the deployment configMap) and updates the permissions to 660. Mode "660" allows both the root user and the non-root group to access the socket, as expected.

# Permissions post changes - node server
root@ibm-vpc-file-csi-node-l74rw:/# id
uid=0(root) gid=0(root) groups=0(root)
root@ibm-vpc-file-csi-node-l74rw:/# ls -la /csi/
total 8
drwxr-xr-x 2 root root 4096 Jan 23 07:38 .
drwxr-xr-x 1 root root 4096 Jan 23 07:38 ..
srw-rw---- 1 root 2121    0 Jan 23 07:38 csi-vpc-file.sock

# Controller server
❯ kubectl exec ibm-vpc-file-csi-controller-7b4bcf6cb9-tqxpn -n kube-system -c iks-vpc-file-driver -it -- bash
groups: cannot find name for group ID 2121
I have no name!@ibm-vpc-file-csi-controller-7b4bcf6cb9-tqxpn:/$ id
uid=2121 gid=2121 groups=2121
I have no name!@ibm-vpc-file-csi-controller-7b4bcf6cb9-tqxpn:/$ cd /csi/
I have no name!@ibm-vpc-file-csi-controller-7b4bcf6cb9-tqxpn:/csi$ ls -la
total 8
drwxrwxrwx 2 root root 4096 Jan 23 07:38 .
drwxr-xr-x 1 root root 4096 Jan 23 07:38 ..
srwxr-xr-x 1 2121 2121    0 Jan 23 07:38 csi-vpc-file.sock
srwxr-xr-x 1 2121 2121    0 Jan 23 07:38 providerfile.sock

Test functionality and logs

❯ kubectl get pods -n kube-system -o wide| grep file
ibm-vpc-file-csi-controller-7b4bcf6cb9-tqxpn          5/5     Running   0          2m36s   172.17.35.19    10.240.0.8   <none>           <none>
ibm-vpc-file-csi-controller-7b4bcf6cb9-xt2vh          5/5     Running   0          2m36s   172.17.20.221   10.240.0.7   <none>           <none>
ibm-vpc-file-csi-node-l74rw                           4/4     Running   0          2m37s   172.17.35.18    10.240.0.8   <none>           <none>
ibm-vpc-file-csi-node-wnprn                           4/4     Running   0          2m37s   172.17.20.220   10.240.0.7   <none>           <none>

# Create PVC and attach to pod
❯ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
my-pvc   Bound    pvc-7e09f908-b11c-4542-8356-5b1743bdf16f   20Gi       RWX            ibmc-vpc-file-dp2   2m7s
❯ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5c76969568-fqdkw   1/1     Running   0          12s
❯ 
❯ kubectl exec my-dep-5c76969568-fqdkw -it -- bash
root@my-dep-5c76969568-fqdkw:/# df -h
Filesystem                                         Size  Used Avail Use% Mounted on
overlay                                             98G  9.2G   84G  10% /
tmpfs                                               64M     0   64M   0% /dev
tmpfs                                              7.8G     0  7.8G   0% /sys/fs/cgroup
10.240.0.16:/4e2e6e1d_ca0a_431c_94a0_228f62447686   20G  320K   20G   1% /mount-path
/dev/vda2                                           98G  9.2G   84G  10% /etc/hosts
shm                                                 64M     0   64M   0% /dev/shm
tmpfs                                               13G   16K   13G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              7.8G     0  7.8G   0% /proc/acpi
tmpfs                                              7.8G     0  7.8G   0% /proc/scsi
tmpfs                                              7.8G     0  7.8G   0% /sys/firmware
root@my-dep-5c76969568-fqdkw:/# cd mount-path/
root@my-dep-5c76969568-fqdkw:/mount-path# echo "HI" > test.txt
root@my-dep-5c76969568-fqdkw:/mount-path# ls -la
total 8
drwxrwxr-x 2 nobody 4294967294 4096 Jan 23 07:46 .
drwxr-xr-x 1 root   root       4096 Jan 23 07:45 ..
-rw-r--r-- 1 nobody 4294967294    3 Jan 23 07:46 test.txt

prankulmahajan force-pushed the add_permissions branch 11 times, most recently from 1303135 to f917c77 on January 10, 2024
prankulmahajan force-pushed the add_permissions branch 3 times, most recently from c97c602 to f411175 on January 17, 2024
contsto commented Jan 17, 2024: 👍 Coverage increased from [%] to [68.175%]
contsto commented Jan 17, 2024: 👍 Coverage increased from [%] to [68.725%]
contsto commented Jan 19, 2024: 👍 Coverage increased from [%] to [68.3375%]
contsto commented Jan 19, 2024: 👍 Coverage increased from [%] to [68.0125%]
contsto commented Jan 22, 2024: 👍 Coverage increased from [%] to [68.9875%]

@ambiknai (Member) left a comment: Minor comments

@prankulmahajan prankulmahajan changed the title Create CSI socket file using non-root user Run livenessprobe as non-root + Run all possible sidecars as nonRootGroup Jan 23, 2024
contsto commented Jan 23, 2024: 👍 Coverage increased from [%] to [69.35%]

ambiknai previously approved these changes Jan 23, 2024
contsto commented Jan 23, 2024: 👍 Coverage increased from [%] to [69.55%]

@sameshai (Member) left a comment:

@prankulmahajan in case of any error in this setup, the driver won't be running?

contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [82.0222%]
contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [82.2556%]
contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [81.9111%]
contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [82.2556%]
contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [82.2556%]
contsto commented Aug 22, 2024: 👍 Coverage increased from [%] to [82.2556%]
contsto commented Aug 23, 2024: 👍 Coverage increased from [%] to [82.2556%]

@prankulmahajan (Member Author) commented Aug 23, 2024

Test Results: ROKS

  • Build file driver image
[+] Building 26.6s (11/11) FINISHED                                                                    docker:default
 => [internal] load build definition from Dockerfile                                                             0.0s
 => => transferring dockerfile: 1.14kB                                                                           0.0s
 => [internal] load metadata for docker.io/library/ubuntu:16.04                                                  0.6s
 => [auth] library/ubuntu:pull token for registry-1.docker.io                                                    0.0s
 => [internal] load .dockerignore                                                                                0.0s
 => => transferring context: 2B                                                                                  0.0s
 => [internal] load build context                                                                                0.9s
 => => transferring context: 63.15MB                                                                             0.9s
 => [1/5] FROM docker.io/library/ubuntu:16.04@sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6f  3.0s
 => => resolve docker.io/library/ubuntu:16.04@sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6f  0.0s
 => => sha256:b6f50765242581c887ff1acc2511fa2d885c52d8fb3ac8c4bba131fd86567f2e 3.36kB / 3.36kB                   0.0s
 => => sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50 46.50MB / 46.50MB                 0.5s
 => => sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01 857B / 857B                       0.2s
 => => sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1 528B / 528B                       0.2s
 => => sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6ffee0d6 1.42kB / 1.42kB                   0.0s
 => => sha256:a3785f78ab8547ae2710c89e627783cfa7ee7824d3468cae6835c9f4eae23ff7 1.15kB / 1.15kB                   0.0s
 => => sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e 170B / 170B                       0.2s
 => => extracting sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50                        2.3s
 => => extracting sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01                        0.0s
 => => extracting sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1                        0.0s
 => => extracting sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e                        0.0s
 => [2/5] RUN apt-get update && apt-get install -y --no-install-recommends nfs-common &&    apt-get install -y  20.0s
 => [3/5] RUN mkdir -p /home/ibm-csi-drivers/                                                                    0.2s
 => [4/5] ADD ibm-vpc-file-csi-driver /home/ibm-csi-drivers                                                      0.7s
 => [5/5] RUN chmod +x /home/ibm-csi-drivers/ibm-vpc-file-csi-driver                                             1.3s
 => exporting to image                                                                                           0.6s
 => => exporting layers                                                                                          0.6s
 => => writing image sha256:d7aab2c78f0b26b9a771838672f4b532569620f7b3a239d20ad7adc9d2c590ee                     0.0s
 => => naming to docker.io/library/ibm-vpc-file-csi-driver:latest-amd64                                          0.0s
docker tag ibm-vpc-file-csi-driver:latest-amd64 ibm-vpc-file-csi-driver:latest

root@docker-build:~# docker tag d7aab2c78f0b prankulibm/ibm-vpc-file-csi-driver:nonroot
❯ kubectl apply -f manifests
configmap/addon-vpc-file-csi-driver-configmap created
rolebinding.rbac.authorization.k8s.io/controller-server-binding created
serviceaccount/ibm-vpc-file-controller-sa created
configmap/ibm-vpc-file-csi-configmap created
deployment.apps/ibm-vpc-file-csi-controller created
csidriver.storage.k8s.io/vpc.file.csi.ibm.io created
daemonset.apps/ibm-vpc-file-csi-node created
serviceaccount/ibm-vpc-file-node-sa created
rolebinding.rbac.authorization.k8s.io/node-server-binding created
role.rbac.authorization.k8s.io/vpc-file-csi-driver-role created
clusterrolebinding.rbac.authorization.k8s.io/vpc-file-driver-registrar-binding created
clusterrole.rbac.authorization.k8s.io/vpc-file-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/vpc-file-external-resizer-binding created
clusterrole.rbac.authorization.k8s.io/vpc-file-external-resizer-role created
clusterrolebinding.rbac.authorization.k8s.io/vpc-file-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/vpc-file-provisioner-role created
  • Check file pod status and confirm from the logs that the livenessprobe/registrar connection is established
❯ kubectl get pods -n kube-system -o wide | grep file
ibm-vpc-file-csi-controller-5b5466cdf5-mllcv   5/5     Running   0             47s   172.17.31.153   10.240.0.57   <none>           <none>
ibm-vpc-file-csi-controller-5b5466cdf5-tjbjj   5/5     Running   0             47s   172.17.26.83    10.240.0.58   <none>           <none>
ibm-vpc-file-csi-node-7vzgh                    4/4     Running   0             44s   10.240.0.57     10.240.0.57   <none>           <none>
ibm-vpc-file-csi-node-hswsc                    4/4     Running   0             44s   10.240.0.58     10.240.0.58   <none>           <none>

❯ kubectl logs ibm-vpc-file-csi-node-7vzgh -n kube-system -c iks-vpc-file-node-driver
{"level":"info","timestamp":"2024-08-23T10:01:14.631Z","caller":"cmd/main.go:85","msg":"IBM CSI driver version","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","DriverVersion":"vpcFileDriver-"}
{"level":"info","timestamp":"2024-08-23T10:01:14.631Z","caller":"cmd/main.go:86","msg":"Controller Mutex Lock enabled","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","LockEnabled":true}
{"level":"info","timestamp":"2024-08-23T10:01:14.633Z","caller":"ibmcloudprovider/volume_provider.go:49","msg":"NewIBMCloudStorageProvider-Reading provider configuration...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"warn","timestamp":"2024-08-23T10:01:14.750Z","caller":"ibmcloudprovider/volume_provider.go:61","msg":"Failed to parse VPC_API_VERSION, setting default value","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:14.750Z","caller":"ibmcloudprovider/volume_provider.go:67","msg":"Fetching clusterInfo","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:14.780Z","caller":"ibmcloudprovider/volume_provider.go:73","msg":"Fetched clusterInfo..","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:14.780Z","caller":"utils/init_provider.go:43","msg":"Configuring VPC File Provider","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:14.780Z","caller":"provider/provider.go:72","msg":"Entering NewProvider","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:14.780Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:25.245Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:25.271Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:25.272Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:25.701Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:25.719Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:25.719Z","caller":"utils/init_provider.go:55","msg":"Configuring IKS-VPC FILE Provider","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:25.719Z","caller":"provider/provider.go:72","msg":"Entering NewProvider","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:25.719Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.044Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.063Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.063Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.144Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.164Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.164Z","caller":"provider/provider.go:72","msg":"Entering NewProvider","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.164Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.446Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.464Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.464Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.543Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.565Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.569Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.844Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.862Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.862Z","caller":"secret_provider/managed_secret_provider.go:54","msg":"Connecting to sidecar","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.948Z","caller":"secret_provider/managed_secret_provider.go:92","msg":"Unable to fetch endpoints from cloud-conf","name":"secret-provider","secret-provider-type":"managed-secret-provider","error":"configmaps \"cloud-conf\" not found"}
{"level":"info","timestamp":"2024-08-23T10:01:26.969Z","caller":"secret_provider/managed_secret_provider.go:99","msg":"Initialized managed secret provider","name":"secret-provider","secret-provider-type":"managed-secret-provider"}
{"level":"info","timestamp":"2024-08-23T10:01:26.969Z","caller":"utils/init_provider.go:66","msg":"Provider registration done!!!","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.969Z","caller":"ibmcloudprovider/volume_provider.go:101","msg":"Successfully read provider configuration","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:57","msg":"IBMCSIDriver-SetupIBMCSIDriver setting up IBM CSI Driver...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:116","msg":"IBMCSIDriver-AddVolumeCapabilityAccessModes...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","VolumeCapabilityAccessModes":[1,5]}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:119","msg":"Enabling volume access mode","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Mode":"SINGLE_NODE_WRITER"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:119","msg":"Enabling volume access mode","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Mode":"MULTI_NODE_MULTI_WRITER"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:123","msg":"Successfully enabled Volume Capability Access Modes","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:129","msg":"IBMCSIDriver-AddControllerServiceCapabilities...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","ControllerServiceCapabilities":[1,3,9]}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:132","msg":"Adding controller service capability","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Capability":"CREATE_DELETE_VOLUME"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:132","msg":"Adding controller service capability","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Capability":"LIST_VOLUMES"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:132","msg":"Adding controller service capability","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Capability":"EXPAND_VOLUME"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:136","msg":"Successfully added Controller Service Capabilities","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:142","msg":"IBMCSIDriver-AddNodeServiceCapabilities...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","NodeServiceCapabilities":[2]}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:145","msg":"Adding node service capability","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","NodeServiceCapabilities":"GET_VOLUME_STATS"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:149","msg":"Successfully added Node Service Capabilities","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"ibmcsidriver/ibm_csi_driver.go:109","msg":"Successfully setup IBM CSI driver","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.973Z","caller":"cmd/main.go:110","msg":"Successfully initialized driver...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"cmd/main.go:125","msg":"Starting metrics endpoint","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/ibm_csi_driver.go:196","msg":"IBMCSIDriver-Run...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Endpoint":"unix:/csi/csi-vpc-file.sock"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/ibm_csi_driver.go:197","msg":"CSI Driver Name","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Name":"vpc.file.csi.ibm.io"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/server.go:153","msg":"nonBlockingGRPCServer-serve...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Endpoint":"unix:/csi/csi-vpc-file.sock"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/server.go:86","msg":"nonBlockingGRPCServer-Setup...","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Endpoint":"unix:/csi/csi-vpc-file.sock"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/server.go:115","msg":"Start listening GRPC Server","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Scheme":"unix","Addr":"/csi/csi-vpc-file.sock"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/config_watcher.go:58","msg":"ConfigWatcher started - start watching for any updates in subnet list","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","configmap name":"ibm-cloud-provider-data","configmap namespace":"kube-system"}
{"level":"info","timestamp":"2024-08-23T10:01:26.974Z","caller":"ibmcsidriver/server.go:159","msg":"Listening GRPC server for connections","name":"ibm-vpc-file-csi-driver","CSIDriverName":"IBM VPC file driver","Addr":{"Name":"/csi/csi-vpc-file.sock","Net":"unix"}}


❯ kubectl logs ibm-vpc-file-csi-node-7vzgh -n kube-system -c liveness-probe
W0823 10:01:25.927435       1 connection.go:183] Still connecting to unix:///csi/csi-vpc-file.sock
I0823 10:01:27.433666       1 main.go:149] calling CSI driver to discover driver name
I0823 10:01:27.727468       1 main.go:155] CSI driver name: "vpc.file.csi.ibm.io"
I0823 10:01:27.727515       1 main.go:183] ServeMux listening at "0.0.0.0:9808"

❯ kubectl logs ibm-vpc-file-csi-node-7vzgh -n kube-system -c csi-driver-registrar
I0823 10:00:59.026663       1 main.go:135] Version: v2.9.3
I0823 10:00:59.026862       1 main.go:136] Running node-driver-registrar in mode=
I0823 10:00:59.026922       1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi-vpc-file.sock"
I0823 10:00:59.027108       1 connection.go:214] Connecting to unix:///csi/csi-vpc-file.sock
W0823 10:01:09.028357       1 connection.go:233] Still connecting to unix:///csi/csi-vpc-file.sock
W0823 10:01:19.027757       1 connection.go:233] Still connecting to unix:///csi/csi-vpc-file.sock
I0823 10:01:27.424403       1 main.go:164] Calling CSI driver to discover driver name
I0823 10:01:27.424464       1 connection.go:243] GRPC call: /csi.v1.Identity/GetPluginInfo
I0823 10:01:27.424475       1 connection.go:244] GRPC request: {}
I0823 10:01:27.429976       1 connection.go:250] GRPC response: {"name":"vpc.file.csi.ibm.io","vendor_version":"vpcFileDriver-"}
I0823 10:01:27.430004       1 connection.go:251] GRPC error: <nil>
I0823 10:01:27.430019       1 main.go:173] CSI driver name: "vpc.file.csi.ibm.io"
I0823 10:01:27.430092       1 node_register.go:55] Starting Registration Server at: /registration/vpc.file.csi.ibm.io-reg.sock
I0823 10:01:27.430422       1 node_register.go:64] Registration Server started at: /registration/vpc.file.csi.ibm.io-reg.sock
I0823 10:01:27.430761       1 node_register.go:88] Skipping HTTP server because endpoint is set to: ""
I0823 10:01:27.827503       1 main.go:90] Received GetInfo call: &InfoRequest{}
I0823 10:01:27.970261       1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
  • Exec into the iks-vpc-file-node-driver container and check CSI socket permissions and user/group
❯ kubectl exec ibm-vpc-file-csi-node-7vzgh -n kube-system -c iks-vpc-file-node-driver -it -- bash
root@test-cr2qts420ernncod12tg-roks415-default-00000479:/# cd /csi/
root@test-cr2qts420ernncod12tg-roks415-default-00000479:/csi# ls
csi-vpc-file.sock
root@test-cr2qts420ernncod12tg-roks415-default-00000479:/csi# ls -la
total 12
drwxr-xr-x. 2 root root 4096 Aug 23 10:01 .
dr-xr-xr-x. 1 root root 4096 Aug 23 10:01 ..
srw-rw----. 1 root 2121    0 Aug 23 10:01 csi-vpc-file.sock
root@test-cr2qts420ernncod12tg-roks415-default-00000479:/csi# id
uid=0(root) gid=0(root) groups=0(root) context=system_u:system_r:spc_t:s0
root@test-cr2qts420ernncod12tg-roks415-default-00000479:/csi#

❯ kubectl exec ibm-vpc-file-csi-controller-5b5466cdf5-mllcv -n kube-system -c iks-vpc-file-driver -it -- bash
groups: cannot find name for group ID 2121
2121@ibm-vpc-file-csi-controller-5b5466cdf5-mllcv:/$ id
uid=2121(2121) gid=2121 groups=2121
2121@ibm-vpc-file-csi-controller-5b5466cdf5-mllcv:/$ cd /csi/
2121@ibm-vpc-file-csi-controller-5b5466cdf5-mllcv:/csi$ ls
csi-vpc-file.sock  providerfile.sock
2121@ibm-vpc-file-csi-controller-5b5466cdf5-mllcv:/csi$ ls -la
total 12
drwxrwxrwx. 2 root root 4096 Aug 23 10:01 .
dr-xr-xr-x. 1 root root 4096 Aug 23 10:01 ..
srwxr-xr-x. 1 2121 2121    0 Aug 23 10:01 csi-vpc-file.sock
srwxr-xr-x. 1 2121 2121    0 Aug 23 10:01 providerfile.sock
2121@ibm-vpc-file-csi-controller-5b5466cdf5-mllcv:/csi$
  • Create VNI PVC and mount
❯ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             VOLUMEATTRIBUTESCLASS   AGE
test   Bound    pvc-30ba9278-17d0-455d-b18a-b90e37918106   20Gi       RWX            ibmc-vpc-file-500-iops   <unset>                 103s

❯ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
eni-app2-56fd9c8fd4-9c8zj   1/1     Running   0          39s
eni-app2-56fd9c8fd4-fnhz5   1/1     Running   0          39s

❯ kubectl exec eni-app2-56fd9c8fd4-9c8zj -it -- bash
root@eni-app2-56fd9c8fd4-9c8zj:/# cd mount-path/
root@eni-app2-56fd9c8fd4-9c8zj:/mount-path# touch test
root@eni-app2-56fd9c8fd4-9c8zj:/mount-path# exit
exit
  • Create EIT PVC and mount
❯ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        VOLUMEATTRIBUTESCLASS   AGE
pvc-eit   Bound    pvc-a9c6a891-0c11-4fba-b358-97a695e91c2c   10Gi       RWX            ibmc-vpc-file-eit   <unset>                 2m24s

❯ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
eit-app-77f9bcc585-6sszq   1/1     Running   0          23s
eit-app-77f9bcc585-p9chd   1/1     Running   0          23s

❯ kubectl exec eit-app-77f9bcc585-6sszq -it -- bash
bash-5.1# cd mount-path-eit/
bash-5.1# touch test
bash-5.1# exit
exit

@prankulmahajan (Member Author) left a comment

Test cases to be verified

  1. IKS and ROKS (rhel and rhcos)
  2. Migration case/existing users

[Edit]: Added test results below

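The socket checks in the test runs below use `ls -la`; the same check can be scripted with GNU coreutils `stat` (the path is the kubelet plugin directory used throughout this PR; the expected output assumes this change is applied):

```shell
# Print mode, owner, and group of the CSI socket (GNU coreutils stat).
# Expected after this change: srw-rw---- root 2121
stat -c '%A %U %G' /var/lib/kubelet/plugins/vpc.file.csi.ibm.io/csi-vpc-file.sock
```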
Review threads on pkg/ibmcsidriver/fileOps.go (2, both resolved)
contsto commented Aug 26, 2024

👍 Coverage increased from [%] to [82.7333%]


prankulmahajan commented Aug 26, 2024

Migration/Existing driver users

  • Setup
# Have csi driver installed using prod image
❯ kubectl get pods -n kube-system -o wide | grep file
ibm-vpc-file-csi-controller-f4ddc58db-fncjh           5/5     Running   0               57s     172.17.112.147   10.240.0.86   <none>           <none>
ibm-vpc-file-csi-controller-f4ddc58db-w4stk           5/5     Running   0               57s     172.17.109.7     10.240.0.85   <none>           <none>
ibm-vpc-file-csi-node-bj2kg                           4/4     Running   0               54s     10.240.0.85      10.240.0.85   <none>           <none>
ibm-vpc-file-csi-node-drtt2                           4/4     Running   0               54s     10.240.0.86      10.240.0.86   <none>           <none>

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  72s   default-scheduler  Successfully assigned kube-system/ibm-vpc-file-csi-controller-f4ddc58db-fncjh to 10.240.0.86
  Normal  Pulling    71s   kubelet            Pulling image "icr.io/ext/sig-storage/csi-provisioner:v3.6.3"
  Normal  Pulled     71s   kubelet            Successfully pulled image "icr.io/ext/sig-storage/csi-provisioner:v3.6.3" in 303ms (303ms including waiting). Image size: 29334944 bytes.
  Normal  Created    71s   kubelet            Created container csi-provisioner
  Normal  Started    70s   kubelet            Started container csi-provisioner
  Normal  Pulling    70s   kubelet            Pulling image "icr.io/ext/sig-storage/csi-resizer:v1.9.3"
  Normal  Pulled     70s   kubelet            Successfully pulled image "icr.io/ext/sig-storage/csi-resizer:v1.9.3" in 361ms (361ms including waiting). Image size: 27614355 bytes.
  Normal  Created    70s   kubelet            Created container csi-resizer
  Normal  Started    70s   kubelet            Started container csi-resizer
  Normal  Pulled     70s   kubelet            Container image "icr.io/ext/sig-storage/livenessprobe:v2.11.0" already present on machine
  Normal  Created    70s   kubelet            Created container liveness-probe
  Normal  Started    70s   kubelet            Started container liveness-probe
  Normal  Pulling    70s   kubelet            Pulling image "icr.io/ibm/ibm-vpc-file-csi-driver:v2.0.6"
  Normal  Pulled     66s   kubelet            Successfully pulled image "icr.io/ibm/ibm-vpc-file-csi-driver:v2.0.6" in 4.163s (4.163s including waiting). Image size: 70650030 bytes.
  Normal  Created    66s   kubelet            Created container iks-vpc-file-driver
  Normal  Started    65s   kubelet            Started container iks-vpc-file-driver
  Normal  Pulling    65s   kubelet            Pulling image "icr.io/obs/armada-storage-secret:v1.2.40"
  Normal  Pulled     65s   kubelet            Successfully pulled image "icr.io/obs/armada-storage-secret:v1.2.40" in 357ms (357ms including waiting). Image size: 68120569 bytes.
  Normal  Created    65s   kubelet            Created container storage-secret-sidecar
  Normal  Started    65s   kubelet            Started container storage-secret-sidecar

# Log in to the node
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io# ls
csi-vpc-file.sock
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io#
  • Delete the driver resources and check whether the CSI socket is still present -- it gets deleted automatically
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/# cd /var/lib/kubelet/plugins/vpc.file.csi.ibm.io/
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io# ls
csi-vpc-file.sock
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io#
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io# ls
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io#
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io#
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io# ls -la
total 8
drwxr-xr-x 2 root root 4096 Aug 26 14:27 .
drwxr-x--- 4 root root 4096 Aug 26 14:21 ..
  • Reapply latest image (create PVC and mount)
❯ kubectl get pvc -w
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS             VOLUMEATTRIBUTESCLASS   AGE
test   Bound     pvc-70431e62-5dd8-493b-ba88-f182d704f36a   20Gi       RWX            ibmc-vpc-file-500-iops   <unset>                 62s

❯ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
eni-app2-794f5f989d-dsc8b   1/1     Running   0          2m7s
eni-app2-794f5f989d-qr287   1/1     Running   0          2m7s

❯ kubectl exec eni-app2-794f5f989d-dsc8b -it -- bash
root@eni-app2-794f5f989d-dsc8b:/# df -h
Filesystem                                        Size  Used Avail Use% Mounted on
overlay                                            98G  5.6G   88G   6% /
tmpfs                                              64M     0   64M   0% /dev
tmpfs                                             7.8G     0  7.8G   0% /sys/fs/cgroup
10.240.0.5:/5e319b4e_aa0a_41c2_815f_6fb5249fb309   21G  384K   20G   1% /mount-path
/dev/vda2                                          98G  5.6G   88G   6% /etc/hosts
shm                                                64M     0   64M   0% /dev/shm
tmpfs                                              13G   16K   13G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                             7.8G     0  7.8G   0% /proc/acpi
tmpfs                                             7.8G     0  7.8G   0% /proc/scsi
tmpfs                                             7.8G     0  7.8G   0% /sys/firmware
root@eni-app2-794f5f989d-dsc8b:/# cd mount-path/
root@eni-app2-794f5f989d-dsc8b:/mount-path# touch test
root@eni-app2-794f5f989d-dsc8b:/mount-path# ls
test
root@eni-app2-794f5f989d-dsc8b:/mount-path#
root@eni-app2-794f5f989d-dsc8b:/mount-path# exit
exit
  • Patch the new image and check existing resources
❯ kubectl get pods -n kube-system -o wide | grep file
ibm-vpc-file-csi-controller-67f9d49458-2wm64          5/5     Running   0               105s    172.17.109.9     10.240.0.85   <none>           <none>
ibm-vpc-file-csi-controller-67f9d49458-fbgwl          5/5     Running   0               64s     172.17.112.149   10.240.0.86   <none>           <none>
ibm-vpc-file-csi-node-5xptf                           4/4     Running   0               104s    10.240.0.85      10.240.0.85   <none>           <none>
ibm-vpc-file-csi-node-vrghn                           4/4     Running   0               63s     10.240.0.86      10.240.0.86   <none>           <none>

Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m11s  default-scheduler  Successfully assigned kube-system/ibm-vpc-file-csi-controller-67f9d49458-2wm64 to 10.240.0.85
  Normal  Pulling    2m10s  kubelet            Pulling image "icr.io/ext/sig-storage/csi-provisioner:v3.6.3"
  Normal  Pulled     2m10s  kubelet            Successfully pulled image "icr.io/ext/sig-storage/csi-provisioner:v3.6.3" in 395ms (395ms including waiting). Image size: 29334944 bytes.
  Normal  Created    2m10s  kubelet            Created container csi-provisioner
  Normal  Started    2m10s  kubelet            Started container csi-provisioner
  Normal  Pulling    2m10s  kubelet            Pulling image "icr.io/ext/sig-storage/csi-resizer:v1.9.3"
  Normal  Pulled     2m9s   kubelet            Successfully pulled image "icr.io/ext/sig-storage/csi-resizer:v1.9.3" in 353ms (353ms including waiting). Image size: 27614355 bytes.
  Normal  Created    2m9s   kubelet            Created container csi-resizer
  Normal  Started    2m9s   kubelet            Started container csi-resizer
  Normal  Pulled     2m9s   kubelet            Container image "icr.io/ext/sig-storage/livenessprobe:v2.11.0" already present on machine
  Normal  Created    2m9s   kubelet            Created container liveness-probe
  Normal  Started    2m9s   kubelet            Started container liveness-probe
  Normal  Pulling    2m9s   kubelet            Pulling image "prankulibm/ibm-vpc-file-csi-driver:nonroot"
  Normal  Pulled     2m2s   kubelet            Successfully pulled image "prankulibm/ibm-vpc-file-csi-driver:nonroot" in 7.271s (7.271s including waiting). Image size: 113064213 bytes.
  Normal  Created    2m1s   kubelet            Created container iks-vpc-file-driver
  Normal  Started    2m1s   kubelet            Started container iks-vpc-file-driver
  Normal  Pulling    2m1s   kubelet            Pulling image "icr.io/obs/armada-storage-secret:v1.2.40"
  Normal  Pulled     2m1s   kubelet            Successfully pulled image "icr.io/obs/armada-storage-secret:v1.2.40" in 337ms (337ms including waiting). Image size: 68120569 bytes.
  Normal  Created    2m1s   kubelet            Created container storage-secret-sidecar
  Normal  Started    2m1s   kubelet            Started container storage-secret-sidecar

# Confirm the permissions and owner now
root@test-cr624av200pnjn76o2i0-iks130-default-0000018a:/var/lib/kubelet/plugins/vpc.file.csi.ibm.io# ls -la
total 8
drwxr-xr-x 2 root root 4096 Aug 26 14:30 .
drwxr-x--- 4 root root 4096 Aug 26 14:21 ..
srw-rw---- 1 root 2121    0 Aug 26 14:30 csi-vpc-file.sock


❯ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             VOLUMEATTRIBUTESCLASS   AGE
test   Bound    pvc-70431e62-5dd8-493b-ba88-f182d704f36a   20Gi       RWX            ibmc-vpc-file-500-iops   <unset>                 10m
❯
❯ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP               NODE          NOMINATED NODE   READINESS GATES
eni-app2-794f5f989d-dsc8b   1/1     Running   0          9m3s   172.17.109.11    10.240.0.85   <none>           <none>
eni-app2-794f5f989d-qr287   1/1     Running   0          9m3s   172.17.112.151   10.240.0.86   <none>           <none>
❯
❯
❯ kubectl exec eni-app2-794f5f989d-dsc8b -it -- bash
root@eni-app2-794f5f989d-dsc8b:/# cd mount-path/
root@eni-app2-794f5f989d-dsc8b:/mount-path# ls
test
root@eni-app2-794f5f989d-dsc8b:/mount-path# touch test2
root@eni-app2-794f5f989d-dsc8b:/mount-path# ls -la
total 8
drwxrwxr-x 2 nobody 4294967294 4096 Aug 26 14:49 .
drwxr-xr-x 1 root   root       4096 Aug 26 14:40 ..
-rw-r--r-- 1 nobody 4294967294    0 Aug 26 14:42 test
-rw-r--r-- 1 nobody 4294967294    0 Aug 26 14:49 test2
root@eni-app2-794f5f989d-dsc8b:/mount-path# exit
exit


prankulmahajan commented Aug 26, 2024

ROKS with Coreos

  • Setup
❯ ibmcloud ks clusters | grep coreos
prankul-coreos              cr65pfq20mpa9sfp8c7g   normal   4 hours ago   2         Dallas     4.15.25_1556_openshift   Default               vpc-gen2

❯ kubectl get nodes
NAME                                                     STATUS   ROLES           AGE     VERSION
test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d   Ready    master,worker   3h40m   v1.28.12+396c881
test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000021f   Ready    master,worker   3h40m   v1.28.12+396c881
  • Apply file resources and check permissions
❯ kubectl get pods -n kube-system -o wide | grep file
ibm-vpc-file-csi-controller-7cdd66cd7c-54dw7                                  5/5     Running   0               20s     172.17.182.250   test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000021f   <none>           <none>
ibm-vpc-file-csi-controller-7cdd66cd7c-t8nv5                                  5/5     Running   0               20s     172.17.165.151   test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d   <none>           <none>
ibm-vpc-file-csi-node-b68d7                                                   4/4     Running   0               17s     10.240.0.97      test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000021f   <none>           <none>
ibm-vpc-file-csi-node-vvsx7                                                   4/4     Running   0               17s     10.240.0.98      test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d   <none>           <none>

❯ oc debug node/test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d
Starting pod/test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-5.1# cd /var/lib/kubelet/plugins/vpc.file.csi.ibm.io
sh-5.1# ls
csi-vpc-file.sock
sh-5.1# ls -la
total 0
drwxr-xr-x. 2 root root 31 Aug 26 14:53 .
drwxr-x---. 4 root root 61 Aug 26 14:52 ..
srw-rw----. 1 root 2121  0 Aug 26 14:53 csi-vpc-file.sock
sh-5.1#
  • Create PVC and mount
❯ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
test   Bound    pvc-e9799714-fb12-40c8-9c03-a15444bb9eb3   20Gi       RWX            ibmc-vpc-file-500-iops   2m7s

❯ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE                                                     NOMINATED NODE   READINESS GATES
eni-app2-56fd9c8fd4-4t72g   1/1     Running   0          21s   172.17.182.241   test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000021f   <none>           <none>
eni-app2-56fd9c8fd4-sg7g8   1/1     Running   0          21s   172.17.165.166   test-cr65pfq20mpa9sfp8c7g-prankulcore-default-0000019d   <none>           <none>

❯ kubectl exec eni-app2-56fd9c8fd4-4t72g -it -- bash
root@eni-app2-56fd9c8fd4-4t72g:/# df -h
Filesystem                                         Size  Used Avail Use% Mounted on
overlay                                            100G   24G   76G  24% /
tmpfs                                               64M     0   64M   0% /dev
shm                                                 64M     0   64M   0% /dev/shm
/dev/vda4                                          100G   24G   76G  24% /etc/hosts
10.240.0.12:/5d92674a_b6b5_4799_936c_4f69587d7a19   21G  384K   20G   1% /mount-path
tmpfs                                               14G   24K   14G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              7.9G     0  7.9G   0% /proc/acpi
tmpfs                                              7.9G     0  7.9G   0% /proc/scsi
tmpfs                                              7.9G     0  7.9G   0% /sys/firmware
root@eni-app2-56fd9c8fd4-4t72g:/# cd mount-path/
root@eni-app2-56fd9c8fd4-4t72g:/mount-path# touch test
root@eni-app2-56fd9c8fd4-4t72g:/mount-path# ls
test
root@eni-app2-56fd9c8fd4-4t72g:/mount-path# exit
exit


contsto commented Aug 27, 2024

👍 Coverage increased from [%] to [82.3222%]

@sameshai (Member) left a comment

lgtm

@prankulmahajan prankulmahajan merged commit 993391c into master Aug 27, 2024
3 checks passed
@arahamad arahamad deleted the add_permissions branch October 15, 2024 12:35
6 participants