
Add ServiceAccount to csi-nfs-node DaemonSet #334

Closed
1 change: 1 addition & 0 deletions deploy/csi-nfs-node.yaml
@@ -19,6 +19,7 @@ spec:
spec:
hostNetwork: true # original nfs connection would be broken without hostNetwork setting
dnsPolicy: Default # available values: Default, ClusterFirstWithHostNet, ClusterFirst
serviceAccountName: csi-nfs-controller-sa
Member:

I don't think the driver on the node needs any access. What's the blocking issue now?

Author:


Without the ServiceAccount, OpenShift refuses to create the DaemonSet/csi-nfs-node Pods. My complete installation notes and the error message are:

oc new-project nfs-csi
oc adm policy add-scc-to-user privileged system:serviceaccount:nfs-csi:csi-nfs-controller-sa

sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/rbac-csi-nfs-controller.yaml
sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/csi-nfs-node.yaml
sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/csi-nfs-controller.yaml

# ADD 'spec.template.spec.serviceAccountName: csi-nfs-controller-sa'
vi deploy/csi-nfs-node.yaml
./deploy/install-driver.sh v4.0.0 local

# ADD 'parameters:...'
vi deploy/example/storageclass-nfs.yaml
oc create -f deploy/example/storageclass-nfs.yaml
LAST SEEN   TYPE      REASON              OBJECT                                     MESSAGE
35s         Warning   FailedCreate        daemonset/csi-nfs-node                     Error creating: pods "csi-nfs-node-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[0].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used, spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[1].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used, spec.containers[2].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[2].securityContext.capabilities.add: Invalid value: "SYS_ADMIN": capability may not be added, spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[2].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

Member:


ok, maybe we need an empty serviceAccount for node daemonset
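A minimal "empty" ServiceAccount for the node DaemonSet might look like the sketch below. This is only an illustration: the name csi-nfs-node-sa and the nfs-csi namespace are assumptions here (the namespace matches the install notes above), not something this comment specifies.

```yaml
# Sketch only: a ServiceAccount with no extra RBAC bound to it,
# so the node pods get an identity without the controller's privileges.
# Name and namespace are illustrative assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-nfs-node-sa
  namespace: nfs-csi
```

The DaemonSet pod template would then set `serviceAccountName: csi-nfs-node-sa` instead of reusing the controller's csi-nfs-controller-sa account.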

Member:


btw, what's the k8s version you are running? is this only required on OpenShift?

Author:


I'm using OpenShift 4.10, which is Kubernetes v1.23.5+9ce5071.

I should have been clearer and mentioned that after "add-scc-to-user" and adding the ServiceAccount to the DaemonSet, everything works wonderfully! Thank you! I even created a StorageClass and am able to dynamically provision directories (PVs) on my external NFS server (a RHEL8 host)!

Member:


I have added an empty serviceAccount in PR #335; could you verify that it works on OpenShift? Using csi-nfs-controller-sa on the driver daemonset grants too much privilege.

Member:


Addressed by PR #335; could you check whether the master branch works well on OpenShift?

Author (@johnsimcall), May 12, 2022:


@andyzhangx I was able to test the updated master branch. I found that OpenShift's default restricted SCC that gets applied to the new csi-nfs-node-sa ServiceAccount still prevents the DaemonSet pods from running.

I see that both the nfs containers in the controller Deployment and the DaemonSet pods ask for very generous securityContext options. You said that using csi-nfs-controller-sa on the "driver daemonset is giving too much privilege," but addressing that would require reducing the privileges those pods request.

Thank you for looking at this!

In the meantime, I created a custom SCC, ClusterRole, and ClusterRoleBinding like this:

---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: csi-nfs-scc
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: true
allowPrivilegeEscalation: true
allowedCapabilities:
  - SYS_ADMIN
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:csi-nfs-scc
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - csi-nfs-scc
  resources:
  - securitycontextconstraints
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:openshift:scc:csi-nfs-scc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:csi-nfs-scc
subjects:
- kind: ServiceAccount
  name: csi-nfs-node-sa
  namespace: nfs-csi
- kind: ServiceAccount
  name: csi-nfs-controller-sa
  namespace: nfs-csi
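For reference, manifests like the above can be applied and checked with standard oc commands. This is a hedged sketch, assuming the three documents are saved together as csi-nfs-scc.yaml (the filename is illustrative) and the driver runs in the nfs-csi namespace from the notes above.

```shell
# Apply the custom SCC, ClusterRole, and ClusterRoleBinding
# (csi-nfs-scc.yaml is an assumed filename holding the manifests above)
oc apply -f csi-nfs-scc.yaml

# Confirm the SCC was created
oc get scc csi-nfs-scc

# Recreate the DaemonSet pods so they are re-admitted under the new SCC
oc -n nfs-csi rollout restart daemonset/csi-nfs-node
```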

Member:


So what is the working set of reduced securityContext options?

Author:


I haven't tried to reduce the securityContext options in the Deployment or DaemonSet pods yet. I will need to examine the container images when I get some more time...

      nodeSelector:
        kubernetes.io/os: linux
      tolerations: