
User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default" #3130

Closed
noprom opened this issue Nov 12, 2017 · 30 comments

Comments

@noprom

noprom commented Nov 12, 2017

When installing a Helm package, I got the following error:

[root@k8s-master3 ~]# helm install --name nginx stable/nginx-ingress
Error: release nginx failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Here is my helm version:

[root@k8s-master3 ~]# helm version
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}

And my kubectl version:

[root@k8s-master3 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.1-alicloud", GitCommit:"19408ab2a1b736fe97a9d9cf24c6fb228f23f12f", GitTreeState:"clean", BuildDate:"2017-10-19T04:05:24Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Any help will be appreciated, thanks a lot!

@flyer103
Contributor

It seems you have run into a permissions problem.
You could enable RBAC when deploying the chart:

$ helm install --name nginx --set rbac.create=true stable/nginx-ingress

@noprom
Author

noprom commented Nov 13, 2017

@flyer103

It still does not work. (screenshot attached)

@owetterau

Same problem here. Enabling rbac does not help.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-10T13:17:12Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ helm install --name my-hdfs-namenode hdfs-namenode-k8s
Error: release my-hdfs-namenode failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Help would really be appreciated!

@bacongobbler
Member

What you need to do is grant tiller (via the default service account) access to install resources in the default namespace. See https://github.com/kubernetes/helm/blob/master/docs/service_accounts.md
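
For reference, a minimal sketch of what that document describes, assuming Tiller runs in kube-system and you are comfortable granting it cluster-admin (the binding name here is illustrative):

# Create a dedicated ServiceAccount for Tiller, bind it to cluster-admin,
# then (re)initialize Tiller so its deployment uses that account.
kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade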

@noprom
Author

noprom commented Nov 14, 2017

Hi, @bacongobbler
Thanks for the help. I followed the instructions above and did the following things.
First of all, I reset Tiller:

helm reset --force

After doing this, I created an RBAC YAML file:

[root@k8s-master3 ~]# cat rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: default

And then initialized Tiller:

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.7.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

However, Tiller was not installed successfully:

[root@k8s-master3 ~]# helm version
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Error: cannot connect to Tiller

And the deployments in the kube-system namespace look like this:

[root@k8s-master3 ~]# kubectl get deployments --all-namespaces
NAMESPACE     NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ci            jenkins                    1         1         1            1           5d
default       redis-master               1         1         1            0           4d
kube-system   default-http-backend       1         1         1            1           5d
kube-system   heapster                   1         1         1            1           5d
kube-system   kube-dns                   1         1         1            1           5d
kube-system   kubernetes-dashboard       1         1         1            1           5d
kube-system   monitoring-influxdb        1         1         1            1           5d
kube-system   nginx-ingress-controller   1         1         1            1           5d
kube-system   tiller-deploy              1         0         0            0           9m

Any ideas about how to solve this problem?
Thanks in advance!
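
For a deployment stuck at 0 available replicas like this, a sketch of commands that usually surface the reason (the label selector assumes Tiller's standard app=helm,name=tiller labels):

kubectl -n kube-system describe deployment tiller-deploy            # deployment conditions
kubectl -n kube-system describe replicaset -l app=helm,name=tiller  # pod-creation errors show up here
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp | tail -n 20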

@innovia

innovia commented Nov 19, 2017

@noprom try this:

Delete the Tiller deployment manually.

Create this RBAC config for Tiller:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Run delete (yes, delete) on that RBAC config, run create again, and then run helm init --upgrade to replace the deployment. Note that this ServiceAccount lives in kube-system (the namespace helm init deploys Tiller into) rather than default, which is likely why the earlier attempt's tiller-deploy never came up.

You should not have any more errors.
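
A sketch of those steps in command form, assuming the YAML above is saved as rbac-config.yaml (--service-account is added here so the replacement deployment actually uses the new account):

kubectl -n kube-system delete deployment tiller-deploy   # remove the broken Tiller deployment
kubectl delete -f rbac-config.yaml --ignore-not-found    # clear any stale RBAC objects
kubectl create -f rbac-config.yaml                       # ServiceAccount + ClusterRoleBinding in kube-system
helm init --upgrade --service-account tiller             # redeploy Tiller with the new account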

@noprom
Author

noprom commented Nov 21, 2017

@innovia
Great! Thanks, I've solved this problem.
Thanks a lot!

@noprom noprom closed this as completed Nov 21, 2017
@innovia

innovia commented Nov 21, 2017

Happy to help :)

@innovia

innovia commented Nov 21, 2017

@noprom please check my post on how to set up Helm and Tiller with RBAC per namespace

@noprom
Author

noprom commented Nov 21, 2017

@innovia
Fantastic post!😄

@innovia

innovia commented Nov 21, 2017

Thanks!

@mfojtak

mfojtak commented Nov 23, 2017

The above doesn't work. I'm still getting:

namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

@ykfq

ykfq commented Mar 14, 2018

That's because Tiller is running with a service account that doesn't have the needed permissions; add a dedicated account for it:

kubectl --namespace kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller-cluster-rule \
 --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl --namespace kube-system patch deploy tiller-deploy \
 -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 

Console output:

serviceaccount "tiller" created
clusterrolebinding "tiller-cluster-rule" created
deployment "tiller-deploy" patched

Then run the commands below to check it:

helm list
helm repo update
helm install --name nginx-ingress stable/nginx-ingress

@antran89

antran89 commented May 30, 2018

@ykfq Thanks a ton, it works! But do we need to do this every time we deploy on a new cluster? What an inconvenience!

@ykfq

ykfq commented Jun 1, 2018

@antran89
If you use the official Tiller installation instructions, you'll have to do so:

  • Create a ServiceAccount for Tiller
  • Bind a role to the ServiceAccount created above (the cluster-admin role is needed)
  • Make a ClusterRoleBinding for the ServiceAccount
  • Patch the deployment created by helm init

So, there is another way to make it easier: install via a YAML file:

vim tiller.yaml

apiVersion: v1
kind: Service
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
spec:
  ports:
  - name: tiller
    port: 44134
    protocol: TCP
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
  annotations:
    deployment.kubernetes.io/revision: "5"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.8.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: tiller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-cluster-rule
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Then create the resources:

kubectl create -f tiller.yaml

Make sure to check your service.

The above YAML content was exported from a running cluster using these commands:

kubectl -n kube-system get svc tiller-deploy -o=yaml
kubectl -n kube-system get deploy tiller-deploy -o=yaml
kubectl -n kube-system get sa tiller -o=yaml
kubectl -n kube-system get clusterrolebinding tiller-cluster-rule -o=yaml

This YAML hasn't been tested yet; if you have any questions, leave a comment.
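
A quick, illustrative way to verify Tiller came up after applying the file:

kubectl -n kube-system rollout status deployment/tiller-deploy
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version    # should report both Client and Server versions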

@brunoban

brunoban commented Jun 4, 2018

@ykfq I don't like the idea of giving Tiller full cluster-admin privileges, but nothing else worked for me. I tried following this example, trying to restrict Tiller to acting only in the namespaces I allow.

But I always ran into this issue (I was deploying Concourse):

Error: release concourse failed: namespaces "concourse" is forbidden: User "system:serviceaccount:tiller-system:tiller-user" cannot get namespaces in the namespace "concourse": Unknown user "system:serviceaccount:tiller-system:tiller-user"

Any ideas on how to make that specific example work? I changed some parameters around; the entire YAML with the RBAC objects was this:

apiVersion: v1
kind: Namespace
metadata:
  name: tiller-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller-user
  namespace: tiller-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-manager
  namespace: tiller-system
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-binding
  namespace: tiller-system
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
  name: concourse
---
apiVersion: v1
kind: Namespace
metadata:
  name: concourse-main
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-role
  namespace: concourse
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-namespace-role
  namespace: concourse
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["namespaces"]
  verbs: ["*"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-main-role
  namespace: concourse-main
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-main-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-main-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-concourse-namespace-role
subjects:
- kind: ServiceAccount
  name: tiller-user
  namespace: tiller-system
roleRef:
  kind: Role
  name: tiller-concourse-namespace-role
  apiGroup: rbac.authorization.k8s.io

@banyaszb

banyaszb commented Jun 7, 2018

helm init --upgrade --service-account tiller

@innovia

innovia commented Jun 7, 2018

@brunoban Helm v3 will remove Tiller, so from what I understand, permissions will be determined by the user who applies the chart.
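
For context, a Helm 3 install of the same chart runs with whatever permissions your own kubeconfig user already has; no Tiller or in-cluster ServiceAccount is involved (illustrative):

# Helm 3: the chart is rendered client-side and applied with your kubeconfig credentials.
helm install nginx stable/nginx-ingress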

@brunoban

brunoban commented Jun 7, 2018

@innovia Oh... I did not know that. Gonna try to get up to speed now then. Thanks!

@cjbottaro

cjbottaro commented Jul 7, 2018

then run helm init --upgrade to replace

@innovia Where do I put the RBAC config file?

@innovia

innovia commented Jul 8, 2018

@cjbottaro did you read the post I wrote, How to set up Helm and Tiller per namespace?

I don't follow your question, can you please re-explain?

@cjbottaro

@innovia Never mind, I figured it out. I just had to run:

kubectl create -f tiller.yaml
helm init --upgrade --service-account tiller

@qiangli

qiangli commented Jul 25, 2018

This worked for me:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

@RehanSaeed

RehanSaeed commented Nov 15, 2018

I'm following the official Helm documentation for "Deploy Tiller in a namespace, restricted to deploying resources only in that namespace". Here is my bash script:

Namespace="$1"

kubectl create namespace $Namespace
kubectl create serviceaccount "tiller-$Namespace" --namespace $Namespace
kubectl create role "tiller-role-$Namespace" \
    --namespace $Namespace \
    --verb=* \
    --resource=*.,*.apps,*.batch,*.extensions
kubectl create rolebinding "tiller-rolebinding-$Namespace" \
    --namespace $Namespace \
    --role="tiller-role-$Namespace" \
    --serviceaccount="$Namespace:tiller-$Namespace"

Running helm upgrade gives me the following error:

Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

Is there a bug in the official documentation? Have I read it wrong?

@bacongobbler
Member

bacongobbler commented Nov 15, 2018

What was the full command for helm init? Can you please open a separate ticket for this?
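
(For reference, that section of the docs also initializes Tiller against the target namespace and points the client at it; roughly, with names mirroring the script above and an illustrative release/chart:)

helm init --service-account "tiller-$Namespace" --tiller-namespace "$Namespace"
helm upgrade --tiller-namespace "$Namespace" my-release ./my-chart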

@RehanSaeed

@bacongobbler Moved issue here #4933

@devops-team-92

The above doesn't work. Still getting:

namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

Follow the command below:

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

@HYChou0515

What you need to do is grant tiller (via the default service account) access to install resources in the default namespace. See https://github.com/kubernetes/helm/blob/master/docs/service_accounts.md

The file name is now rbac.md and the link is at https://github.com/helm/helm/blob/master/docs/rbac.md.

@ishankhare07

That's because Tiller is running with a service account that doesn't have the needed permissions; add a dedicated account for it:

kubectl --namespace kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller-cluster-rule \
 --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl --namespace kube-system patch deploy tiller-deploy \
 -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 

Console output:

serviceaccount "tiller" created
clusterrolebinding "tiller-cluster-rule" created
deployment "tiller-deploy" patched

Then run the commands below to check it:

helm list
helm repo update
helm install --name nginx-ingress stable/nginx-ingress

It would be great if the Tiller installation docs were updated with these precise instructions.
I had the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: ""
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

If I'm correct, I was missing the Tiller deployment in this YAML?

@CloudA2Z-Code

helm init --upgrade --service-account tiller

The above command fixes this issue; I highly recommend trying this step first :)
