Need ability to exclude namespaces #2799

Closed

MarkDeckert opened this issue May 7, 2019 · 3 comments


MarkDeckert commented May 7, 2019

Feature Request

An "exclude namespaces" feature

What problem are you trying to solve?

Many enterprises use larger multi-tenant clusters. Currently, linkerd2 requires several cluster-level permissions, which allow anyone with access to linkerd to view pods in namespaces they may not otherwise have access to. In a large enough cluster there may also be multiple linkerd2 control planes, managed by different teams for their specific needs, that should each have access to a different set of namespaces.

How should the problem be solved?

There should be an option similar to Istio's "--exclude-namespaces" arg at install time (or perhaps the opposite: an "--include-namespaces" arg). This would alter the ClusterRoleBindings as necessary so that only validatingwebhookconfigurations and tokenreviews remain bound at the cluster level for their respective service accounts. All other namespace-level objects would then be bound using RoleBindings in the non-excluded namespaces, as in the sketch below.
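To illustrate the idea, here is a rough Go sketch (not linkerd code) of what such an install step might generate per included namespace. It assumes a recent client-go API, and the ClusterRole/ServiceAccount names are guesses based on a stable-2.3-era install:

package install

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindControllerToNamespaces creates one RoleBinding per included namespace,
// pointing at the existing controller ClusterRole. Binding a ClusterRole via
// a RoleBinding grants its rules only within that namespace.
func bindControllerToNamespaces(ctx context.Context, client kubernetes.Interface, namespaces []string) error {
	for _, ns := range namespaces {
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "linkerd-controller", Namespace: ns},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "ClusterRole",                // reuse the existing rules, scoped by the binding
				Name:     "linkerd-linkerd-controller", // assumed name; check your install
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "linkerd-controller", // assumed controller service account
				Namespace: "linkerd",
			}},
		}
		if _, err := client.RbacV1().RoleBindings(ns).Create(ctx, rb, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}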

There are a couple of problems with this approach.

A minor one is clutter: it would create a RoleBinding in every namespace that wasn't excluded, so careful use of the flag at install time would be needed to keep that to a minimum.

A larger issue (though perhaps more easily solved) is that the linkerd-controller pod crashes when it doesn't have cluster-level access to pods (see Alternatives below). Hopefully this is just a validation check at startup that could be fixed by checking only the control plane's own namespace.

Any alternatives you've considered?

I've attempted to replicate the end result by modifying the ClusterRoleBindings into RoleBindings, but unfortunately both the destination container of the linkerd-controller pod and the sp-validator container of the linkerd-sp-validator pod go into a crash loop when they lack the cluster-level permission to list pods.

kubectl logs -n linkerd linkerd-controller-59685d998d-7zpmf -c destination
time="2019-05-07T02:09:20Z" level=info msg="running version stable-2.3.0"
time="2019-05-07T02:09:20Z" level=fatal msg="Failed to initialize K8s API: not authorized to access pods"

kubectl logs -n linkerd linkerd-sp-validator-76dd654cc-58xsg -c sp-validator
time="2019-05-07T02:44:03Z" level=info msg="running version stable-2.3.0"
time="2019-05-07T02:44:03Z" level=fatal msg="failed to initialize Kubernetes API: not authorized to access pods"

How would users interact with this feature?

linkerd install --exclude-namespaces "comma-separated list of namespaces to exclude"
OR
linkerd install --include-namespaces "comma-separated list of namespaces to include"

Additionally, perhaps there could be a command to install RoleBindings into further namespaces later on an ad hoc basis, although that isn't really necessary for anyone who knows how to use k8s.

grampelberg added the rfc label on May 7, 2019
@grampelberg (Contributor)

At the moment we're going a slightly different route for this one. The first steps are in #2725: basically, standard k8s RBAC will become usable to provide true multi-tenant dashboards and API access.

There's been some work on allowing multiple installs of control planes as well (and on securing them). The multi-stage install work has been moving in that direction.

Do both of those solutions solve your specific problem? --include-namespaces feels like a solid interim step; it just doesn't feel like the right end solution.

@MarkDeckert (Author)

Thanks for the quick reply. It looks like #2725 would actually be the superior solution.

In the meantime, if --include-namespaces feels too heavy-handed to you, it seems that just removing or relaxing the cluster-level checks in https://github.com/linkerd/linkerd2/blob/master/pkg/k8s/authz.go would allow a manual RoleBinding-based workaround. Thoughts?
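As a minimal sketch of what a namespace-scoped version of that startup check could look like (assuming the check is a SelfSubjectAccessReview, as client-go exposes it with a recent API; the function name here is hypothetical, not linkerd's):

package authz

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canListPodsIn asks the API server whether the current service account may
// list pods in a single namespace, rather than requiring cluster-wide access.
func canListPodsIn(ctx context.Context, client kubernetes.Interface, namespace string) error {
	ssar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: namespace, // "" would mean "in all namespaces"
				Verb:      "list",
				Resource:  "pods",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, ssar, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	if !resp.Status.Allowed {
		return fmt.Errorf("not authorized to list pods in %q", namespace)
	}
	return nil
}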

I don't actually see any reason why multiple linkerd2 control planes couldn't be installed even today, except that users of each would be able to see the other (which the RBAC solution would address). One possible issue I can think of is the autoinject option; a solution could be to give the namespace annotation a customizable value unique to each control plane, along the lines of the sketch below.
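As an entirely hypothetical illustration of that idea in Go: each control plane's inject webhook could be given an ID and only act on namespaces whose annotation value names it. The "cp=<id>" value format is invented here for illustration and is not something linkerd2 supports today:

package inject

import "strings"

// injectAnnotation is linkerd's real annotation key; the "cp=<id>" suffix
// handled below is purely hypothetical.
const injectAnnotation = "linkerd.io/inject"

// shouldInject decides whether this control plane's webhook should inject
// into a namespace, based on a value such as "enabled;cp=team-a".
func shouldInject(nsAnnotations map[string]string, controlPlaneID string) bool {
	v, ok := nsAnnotations[injectAnnotation]
	if !ok {
		return false
	}
	parts := strings.Split(v, ";")
	if parts[0] != "enabled" {
		return false
	}
	for _, p := range parts[1:] {
		if strings.HasPrefix(p, "cp=") {
			return strings.TrimPrefix(p, "cp=") == controlPlaneID
		}
	}
	// No control-plane id given: keep today's single-control-plane behavior.
	return true
}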

@grampelberg (Contributor)

I'm going to close this out. Because of how k8s handles things such as APIServices and admission controllers, it is extremely difficult to run multiple control planes on a single cluster. For now, we'll be doing a single control plane per cluster and locking down the tenancy model via k8s RBAC.

github-actions bot locked as resolved and limited conversation to collaborators on Jul 17, 2021