
Document a guide to management cluster security #4139

Closed
enxebre opened this issue Feb 3, 2021 · 13 comments
Assignees: randomvariable
Labels: kind/bug · kind/documentation · lifecycle/rotten

Comments

@enxebre (Member) commented Feb 3, 2021

What steps did you take and what happened:
In v1alpha4 the recommended model for multi-tenancy is a single controller per provider (#4074).

With this model, scalable resources (and therefore Machines) for different clusters may live in the same namespace. In such a scenario, the autoscaler would watch nodes only for its target cluster, but would watch scalable resources belonging to other clusters in that namespace as well.
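
To make the caveat concrete, here is a hypothetical layout (all names invented; the min/max annotation keys follow the cluster-autoscaler clusterapi provider's convention, which has varied across versions). Two MachineDeployments belonging to different clusters share a namespace, so an autoscaler scanning that namespace for annotated scalable resources would discover both, even if it only watches the nodes of cluster-a. Fields are trimmed for brevity; a real MachineDeployment also needs a selector and template.

```yaml
# Hypothetical sketch: scalable resources for two different workload
# clusters living side by side in one management-cluster namespace.
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: cluster-a-md-0
  namespace: shared-ns
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  clusterName: cluster-a
---
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: cluster-b-md-0
  namespace: shared-ns
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  clusterName: cluster-b
```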

What did you expect to happen:
We should re-evaluate the recommended multi-tenancy model and document any caveats and the recommended namespace topology when using autoscaling.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api version:
  • Minikube/KIND version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]

@enxebre (Member, Author) commented Feb 3, 2021

cc @randomvariable @fabriziopandini @JoelSpeed @elmiko

@vincepri (Member) commented Feb 3, 2021

/kind documentation

The issue description seems to indicate that an approach needs to be documented.

Is the main issue around credential management?

@k8s-ci-robot added the kind/documentation label on Feb 3, 2021
@randomvariable (Member) commented Feb 3, 2021

I'm hoping it's mostly that we write some documentation explaining that namespaces are your security boundary in the management cluster, and that this boundary is inherited by any consumer. I'll be meeting with @enxebre and @elmiko on Friday to get a better understanding, as I'm admittedly not up to speed on how the autoscaler integration was done.
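
As a rough sketch of what that boundary looks like in practice, assuming per-tenant namespaces (names and the resource list here are illustrative, not exhaustive): a namespace-scoped Role confines a tenant to the Cluster API objects in their own namespace, while anything cross-namespace would need a ClusterRole instead.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capi-tenant          # hypothetical tenant role
  namespace: tenant-a
rules:
  - apiGroups: ["cluster.x-k8s.io"]
    resources: ["clusters", "machinedeployments", "machinesets", "machines"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capi-tenant
  namespace: tenant-a
subjects:
  - kind: User
    name: tenant-a-admin     # hypothetical tenant identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: capi-tenant
  apiGroup: rbac.authorization.k8s.io
```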

@fabriziopandini (Member)

Happy to join the meeting if possible.

@JoelSpeed (Contributor)

I think @detiber might want to weigh in on this conversation as well. IIUC, the autodiscovery logic that he added to the autoscaler should cater to this, at least partially?
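
For reference, a minimal sketch of how that scoping might look (container args only; flag syntax follows the clusterapi provider's auto-discovery support, so exact keys depend on the autoscaler version): auto-discovery can restrict the autoscaler to the scalable resources of a single cluster within a namespace.

```yaml
# Fragment of a hypothetical cluster-autoscaler container spec:
# discover node groups only for cluster-a in the shared namespace.
command:
  - /cluster-autoscaler
args:
  - --cloud-provider=clusterapi
  - --node-group-auto-discovery=clusterapi:namespace=shared-ns,clusterName=cluster-a
```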

@elmiko (Contributor) commented Feb 4, 2021

i have a feeling the conversation this friday will be very high level, but i'm glad you brought up the autodiscovery stuff @JoelSpeed. i think you are correct that it could help for the multi-tenancy issue.

@vincepri (Member) commented Feb 4, 2021

/milestone Next

@k8s-ci-robot added this to the Next milestone on Feb 4, 2021
@randomvariable (Member)

/retitle Document a guide to management cluster security

Follow up from today's call:

@randomvariable to write a doc on management cluster security as a whole: what the security boundaries are, and what it means to deploy multiple clusters in a single namespace.

@elmiko to contribute includes for Cluster Autoscaler docs in the CAPI book.

/assign

@k8s-ci-robot changed the title from "Multitenancy and Autoscaling" to "Document a guide to management cluster security" on Feb 5, 2021
@elmiko (Contributor) commented Feb 8, 2021

added #4153

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 9, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 8, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
