
Make GC work also with RBAC #190

Closed
wants to merge 1 commit into from

Conversation

@mkmik (Contributor) commented Feb 16, 2018

Currently the GC does a pass through all the resource types
and performs a single cluster-scoped list of all the objects.

This kind of request is forbidden if the user does not have the rights
to list every object of that type in all namespaces.
When RBAC is configured, the user is likely to have at least one
namespace + resource type combination that they cannot list.
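
For illustration, here is a minimal sketch of the two listing styles using client-go's dynamic client; the kubeconfig plumbing, the GroupVersionResource, and the namespace names are placeholders, not this PR's code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (placeholder plumbing).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}

	// Today's behavior: one cluster-scoped list per resource type. This
	// requires permission to list the resource across *all* namespaces.
	if all, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{}); err == nil {
		fmt.Printf("cluster-scoped: %d objects\n", len(all.Items))
	}

	// Per-namespace alternative: one RPC per namespace, but it only needs
	// list permission inside each namespace the user can actually read.
	for _, ns := range []string{"team-a", "team-b"} { // placeholder namespaces
		objs, err := dyn.Resource(gvr).Namespace(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			continue // e.g. Forbidden in a namespace we cannot list
		}
		fmt.Printf("%s: %d objects\n", ns, len(objs.Items))
	}
}
```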

I tested this on a cluster with 4k objects in 12 namespaces,
and it was 2x slower on my slow ADSL (2m vs 1m before this patch).
That's bearable.

But another cluster I have to use has 67 namespaces, and there it took
9m. That's tough.
Parallelizing by namespace makes no significant difference.

Hence this change makes the new behavior optional, driven by flags:

1. --gc-list-mode: governs whether to use the current "cluster-scope"
   mode or the slower "per-namespace" mode.
2. --gc-ns-selector: allows users to only scan namespaces that match a
   given label selector. This is usually going to be some kind of team
   or project label that all namespaces associated with that user would
   have (see the sketch after this list).
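
As an illustration of what --gc-ns-selector implies on the client side, here is a minimal sketch assuming a plain client-go clientset; the helper name and the "team=payments" example selector are made up, not this PR's code:

```go
package gc

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNamespacesMatching returns the names of all namespaces whose labels
// match the given selector, e.g. "team=payments". Hypothetical helper,
// shown only to illustrate the flag's semantics.
func listNamespacesMatching(ctx context.Context, cs kubernetes.Interface, selector string) ([]string, error) {
	nsList, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(nsList.Items))
	for _, ns := range nsList.Items {
		names = append(names, ns.Name)
	}
	return names, nil
}
```

The per-namespace mode would then iterate only over the returned names instead of every namespace in the cluster.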

Unfortunately we cannot assume kubecfg can set the --gc-tag on the
namespace objects it operates in, since those might not be writable by
users who otherwise have full control over all the other resources in
them.

Closes #189

(also removes some dead code)

This PR doesn't add any tests. The existing code path for listing all resources for GC doesn't seem to have many tests either, so I'll shave this yak asynchronously.

@anguslees (Contributor)

FYI, I added some tests for garbage collection just recently in #193.

@anguslees (Contributor) commented Mar 7, 2018

I like the change generally. I don't like adding new flags to force onto the user what could be determined automatically.

I think we should either just always do per-namespace listing, or (better) try global and fall back to per-namespace if we get a permission error. (separately for each resource type - if they all fail, then it's only one extra RPC per resource type)
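
As a hedged sketch of that fallback idea (the function and its plumbing are illustrative, not code from this repo; apierrors.IsForbidden is from apimachinery):

```go
package gc

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// listForGC tries one cluster-scoped list first and, only if that is
// forbidden, falls back to one list per candidate namespace.
func listForGC(ctx context.Context, ri dynamic.NamespaceableResourceInterface, namespaces []string) ([]unstructured.Unstructured, error) {
	all, err := ri.List(ctx, metav1.ListOptions{})
	if err == nil {
		return all.Items, nil
	}
	if !apierrors.IsForbidden(err) {
		return nil, err // a real failure, not an RBAC restriction
	}
	// Fallback: one list per namespace, skipping namespaces the user is
	// not allowed to read.
	var out []unstructured.Unstructured
	for _, ns := range namespaces {
		objs, err := ri.Namespace(ns).List(ctx, metav1.ListOptions{})
		if apierrors.IsForbidden(err) {
			continue
		}
		if err != nil {
			return nil, err
		}
		out = append(out, objs.Items...)
	}
	return out, nil
}
```

With this shape, users with cluster-wide rights pay nothing extra, and restricted users pay the per-namespace RPCs only for the resource types that actually hit the Forbidden error.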

@mkmik (Contributor, Author) commented Mar 9, 2018

> I don't like adding new flags to force onto the user what could be determined automatically.

Yeah, it makes sense.

@anguslees anguslees self-requested a review March 21, 2018 23:30
@aruiz14 aruiz14 closed this May 13, 2021
@aruiz14 aruiz14 deleted the branch vmware-archive:master May 13, 2021 08:41