
list_namespaced_custom_object() enforces namespace, does not allow listing across namespaces #1750

Closed
f4z3r opened this issue Mar 16, 2022 · 13 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@f4z3r commented Mar 16, 2022

What is the feature and why do you need it:

When listing namespaced custom resources via list_namespaced_custom_object, a namespace must be provided. However, the Kubernetes API also provides an endpoint to list such resources across all namespaces, which is useful for performing label selection across the entire cluster.

See https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#list_namespaced_custom_object

Describe the solution you'd like to see:

Make the namespace argument to the function optional, as in all other functions that perform "list" operations on the cluster. When it is not provided, the call should go to /apis/<domain>/<version>/<crd> instead of /apis/<domain>/<version>/namespaces/<ns>/<crd>.
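
For illustration, a minimal sketch of the requested behavior (group/version/plural are placeholders, and the namespace-less call is the proposed feature, not something the client supports today):

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Current behavior: namespace is required, so only one namespace is listed.
api.list_namespaced_custom_object(
    group="example.com", version="v1", plural="widgets", namespace="default")

# Proposed (hypothetical): omitting namespace would call
# /apis/example.com/v1/widgets and list across all namespaces.
api.list_namespaced_custom_object(
    group="example.com", version="v1", plural="widgets")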

@f4z3r f4z3r added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 16, 2022
@roycaihw (Member)

kubectl supports listing across all namespaces, but I'm not sure whether it's implemented client-side or server-side. Could you check which API kubectl calls by running kubectl get {custom_resource} -A -v=7?

@f4z3r (Author) commented Mar 29, 2022

It is implemented server-side. Namespaced custom resources offer the /apis/<domain>/<version>/<crd> endpoint. See below (with CrunchyData's PostgresCluster as the CRD):

$ kubectl get postgresclusters -v7 -A
I0329 09:03:28.860330   52910 loader.go:375] Config loaded from file:  /home/jakob/.kube/config
I0329 09:03:28.889958   52910 round_trippers.go:420] GET https://127.0.0.1:16445/apis/postgres-operator.crunchydata.com/v1beta1/postgresclusters?limit=500
I0329 09:03:28.890074   52910 round_trippers.go:427] Request Headers:
I0329 09:03:28.890083   52910 round_trippers.go:431]     Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
I0329 09:03:28.890089   52910 round_trippers.go:431]     User-Agent: kubectl/v1.17.13 (linux/amd64) kubernetes/30d651d
I0329 09:03:28.899887   52910 round_trippers.go:446] Response Status: 200 OK in 9 milliseconds
<redacted response>

@roycaihw (Member)

Thanks @f4z3r! I agree we need a method for this API. This can be fixed in the OpenAPI spec we use to generate the custom_object API.

Would a list_custom_object_for_all_namespaces solve your needs? That naming seems more consistent with other APIs we have.

On a side note, we already have a method that calls that API (list_cluster_custom_object), but the naming can be confusing.

@f4z3r (Author) commented Mar 30, 2022

Yes, such a method would be fine. I guess it depends on whether you want to make a distinction between namespaced and cluster-level custom objects. I realize that in terms of the Python client there is little difference, but logically they are treated very differently by Kubernetes. If such a distinction is desired, I would name the method list_namespaced_custom_object_for_all_namespaces, so that it is consistent with the list_namespaced_custom_object function that queries a single namespace.

If the distinction is not desired, then I guess the existing method for cluster-level custom objects would do the trick, but yes, the naming is very confusing. I would then rename it to simply list_custom_object (as it makes no difference whether the object is namespaced or not). But this breaks backwards compatibility, so I am not sure it is a great solution 😄

@roycaihw (Member)

list_namespaced_custom_object_for_all_namespaces

I like that, and I'd love to express it in our OpenAPI spec. However, reading apigee-127/sway#32, it seems impossible to achieve in OpenAPI.

I agree there are two approaches: using one method for both cases vs. having two methods. As you pointed out, changing the name of the existing method is backwards-incompatible. I think we can have a patch that adds a second method, list_namespaced_custom_object_for_all_namespaces, which simply calls the existing list_cluster_custom_object.
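
A minimal sketch of that patch, written here as a standalone helper (the method name comes from this thread; in the real client it would be a generated method on CustomObjectsApi):

from kubernetes import client, config

def list_namespaced_custom_object_for_all_namespaces(api, group, version, plural, **kwargs):
    # Delegate to the existing cluster-scope method, which already calls
    # /apis/<group>/<version>/<plural> and therefore returns namespaced
    # objects from every namespace.
    return api.list_cluster_custom_object(group, version, plural, **kwargs)

config.load_kube_config()
objs = list_namespaced_custom_object_for_all_namespaces(
    client.CustomObjectsApi(), "example.com", "v1", "widgets")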

@unixunion

I have run into this issue and was wondering if there is any progress on it, as I need to read CustomResources cluster-wide.

@f4z3r (Author) commented Jun 21, 2022

Hi, I have not worked on this, as it requires a change in the OpenAPI spec, which I am unfamiliar with. I could, at best, have a look in a week or two, depending on how much other work I have.

You can, however, currently "misuse" the list_cluster_custom_object function to list namespaced custom objects across all namespaces. As far as I remember, it calls the correct API endpoint and performs no check on whether the CR is actually namespaced.
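
For example, a minimal sketch using the PostgresCluster CRD from above (the label selector is only an illustration and can be dropped):

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Despite its name, this calls /apis/<group>/<version>/<plural>, which for a
# namespaced CRD returns objects from every namespace.
objs = api.list_cluster_custom_object(
    group="postgres-operator.crunchydata.com",
    version="v1beta1",
    plural="postgresclusters",
    label_selector="env=prod",
)
for item in objs["items"]:
    print(item["metadata"]["namespace"], item["metadata"]["name"])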

@mukundjalan

I think what you are asking for is implemented in #1377

@vivekjainx86

Hello @f4z3r,
you can just pass an empty namespace to the function and it will list the custom resource across all namespaces, e.g.:

api.list_namespaced_custom_object(group="<group>", version="<version>", plural="<plural-name>", namespace="")
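
A runnable version of that call, keeping the placeholders (note this behavior is per this comment rather than documented API behavior):

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Per the comment above, an empty namespace reportedly makes the call list
# the custom resource across all namespaces.
objs = api.list_namespaced_custom_object(
    group="<group>", version="<version>", plural="<plural-name>", namespace="")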

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 16, 2023
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
