protect kubernetes community owned API groups in CRDs #1111

Merged
merged 5 commits on Jul 23, 2019

Conversation

deads2k
Contributor

@deads2k deads2k commented Jun 24, 2019

API groups are organized by namespace, similar to Java packages; authorization.k8s.io is one example. When users create
CRDs, they get to specify an API group, and their type will be injected into that group by the kube-apiserver.

The *.k8s.io and *.kubernetes.io groups are owned by the Kubernetes community and protected by API review (see "What APIs need to be reviewed")
to ensure consistency and quality. To avoid confusion in our API groups and to prevent accidentally claiming space
inside the Kubernetes API groups, the kube-apiserver needs to be updated to protect these reserved API groups.

This KEP proposes adding an api-approved.kubernetes.io annotation to CustomResourceDefinition. It is only required if
the CRD group is k8s.io or kubernetes.io, or ends with .k8s.io or .kubernetes.io. The value should be a link to the
pull request where the API has been approved.

```yaml
metadata:
  annotations:
    "api-approved.kubernetes.io": "https://github.com/kubernetes/kubernetes/pull/78458"
```

/assign @jpbetz @liggitt @sttts

@kubernetes/sig-api-machinery-api-reviews @kubernetes/sig-architecture-api-reviews

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 24, 2019
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 24, 2019
@fejta-bot

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@sttts
Contributor

sttts commented Jun 25, 2019

Sgtm. Some clarification of the PR link would be good.

@deads2k
Contributor Author

deads2k commented Jun 26, 2019

updated for comments.

@liggitt
Member

liggitt commented Jul 2, 2019

one addition requested in #1111 (comment)

overall approach looks good to me

Member

@timothysc timothysc left a comment


Not really a fan of yet another annotation

@timothysc
Member

TL;DR this should really be policy, then enforcement can be made from said policy.

@deads2k
Contributor Author

deads2k commented Jul 2, 2019

> TL;DR this should really be policy, then enforcement can be made from said policy.
>
> This seems unnecessary if the above policy is spelled out in full.
> Right now I think we need well defined policy in the community.

I think that is exactly what this KEP is and does.

We've had a well-defined policy for 11 months here: kubernetes/community#2433. It was written, discussed, and reviewed thoroughly. The fact that it's apparently not well known seems to be an issue of enforcement.

This KEP takes the policy and provides a simple enforcement mechanism that makes these standards unignore-able without adding significant friction along the way.

@timothysc
Member

timothysc commented Jul 2, 2019

> We've had a well-defined policy for 11 months here: kubernetes/community#2433. It was written, discussed, and reviewed thoroughly. The fact that it's apparently not well known seems to be an issue of enforcement.

It is well defined for core APIs, but it neglects the larger ecosystem, which is where I think policy should be written and then made enforceable.

We currently do not have any recommended guidelines for the community's non-core CRDs that are being published around the k8s core.

@liggitt
Member

liggitt commented Jul 2, 2019

> It is well defined for core APIs, but it neglects the larger ecosystem, which is where I think policy should be written and then made enforceable.

https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md#what-apis-need-to-be-reviewed is the written policy that covers *.k8s.io APIs, regardless of whether they are CRD-based or not.

This is the proposal for the mechanism to make that policy enforceable.

@timothysc
Member

timothysc commented Jul 2, 2019

> https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md#what-apis-need-to-be-reviewed is the written policy that covers *.k8s.io APIs, regardless of whether they are CRD-based or not.

Then how does this impact the rest of the community that may overlap with that namespace but has not changed yet? What are the recommendations with respect to naming?

If the current policy does not take that question into account, which it does not, I'd assert it requires refinement.

@BenTheElder
Member

This is a really interesting idea... I have some questions / concerns:

What do we do about projects that unknowingly violated this?

How can we make this more discoverable? I had no idea that this review guidelines doc existed, I suspect many others in the project don't either, especially those working on SIG projects.

Can we recommend namespacing guidelines for SIG projects that are unlikely to become core APIs but might want to use CRD storage and might not be the best use of API-review time?

E.g., if I add some CRDs to sigs.k8s.io/slack-infra for some kubernetes.slack.com configuration, or the "component config" style config for sigs.k8s.io/kind... I sort of doubt API reviewing these is the most productive route vs some alternate API group, but I don't know what the correct namespace would be.
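
As a point of reference for the pre-existing-CRD question above, the escape hatch that shows up later in this thread is an explicitly "unapproved" annotation value; the group and names in this sketch are hypothetical and not a recommendation:

```yaml
metadata:
  # hypothetical SIG-project CRD in a protected group
  name: widgets.slack-infra.k8s.io
  annotations:
    # explicitly marked unapproved instead of claiming an API review happened
    api-approved.kubernetes.io: "unapproved, experimental SIG project API"
```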

rainest pushed a commit to Kong/kong-operator that referenced this pull request Oct 13, 2021
* charts(kong): update to kong-2.4.0

generator command: 'kong-2.4.0'

generator command version: 2b9dc2a

* release(v0.9.0) update operator metadata and docs

Release 0.9.0 and change the kongs CRD API group from charts.helm.k8s.io
to charts.konghq.com.

The new group is not in one of the protected groups established by
kubernetes/enhancements#1111. This operator CRD
should not use a protected group as it is not a core part of the
Kubernetes project.

This change makes the CRD compatible with Kubernetes >=1.22. However, it
breaks compatibility with previous versions of the operator. As such,
0.9.0 has no replace version: it requires a fresh operator install and a
fresh set of Kong CRs.

* test: update microk8s to 1.22

* test: update kubectl to 1.22.2

* test: update Ingress API version

* feat: support Ingress v1

* test: remove Ingress waits

Remove the Ingress status waits and add retry configuration to curl when
validating the Ingress configuration.

KIC 2.0+ handles status updates for non-LoadBalancer Services
differently than earlier versions. Previously, KIC would set a status
with a 0-length list of ingresses if the proxy Service was not type
LoadBalancer, e.g.

status:
  loadBalancer:
    ingress:
    - {}

As of KIC 2.0, no status is set if the Service is not type LoadBalancer,
e.g.

status:
  loadBalancer: {}

This change to the operator tests confirms that Ingress configuration
was successfully applied to the proxy using requests through the proxy
only. These now run immediately after the upstream Deployment becomes
available, however, so they may run before the controller has ingested
Ingress configuration or observed Endpoint updates. To account for this,
the curl checks are now wrapped in wait_for to allow a reasonable amount
of time for the controller to update configuration.

* fix: update ingress example in README.md to v1

* feat: update OLM maintainer info

Co-authored-by: Shane Utt <[email protected]>
Co-authored-by: Michał Flendrich <[email protected]>
michaelmdresser added a commit to kubecost/cluster-turndown that referenced this pull request May 27, 2022
apiextensions.k8s.io/v1beta1 is removed as of K8s v1.22 [1], so all CRDs
have to be updated to apiextensions.k8s.io/v1. This commit does the
upgrade for the turndownschedule CRD.

As part of the API updates, K8s is enforcing that things grouped under
*.k8s.io be approved [2] because they are actually supposed to be
Kubernetes community-managed APIs [3]. So this commit also changes the
CRD from:

turndownschedules.kubecost.k8s.io
to
turndownschedules.kubecost.com

This is in line with K8s rules and links to our main domain.

Tested by applying cluster-turndown-full.yaml and example-schedule.yaml
successfully.

[1] https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22
[2] kubernetes/enhancements#1111
[3] kubernetes/enhancements#1111 (comment)
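
Purely for illustration, a minimal sketch of what the renamed CRD might look like; only the plural name and group come from this commit, while the kind, scope, version, and schema are assumed:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: turndownschedules.kubecost.com   # previously turndownschedules.kubecost.k8s.io
spec:
  group: kubecost.com                    # vendor-owned group, outside the protected *.k8s.io space
  names:
    kind: TurndownSchedule               # assumed kind name
    plural: turndownschedules
    singular: turndownschedule
  scope: Cluster                         # assumed scope
  versions:
    - name: v1alpha1                     # assumed version
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```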
@lowang-bh
Member

lowang-bh commented Mar 28, 2023

Hi, I have an old CRD with the group "scheduling.incubator.k8s.io". How can I pass the check and make it work? I added the annotation api-approved.kubernetes.io: "unapproved, experimental-only". Is that OK?

The following is part of my CRD definition:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: queues.scheduling.incubator.k8s.io
  annotations:
    api-approved.kubernetes.io: "unapproved, experimental-only"
spec:
  group: scheduling.incubator.k8s.io
  names:
    kind: Queue
    listKind: QueueList
    plural: queues
    shortNames:
    - q
    singular: queue
  scope: Cluster
```
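
For reference, with an "unapproved, ..." value the CRD is, as far as I understand, still accepted and served, but it is marked as not conforming to the approval policy. A hedged sketch of the resulting status condition; the condition type, reason, and message below are recalled from the apiextensions API rather than taken from this thread, so treat them as assumptions:

```yaml
status:
  conditions:
  - type: KubernetesAPIApprovalPolicyConformant   # assumed condition type
    status: "False"
    reason: UnapprovedAnnotation                  # assumed reason string
    message: the API approval annotation declares the API as unapproved  # illustrative wording only
```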

MikeSpreitzer added a commit to MikeSpreitzer/kcp-edge-mc that referenced this pull request Jun 20, 2023
Not exporting the cluster-scoped part yet, not sure about ability to
support it.

Updated example1 to exercise this by switching the common workload
from a Deployment object to a ReplicaSet object.

Also updated example1 to use `kubestellar init` because that now does
a lot more than just create one workspace and do one `kubectl apply`.

The CRDs and APIExports were produced by the following bashery.

The following function converts a `kubectl api-resources` listing into
a listing of arguments to the kcp crd-puller.

```bash
function rejigger() {
    if [[ $# -eq 4 ]]
    then gv="$2"
    else gv="$3"
    fi

    case "$gv" in
	(*/*) group=.$(echo "$gv" | cut -f1 -d/) ;;
	(*)   group=""
    esac

    echo "${1}$group"
}
```

With `kubectl` configured to manipulate a kcp workspace, the following
command captures the listing of resources built into that kcp
workspace.

```bash
kubectl api-resources | grep -v APIVERSION | while read line; do rejigger $line; done > /tmp/kcp-rgs.txt
```

With `kubectl` configured to manipulate a kind cluster, the following
commands capture the resource listing split into namespaced and
cluster-scoped.

```bash
kubectl api-resources | grep -v APIVERSION | grep -w true | while read line; do rejigger $line; done > /tmp/kind-ns-rgs.txt
kubectl api-resources | grep -v APIVERSION | grep -w false | while read line; do rejigger $line; done > /tmp/kind-cs-rgs.txt
```

With CWD=config/kube/exports/namespaced,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-ns-rgs.txt)
```

With CWD=config/kube/exports/cluster-scoped,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-cs-rgs.txt)
```

I manually deleted the four CRDs from https://github.com/kcp-dev/kcp/tree/v0.11.0/config/rootcompute/kube-1.24 .

Sadly, kubernetes/kubernetes#118698 is a thing.
So I manually hacked the CRD for jobs.

Sadly, the filenames produced by the crd-puller are not loved by
apigen.  The following function renames one file as needed.

```bash
function fixname() {
    rg=${1%%.yaml}
    case $rg in
	(*.*)
	    g=$(echo $rg | cut -d. -f2-)
	    r=$(echo $rg | cut -d. -f1);;
	(*)
	    g=core.k8s.io
	    r=$rg;;
    esac
    mv ${rg}.yaml ${g}_${r}.yaml
}
```

In each of those CRD directories,

```bash
for fn in *.yaml; do fixname $fn; done
```

Penultimately, with CWD=config/kube,

```bash
../../hack/tools/apigen --input-dir crds/namespaced --output-dir exports/namespaced
../../hack/tools/apigen --input-dir crds/cluster-scoped --output-dir exports/cluster-scoped
```

Finally, kubernetes/enhancements#1111 applies
to APIExport/APIBinding as well as to CRDs.  And the CRD puller does
not know anything about this (not that it would help?).  I manually
hacked the namespaced APIResource files that needed it to have an
`api-approved.kubernetes.io` annotation.  It turns out that the
checking in the apiserver only requires that the annotation's value
parse as a URL (any URL will do).

Signed-off-by: Mike Spreitzer <[email protected]>
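
For context, the manual hack described in that commit message amounts to adding an annotation like the following to each affected file; the URL here is a hypothetical placeholder, and per the commit message the apiserver check only requires that the value parse as a URL:

```yaml
metadata:
  annotations:
    # hypothetical placeholder URL; any value that parses as a URL satisfies the check
    api-approved.kubernetes.io: "https://example.com/placeholder-approval"
```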

jbhalodia-slack pushed a commit to jbhalodia-slack/spark-operator that referenced this pull request Oct 4, 2024
…ow#1298)

* Migrate Spark CRDs from v1beta1 to v1

Signed-off-by: Daniel AguadoAraujo <[email protected]>

* Add extra printer columns for CRDs. Bump chart version

Signed-off-by: Daniel AguadoAraujo <[email protected]>

* Update CRDs definition files on app manifest

Signed-off-by: Daniel AguadoAraujo <[email protected]>

* Add annotation on CRDs to ignore the new policy restricting the CRD groups k8s.io and kubernetes.io (kubernetes/enhancements#1111)

Signed-off-by: Daniel AguadoAraujo <[email protected]>
Rakshith-R added a commit to Rakshith-R/external-snapshot-metadata that referenced this pull request Nov 19, 2024
This commit adds the following annotation
`api-approved.kubernetes.io: \
"https://github.com/kubernetes-csi/external-snapshot-metadata/pull/2"`.

Refer to kubernetes/enhancements#1111
for more details.

Signed-off-by: Rakshith R <[email protected]>