diff --git a/keps/sig-cli/3659-kubectl-apply-prune/README.md b/keps/sig-cli/3659-kubectl-apply-prune/README.md
new file mode 100644
index 00000000000..563a8e1045d
--- /dev/null
+++ b/keps/sig-cli/3659-kubectl-apply-prune/README.md
@@ -0,0 +1,896 @@
+
+# KEP-3659: kubectl apply --prune redesign and graduation strategy
+
+
+
+
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Summary](#summary)
+- [Motivation](#motivation)
+ - [Goals](#goals)
+ - [Non-Goals](#non-goals)
+- [Background](#background)
+ - [Use case](#use-case)
+ - [Feature history](#feature-history)
+ - [Current implementation](#current-implementation)
+ - [Problems with the current implementation](#problems-with-the-current-implementation)
+ - [Correctness: object leakage](#correctness-object-leakage)
+    - [UX: flag changes affect correctness](#ux-flag-changes-affect-correctness)
+    - [Scalability](#scalability)
+    - [UX: easy to trigger inadvertent over-selection](#ux-easy-to-trigger-inadvertent-over-selection)
+ - [UX: difficult to use with custom resources](#ux-difficult-to-use-with-custom-resources)
+ - [Sustainability: incompatibility with server-side apply](#sustainability-incompatibility-with-server-side-apply)
+- [Proposal](#proposal)
+ - [User Stories (Optional)](#user-stories-optional)
+ - [Story 1](#story-1)
+ - [Story 2](#story-2)
+ - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
+ - [Risks and Mitigations](#risks-and-mitigations)
+- [Design Details](#design-details)
+ - [Test Plan](#test-plan)
+ - [Prerequisite testing updates](#prerequisite-testing-updates)
+ - [Unit tests](#unit-tests)
+ - [Integration tests](#integration-tests)
+ - [e2e tests](#e2e-tests)
+ - [Graduation Criteria](#graduation-criteria)
+ - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
+ - [Version Skew Strategy](#version-skew-strategy)
+- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
+ - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
+ - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
+ - [Monitoring Requirements](#monitoring-requirements)
+ - [Dependencies](#dependencies)
+ - [Scalability](#scalability-1)
+ - [Troubleshooting](#troubleshooting)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+ - [ ] e2e Tests for all Beta API Operations (endpoints)
+ - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+ - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+ - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+
+
+When creating objects with `kubectl apply`, users frequently want to remove objects from the config and have them deleted from the cluster on the next apply. Since Kubernetes v1.5, an alpha-stage `--prune` flag has existed to support this workflow: it deletes previously applied objects that no longer exist in the source config. However, the current implementation has fundamental design flaws that limit its performance and lead to surprising behaviours. This KEP proposes a safer and more performant implementation for this feature, along with a plan that will enable it to progress out of alpha while continuing to satisfy the needs of the users who have come to depend on it over the past 20+ releases.
+
+
+## Motivation
+
+### Goals
+
+- MUST use a pruning set identification algorithm that remains accurate regardless of what has changed between the previous and current sets
+- MUST use a pruning set identification algorithm that scales to thousands of resources across hundreds of types
+- MUST natively support custom resources
+- MUST provide a way to accurately preview which objects will be deleted
+- MUST support namespaced and non-namespaced resources; SHOULD support them within the same operation
+
+
+### Non-Goals
+
+- MUST NOT formalize the grouping of objects under management (i.e. it is just a set of objects, not an "application" or other high-level construct) or require the user to do so to use the feature
+- MUST NOT require server-side API changes
+- MUST NOT require third-party CRDs to be installed
+- MAY still have limited performance when used to manage thousands of resources of hundreds of types in a single operation (MUST NOT be expected to overcome performance limitations of issuing many individual deletion requests, for example)
+
+## Background
+
+### Use case
+
+The pruning feature enables kubectl to automatically clean up previously applied objects that have been removed from the current configuration set.
+
+Adding the `--prune` flag to kubectl apply adds a deletion step after objects are applied, removing all objects that were previously applied AND are not currently being applied: `{objects to prune (delete)} = {previously applied objects} - {currently applied objects}`.
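+
+For example, a minimal workflow might look like the following sketch (the `manifests/` directory and the `app=demo` label are hypothetical, chosen purely for illustration):
+
+```shell
+# Initial apply: manifests/ contains Object A and Object B.
+kubectl apply -f manifests/ -l app=demo
+
+# Edit manifests/: delete Object A's manifest, add Object C's manifest.
+
+# Re-apply with pruning: B is updated, C is created, and A -- previously
+# applied but no longer present in manifests/ -- is deleted.
+kubectl apply --prune -f manifests/ -l app=demo
+```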
+
+In the illustration below, we initially apply a configuration set containing two objects: Object A and Object B. Then, we remove Object A from our configuration and add Object C. When we re-apply our configuration with pruning enabled, we expect Object A to be deleted (pruned), Object B to be updated, and Object C to be created. This basic use case works as expected today.
+
+![Initial apply: Objects A and B are applied](initial-apply.png)
+
+![Subsequent apply with pruning: Object A is pruned, Object B is updated, Object C is created](subsequent-apply.png)
+
+### Feature history
+
+The `--prune` flag (and the associated `--prune-whitelist` and `--all` flags) was added to `kubectl apply` back in [Kubernetes v1.5](https://github.com/kubernetes/kubernetes/commit/56a22f925f6f1fd774ad1ae9e04bcf8d75bbde63). Twenty releases later, this feature is still in alpha, as documented in `kubectl apply -h` (though interestingly not in the flag's own doc string, or during usage):
+
+
+Relevant portion of `kubectl apply -h`
+
+```shell
+$ kubectl version --client --short
+Client Version: v1.25.2
+
+$ kubectl apply -h
+Apply a configuration to a resource by file name or stdin. The resource name must be specified. This resource will be
+created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create
+--save-config'.
+
+ JSON and YAML formats are accepted.
+
+ Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current
+state is. See https://issues.k8s.io/34274.
+
+Examples:
+ # Note: --prune is still in Alpha
+ # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in
+the file and match label app=nginx
+ kubectl apply --prune -f manifest.yaml -l app=nginx
+
+ # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
+ kubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap
+
+Options:
+ --all=false:
+ Select all resources in the namespace of the specified resource types.
+ --prune=false:
+ Automatically delete resource objects, that do not appear in the configs and are created by either apply or
+ create --save-config. Should be used with either -l or --all.
+ --prune-whitelist=[]:
+ Overwrite the default whitelist with for --prune
+ -l, --selector='':
+ Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching
+ objects must satisfy all of the specified label constraints.
+```
+
+
+The reason for this stagnation is that the implementation has fundamental design flaws that limit performance and cause unexpected behaviours.
+
+Acknowledging that pruning could not be progressed out of alpha in its current form, SIG CLI created a proof of concept for an alternative implementation in the [cli-utils](https://github.com/kubernetes-sigs/cli-utils) repo in 2019 (initially [moved over](https://github.com/kubernetes-sigs/cli-utils/pull/1) from [cli-experimental#13](https://github.com/kubernetes-sigs/cli-experimental/pull/13)). This implementation was proposed in [KEP 810](https://github.com/kubernetes/enhancements/pull/810/files), which did not reach consensus and was ultimately closed. In the subsequent three years, work continued on the proof of concept, and other ecosystem tools (notably `kpt live apply`) have been using it successfully while the canonical implementation in k/k has continued to stagnate.
+
+### Current implementation
+
+The implementation of this feature is not as simple as the illustration above might suggest at first glance. The core reason is that the previously applied set is not explicitly recorded anywhere by the previous apply operation, so that set must be discovered dynamically.
+
+Several different factors combine to select the set of objects to be pruned (a rough sketch of the resulting query pattern follows the list):
+
+1. **GVK allowlist**: A user-provided (via `--prune-whitelist` until v1.26, `--prune-allowlist` in v1.26+) or defaulted list of GVK strings identifying which resources kubectl will consider for pruning. The default list is hardcoded. [[code](https://github.com/kubernetes/kubernetes/blob/e39a0af5ce0a836b30bd3cce237778fb4557f0cb/staging/src/k8s.io/kubectl/pkg/util/prune/prune.go#L28-L50)]
+1. **namespace** (for namespaced resources): `kubectl` keeps track of which namespaces it has "visited" during the apply operation and considers both them and the objects they contain for pruning. [[code](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/apply/prune.go#L78)]
+1. **the `kubectl.kubernetes.io/last-applied-configuration` annotation**: kubectl uses this as the signal that the object was created with `apply` as opposed to by another kubectl command or entity. Only objects created by apply are considered for pruning. [[code](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/apply/prune.go#L117-L120)]
+1. **labels**: pruning forces users to specify either `--all` or `-l/--selector`, and in the latter case, the query selecting resources for pruning is constrained by the provided labels (note that this flag also constrains the resources applied in the main operation). [[code](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/apply/prune.go#L99)]
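+
+Putting these factors together, the selection kubectl performs today is roughly equivalent to the sketch below. This is illustrative only: the `VISITED_NAMESPACES`, `ALLOWLISTED_NAMESPACED_GVRS`, and `SELECTOR` variables are hypothetical stand-ins for the values the linked code computes, not the actual implementation.
+
+```shell
+# For each allowlisted namespaced GVR, in each namespace visited by this apply,
+# list the objects matching the user-provided selector (or all objects, with --all)...
+for ns in "${VISITED_NAMESPACES[@]}"; do
+  for gvr in "${ALLOWLISTED_NAMESPACED_GVRS[@]}"; do
+    kubectl get "$gvr" -n "$ns" -l "$SELECTOR" -o name
+  done
+done
+# ...and likewise once for each allowlisted cluster-scoped GVR. Each object
+# returned is then pruned only if it carries the last-applied-configuration
+# annotation and is not part of the set currently being applied.
+```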
+
+For a more detailed walkthrough of the implementation along with examples, please see [kubectl apply/prune: implementation and limitations](https://docs.google.com/document/d/1y747qL3GYMDDYHR8SqJ-6iWjQEdCJwK_n0z_KHcwa3I/edit#) by @seans3.
+
+### Problems with the current implementation
+
+#### Correctness: object leakage
+
+If an object should be pruned but is not, it is leaked. This happens when the selected set of previously applied objects is incomplete. There are two main ways this can occur:
+ - **GVK allowlist mismatch**: the allowlist is hardcoded (either by kubectl or by the user), so it is not tied in any way to the set of kinds that actually needs to be managed for pruning to be effective. For example, the default allowlist will never prune PDBs, regardless of whether current or previous operations created them.
+ - **namespace mismatch**: the namespace list is constructed dynamically from the _current_ set of objects, which causes object leakage when the current operation touches fewer namespaces than the previous one did. For example, if the initial operation touched namespaces A and B, and the second touched only B, nothing in namespace A will be pruned (see the hypothetical reproduction below).
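+
+As a concrete (hypothetical) reproduction of the namespace mismatch, consider two applies whose manifest directories span different namespaces; the directory names and label are illustrative only:
+
+```shell
+# First apply: config-v1/ contains objects in namespaces team-a and team-b.
+kubectl apply --prune -f config-v1/ -l app=demo
+
+# Second apply: config-v2/ only contains objects in team-b.
+# Because team-a is never "visited" by this operation, nothing previously
+# applied there is considered for pruning -- those objects are silently leaked.
+kubectl apply --prune -f config-v2/ -l app=demo
+```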
+
+ TODO: link issues
+
+#### UX: flag changes affect correctness
+
+If the user changes the `--prune-allowlist` or `--selector` flags used with the apply command, this may radically change the scope of the pruning operation, causing it to over- or under-select resources. For example, if they add a new label to all their resources and adjust the `--selector` accordingly, this has the side effect of leaking ALL resources that should have been deleted during the operation (nothing will be pruned). Conversely, if `--prune-allowlist` is expanded to include additional types or `--selector` is made more general, objects that have been manually applied by other actors in the system may automatically be scoped in.
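+
+A hypothetical sequence of commands illustrating the selector change (the paths and labels are purely illustrative):
+
+```shell
+# Objects were originally applied with only the app=demo label.
+kubectl apply --prune -f manifests/ -l app=demo
+
+# Later, every manifest gains tier=frontend and the selector is tightened.
+# Objects applied earlier (without tier=frontend) no longer match the prune
+# query, so removing them from manifests/ never deletes them from the cluster.
+kubectl apply --prune -f manifests/ -l app=demo,tier=frontend
+```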
+
+TODO: link issues
+
+#### Scalability
+
+To discover the set of resources to be pruned, kubectl makes a LIST query for every GVR on the allowlist, in every applicable namespace, so the request count is `GVRs(namespaced) * namespaces + GVRs(cluster-scoped)`. For example, with the default list and one target namespace, this is 14 requests; with the default list and two namespaces, it jumps to 26. An obvious fix for some of the correctness issues described above would be to get the full list of GVRs from discovery and query ALL of them, ensuring all previously applied objects are discovered. Indeed, some tools do this and pass the resulting list to kubectl's allowlist flag. But this strategy is clearly not performant, and many of the additional queries are wasted, as the GVRs in question are extremely unlikely to have resources managed via kubectl.
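+
+To make the arithmetic explicit (the 12/2 split of the default allowlist into namespaced and cluster-scoped types is inferred from the request counts above, not from the code):
+
+```shell
+# requests = namespaced_GVRs * namespaces + cluster_scoped_GVRs
+echo $(( 12 * 1 + 2 ))   # one namespace  -> 14 LIST requests
+echo $(( 12 * 2 + 2 ))   # two namespaces -> 26 LIST requests
+```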
+
+A related issue is that the identifier of ownership for pruning is the last-applied annotation, which is not something that can be queried on. This means we cannot avoid retrieving irrelevant resources in the LIST requests we make.
+
+TODO: link issues
+
+#### UX: easy to trigger inadvertent over-selection
+
+The default allowlist, in addition to being incomplete, is unintuitive. Notably, it includes the cluster-scoped Namespace and PersistentVolume resources and will prune them even if the `--namespace` flag is used. Given that Namespace deletion cascades to everything the namespace contains, this is particularly catastrophic.
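+
+For example, the following hypothetical invocation (namespace, label, and path chosen for illustration) looks tightly scoped but can still delete cluster-scoped objects:
+
+```shell
+# Despite -n team-a, Namespace and PersistentVolume objects labelled
+# app=payments that are absent from ./payments/ are candidates for pruning,
+# because both types are on the default allowlist and are cluster-scoped.
+kubectl apply --prune -n team-a -l app=payments -f ./payments/
+```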
+
+Because every `apply` operation uses the same identity for the purposes of pruning (i.e. has the same last-applied annotation), it is easy to make a small change to the scoping of the command that inadvertently covers resources managed by other operations, with potentially disastrous effects.
+
+TODO: link issues
+
+#### UX: difficult to use with custom resources
+
+Because the default allowlist is hard-coded in the kubectl codebase, it inherently does not include any custom resources. Users who want to prune custom resources must specify their own allowlist and keep it up to date.
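+
+For example, to prune a hypothetical `Widget` custom resource in the `example.com` group, a user might run something like the sketch below. Note that supplying the flag replaces rather than extends the default list, so any built-in types the user still wants pruned must be re-specified alongside the custom ones (flag spelling shown is the v1.26+ `--prune-allowlist`; older clients use `--prune-whitelist`):
+
+```shell
+kubectl apply --prune -f manifests/ -l app=demo \
+  --prune-allowlist=core/v1/ConfigMap \
+  --prune-allowlist=core/v1/Service \
+  --prune-allowlist=example.com/v1/Widget
+```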
+
+TODO: link issues
+
+#### Sustainability: incompatibility with server-side apply
+
+Although it is not explicitly disabled in that mode, pruning does not work correctly with server-side apply today. If the objects being managed were created with server-side apply, or were migrated to server-side apply using a custom field manager, they will never be pruned. If they were created with client-side apply and migrated to server-side apply using the default field manager, they will be pruned as needed. The worst case is a managed set that includes objects in several of these states, leading to inconsistent behaviour.
+
+One solution to this would be to use the presence of the current field manager as the indicator of eligibility for pruning. However, field managers cannot be queried on any more than annotations can, so they are a poor fit for an identifier we want to select on. It can also be considered problematic that the default state for server-side applied objects includes at least two field managers, all of which would then be treated as object owners for the purposes of pruning, regardless of their intent to take on that role. In other words, we end up introducing the possibility of multiple owners without the possibility of conflict detection.
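+
+For reference, field managers can be inspected (though not queried server-side) with standard kubectl flags; for example, on a hypothetical Deployment named `demo`:
+
+```shell
+# Show who owns which fields. Client-side apply records the
+# kubectl.kubernetes.io/last-applied-configuration annotation, while
+# server-side apply records ownership only in managedFields.
+kubectl get deployment demo --show-managed-fields -o yaml
+```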
+
+## Proposal
+
+
+
+### User Stories (Optional)
+
+
+
+#### Story 1
+
+#### Story 2
+
+### Notes/Constraints/Caveats (Optional)
+
+
+
+### Risks and Mitigations
+
+
+
+## Design Details
+
+
+
+### Test Plan
+
+
+
+[ ] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
+
+##### Prerequisite testing updates
+
+
+
+##### Unit tests
+
+
+
+
+
+- ``: `` - ``
+
+##### Integration tests
+
+
+
+- :
+
+##### e2e tests
+
+
+
+- :
+
+### Graduation Criteria
+
+
+
+### Upgrade / Downgrade Strategy
+
+
+
+### Version Skew Strategy
+
+
+
+## Production Readiness Review Questionnaire
+
+
+
+### Feature Enablement and Rollback
+
+
+
+###### How can this feature be enabled / disabled in a live cluster?
+
+
+
+- [ ] Feature gate (also fill in values in `kep.yaml`)
+ - Feature gate name:
+ - Components depending on the feature gate:
+- [ ] Other
+ - Describe the mechanism:
+ - Will enabling / disabling the feature require downtime of the control
+ plane?
+ - Will enabling / disabling the feature require downtime or reprovisioning
+ of a node? (Do not assume `Dynamic Kubelet Config` feature is enabled).
+
+###### Does enabling the feature change any default behavior?
+
+
+
+###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
+
+
+
+###### What happens if we reenable the feature if it was previously rolled back?
+
+###### Are there any tests for feature enablement/disablement?
+
+
+
+### Rollout, Upgrade and Rollback Planning
+
+
+
+###### How can a rollout or rollback fail? Can it impact already running workloads?
+
+
+
+###### What specific metrics should inform a rollback?
+
+
+
+###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
+
+
+
+###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
+
+
+
+### Monitoring Requirements
+
+
+
+###### How can an operator determine if the feature is in use by workloads?
+
+
+
+###### How can someone using this feature know that it is working for their instance?
+
+
+
+- [ ] Events
+ - Event Reason:
+- [ ] API .status
+ - Condition name:
+ - Other field:
+- [ ] Other (treat as last resort)
+ - Details:
+
+###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
+
+
+
+###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
+
+
+
+- [ ] Metrics
+ - Metric name:
+ - [Optional] Aggregation method:
+ - Components exposing the metric:
+- [ ] Other (treat as last resort)
+ - Details:
+
+###### Are there any missing metrics that would be useful to have to improve observability of this feature?
+
+
+
+### Dependencies
+
+
+
+###### Does this feature depend on any specific services running in the cluster?
+
+
+
+### Scalability
+
+
+
+###### Will enabling / using this feature result in any new API calls?
+
+
+
+###### Will enabling / using this feature result in introducing new API types?
+
+
+
+###### Will enabling / using this feature result in any new calls to the cloud provider?
+
+
+
+###### Will enabling / using this feature result in increasing size or count of the existing API objects?
+
+
+
+###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
+
+
+
+###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
+
+
+
+### Troubleshooting
+
+
+
+###### How does this feature react if the API server and/or etcd is unavailable?
+
+###### What are other known failure modes?
+
+
+
+###### What steps should be taken if SLOs are not being met to determine the problem?
+
+## Implementation History
+
+
+
+## Drawbacks
+
+
+
+## Alternatives
+
+
+
+## Infrastructure Needed (Optional)
+
+
diff --git a/keps/sig-cli/3659-kubectl-apply-prune/initial-apply.png b/keps/sig-cli/3659-kubectl-apply-prune/initial-apply.png
new file mode 100644
index 00000000000..43469c5bf9c
Binary files /dev/null and b/keps/sig-cli/3659-kubectl-apply-prune/initial-apply.png differ
diff --git a/keps/sig-cli/3659-kubectl-apply-prune/kep.yaml b/keps/sig-cli/3659-kubectl-apply-prune/kep.yaml
new file mode 100644
index 00000000000..86b18f4613a
--- /dev/null
+++ b/keps/sig-cli/3659-kubectl-apply-prune/kep.yaml
@@ -0,0 +1,40 @@
+title: kubectl apply --prune redesign and graduation strategy
+kep-number: 3659
+authors:
+ - "@KnVerey"
+owning-sig: sig-cli
+participating-sigs: []
+status: provisional
+creation-date: 2022-11-15
+reviewers:
+  - "@soltysh"
+  - "@seans3"
+  - "@eddiezane"
+approvers:
+  - "@soltysh"
+
+see-also:
+ - "https://github.com/kubernetes/enhancements/issues/128"
+ - "https://github.com/kubernetes/enhancements/pull/810"
+
+# The target maturity stage in the current dev cycle for this KEP.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.27"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+ alpha: "v1.27"
+ beta: "TBD"
+ stable: "TBD"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates: []
+disable-supported: true
+
+# The following PRR answers are required at beta release
+metrics: []
diff --git a/keps/sig-cli/3659-kubectl-apply-prune/subsequent-apply.png b/keps/sig-cli/3659-kubectl-apply-prune/subsequent-apply.png
new file mode 100644
index 00000000000..713e7f0a754
Binary files /dev/null and b/keps/sig-cli/3659-kubectl-apply-prune/subsequent-apply.png differ