KEP-3786: Migrate Shared Kube-Proxy code into staging #3788

Closed · 21 commits
@@ -0,0 +1,301 @@
<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed.

To get started with this template:

- [ ] **Pick a hosting SIG.**
Make sure that the problem space is something the SIG is interested in taking
up. KEPs should not be checked in without a sponsoring SIG.
- [ ] **Create an issue in kubernetes/enhancements**
When filing an enhancement tracking issue, please make sure to complete all
fields in that template. One of the fields asks for a link to the KEP. You
can leave that blank until this KEP is filed, and then go back to the
enhancement and add the link.
- [ ] **Make a copy of this template directory.**
Copy this template into the owning SIG's directory and name it
`NNNN-short-descriptive-title`, where `NNNN` is the issue number (with no
leading-zero padding) assigned to your enhancement above.
- [ ] **Fill out as much of the kep.yaml file as you can.**
At minimum, you should fill in the "Title", "Authors", "Owning-sig",
"Status", and date-related fields.
- [ ] **Fill out this file as best you can.**
At minimum, you should fill in the "Summary" and "Motivation" sections.
These should be easy if you've preflighted the idea of the KEP with the
appropriate SIG(s).
- [ ] **Create a PR for this KEP.**
Assign it to people in the SIG who are sponsoring this process.
- [ ] **Merge early and iterate.**
Avoid getting hung up on specific details and instead aim to get the goals of
the KEP clarified and merged quickly. The best way to do this is to just
start with the high-level sections and fill out details incrementally in
subsequent PRs.

Just because a KEP is merged does not mean it is complete or approved. Any KEP
marked as `provisional` is a working document and subject to change. You can
denote sections that are under active debate as follows:

```
<<[UNRESOLVED optional short context or usernames ]>>
Stuff that is being argued.
<<[/UNRESOLVED]>>
```

When editing KEPS, aim for tightly-scoped, single-topic PRs to keep discussions
focused. If you disagree with what is already in a document, open a new PR
with suggested changes.

One KEP corresponds to one "feature" or "enhancement" for its whole lifecycle.
You do not need a new KEP to move from beta to GA, for example. If
new details emerge that belong in the KEP, edit the KEP. Once a feature has become
"implemented", major changes should get new KEPs.

The canonical place for the latest set of instructions (and the likely source
of this file) is [here](/keps/NNNN-kep-template/README.md).

**Note:** Any PRs to move a KEP to `implementable`, or significant changes once
it is marked `implementable`, must be approved by each of the KEP approvers.
If none of those approvers are still appropriate, then changes to that list
should be approved by the remaining approvers and/or the owning SIG (or
SIG Architecture for cross-cutting KEPs).
-->
# KEP-3786: Migrate Shared kube-proxy Code into Staging
# Index
<!--
A table of contents is helpful for quickly jumping to sections of a KEP and for
highlighting any additional information provided beyond the standard KEP
template.

Ensure the TOC is wrapped with
<code>&lt;!-- toc --&gt;&lt;!-- /toc --&gt;</code>
tags, and then generate with `hack/update-toc.sh`.
-->

<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories (Optional)](#user-stories-optional)
- [Story 1](#story-1)
- [Story 2](#story-2)
- [Story 3](#story-3)
- [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
- [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Unit tests](#unit-tests)
- [Integration tests](#integration-tests)
- [e2e tests](#e2e-tests)
- [Graduation Criteria](#graduation-criteria)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
- [Dependencies](#dependencies)
- [Scalability](#scalability)
- [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
<!-- /toc -->

## Release Signoff Checklist

<!--
**ACTION REQUIRED:** In order to merge code into a release, there must be an
issue in [kubernetes/enhancements] referencing this KEP and targeting a release
milestone **before the [Enhancement Freeze](https://git.k8s.io/sig-release/releases)
of the targeted release**.

For enhancements that make changes to code or processes/procedures in core
Kubernetes—i.e., [kubernetes/kubernetes], we require the following Release
Signoff checklist to be completed.

Check these off as they are completed for the Release Team to track. These
checklist items _must_ be updated for the enhancement to be released.
-->

Items marked with (R) are required *prior to targeting to a milestone / release*.

- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
- [ ] e2e Tests for all Beta API Operations (endpoints)
- [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
- [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

<!--
**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.
-->

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

After about a year and a half of testing a [new service proxy implementation](https://github.com/kubernetes-sigs/kpng/), and
collecting sig-network and community feedback, it is clear that a shared library (referred to as the kube-proxy staging library in this document) designed to make building new service proxies easier is needed. Specifically, it is desired by members of the Kubernetes networking community who are attempting to build specialized networking tools. However, in order to prevent the community from having to maintain
Member commented:

it is clear that a shared library (referred to as the kube-proxy staging library in this document) designed to make building new service proxies easier is needed.

There are multiple service proxies already out there (calico, cilium, ovn-kubernetes, ...). Can we expand on why this need is clear and what the intended audience is?

@danwinship (Contributor) commented on Jan 30, 2023:

So a lot of Antonio's comments basically boil down to "this doesn't make sense if you haven't been paying attention to kpng". (So the KEP needs to be keeping that in mind more.)

Answering this question directly: people writing alternative proxy implementations need to reimplement a lot of kube-proxy's logic, and there's really no good reason for this. Kube-proxy already has shared abstractions that are used by iptables, ipvs, and winkernel. We could split those out so that other implementations can use them too.

Member commented:

people writing alternative proxy implementations need to reimplement a lot of kube-proxy's logic, and there's really no good reason for this.

Indeed, this seems to be one of the more clear and obvious reasons for this redux. @astoycos it would definitely seem that in the summary here we should either add more specificity about the specific projects which have had to "work around" kube-proxy, or at least point out that we will have later sections in this document which will talk about those and create them, so that this is more clear to readers of the KEP.

Contributor Author commented:

So a lot of Antonio's comments basically boil down to "this doesn't make sense if you haven't been paying attention to kpng".

Right, I definitely wanted this KEP to take the learnings from KPNG and carry them forward rather than leaving them all behind; let me see if I can add more context to help.

it would definitely seem that in the summary here we should either add more specificity about the specific projects which have had to "work around" kube-proxy, or at least point out that we will have later sections in this document which will talk about those and create them, so that this is more clear to readers of the KEP.

Agreed, let me see what I can do here. I think it will be super informative for me to dig in and see whether any existing service proxies have custom solutions for some of this shared functionality.

Member commented:

Answering this question directly: people writing alternative proxy implementations need to reimplement a lot of kube-proxy's logic,

@danwinship this is the part I struggle with: what part? The backends? Because that is very specific to the technology; iptables, IPVS, and OVS are never going to be fully compatible, and we have a considerable number of cases where IPVS and iptables behave differently just because of how they work.

We could split those out so that other implementations can use them too.

If we are talking about the control-plane logic, that was the intention of the KEP that kicked off KPNG, and I'm 100% on board with that project. We should have a control plane that people can use; I always point to the Envoy control plane as an example of what we should do: https://github.com/envoyproxy/go-control-plane

Also, do you think that current cache logic should be shared? It doesn't make use of the informer cache, for example, and builds another cache on top with complex apply and merge operations... I think that can be much simpler: https://github.com/aojea/networking-controllers/blob/main/pkg/services/services_controller.go

Contributor commented:

what part?

pkg/proxy, bits of pkg/proxy/healthcheck and pkg/proxy/utils, ...

  1. change tracking / caching / EndpointSlice merging
  2. topology handling
  3. HealthCheckNodePort implementation
  4. --nodeport-address handling
  5. conntrack cleanup
  6. ...

Also, do you think that current cache logic should be shared?

We would start with the current code because that's what we have that works, but we should definitely look at improving it too. There are definitely problems with it.
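
To make the scope of that list concrete, here is a minimal sketch of the client-go plumbing that every out-of-tree proxy has to write today before it can even start programming a dataplane. This is purely illustrative: `myChangeTracker` and its methods are hypothetical stand-ins for the change-tracking/EndpointSlice-merging layer that `pkg/proxy` already implements in-tree, not existing kube-proxy types.

```go
package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// myChangeTracker is a hypothetical stand-in for the caching / change-tracking /
// EndpointSlice-merging logic that pkg/proxy already contains in-tree.
type myChangeTracker struct{}

func (t *myChangeTracker) onServiceChange(svc *v1.Service)                      {}
func (t *myChangeTracker) onEndpointSliceChange(eps *discoveryv1.EndpointSlice) {}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	tracker := &myChangeTracker{}

	// Every out-of-tree proxy re-implements this plumbing today: watch Services
	// and EndpointSlices and feed them into its own caching/merging layer.
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)

	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if svc, ok := obj.(*v1.Service); ok {
				tracker.onServiceChange(svc)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if svc, ok := newObj.(*v1.Service); ok {
				tracker.onServiceChange(svc)
			}
		},
		// DeleteFunc and tombstone handling elided for brevity.
	})

	factory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if eps, ok := obj.(*discoveryv1.EndpointSlice); ok {
				tracker.onEndpointSliceChange(eps)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if eps, ok := newObj.(*discoveryv1.EndpointSlice); ok {
				tracker.onEndpointSliceChange(eps)
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// A real proxy would now run its sync loop (topology filtering,
	// HealthCheckNodePort, conntrack cleanup, ...) -- exactly the shared
	// logic listed above.
	select {}
}
```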

Member commented:

Great, it seems we are in agreement on what the problems are.

I still think that building a new library is much better than carrying forward existing code that has a lot of assumptions about iptables details. I liked the thoughts @thockin presented in https://docs.google.com/presentation/d/1Y-tZ4fFC9L2NvtBeiIXD1MiJ0ieg_zg4Q6SlFmIax8w/edit?usp=drivesdk&resourcekey=0-SFhIGTpnJT5fo6ZSzQC57g

two separate sets of code (the existing kube-proxy code and the aforementioned new library) while also ensuring the stability of the existing proxies, it makes sense to work incrementally towards this goal in the kube-proxy
Member commented:

This is just the opposite: maintaining the current code is straightforward, there are no external consumers, only a few people are doing this right now, and it works because there is not much churn on the code.

However, maintaining a new library that does the same as the current code is much more difficult, and it will not be maintainable with the current model. We'll need many more people involved, and we'll need to set up new infra, deal with library users' bugs, versioning, backwards compatibility, ...

Contributor commented:

The idea (or my idea anyway) is something like apimachinery or component-helpers. It is not a fully-independent library that is expected to go off and live its own life and gain independent usage. It is just a part of k/k that we wanted to make easier for some people to vendor.

Contributor commented:

something like apimachinery or component-helpers

Or perhaps more relevantly, #3686

Member commented:

@aojea my take on your concerns is that you don't want to see this library becoming some common library in a large number of implementations, is that correct? If that is correct: it is my general understanding with KPNG that the main thing it wants to provide to third-party implementations is interfaces and common tools, such that software like Cilium can provide its own implementation without being so disparate from upstream. Do you feel it may be possible to strike a balance here that you'd be comfortable with if we very explicitly defined what aspects of the library are intended for outside usage, so that we can provide the essentials and keep it at that? Do you think that would help?

Contributor Author commented:

The idea (or my idea anyway) is something like apimachinery or component-helpers. It is not a fully-independent library that is expected to go off and live its own life and gain independent usage. It is just a part of k/k that we wanted to make easier for some people to vendor.

@aojea This is exactly what I was thinking. Even more so, for this KEP the first and most important goal would simply be moving some of the shared proxy code to staging (without any alterations), very similar to #3686 :)

Comment:

This is proposed in kubernetes/kubernetes#92369, which is kind of the birthplace of kpng. Check the first comment from @mcluseau a bit down 😃

Contributor commented:

So is this shared code going to be developed in lock-step with Kubernetes, i.e. do you need to make changes to the code and to Kubernetes in a single commit? That would be a strong argument that it needs to be a staging repo.

If nothing in Kubernetes uses the code, then a separate repo is clearly a better solution.

If Kubernetes uses the code, but could vendor new releases, then it's less clear. E2E testing can also be set up for the code when it is in a separate repo, it just might be a bit more work.

Contributor commented:

The initial proposal is to move it to staging. As long as k/k continues to build a kube-proxy binary, it is likely that things will stay in staging.

staging library.

This KEP describes the **first step** towards the ultimate end goal of providing
a shared library of core kube-proxy functionality which can be easily consumed by the community.
Member commented:

This can be done out of tree, and it will be easier to consume.

Member commented:

I tend to agree with @aojea here. My sense is there is desire for an in-tree implementation as there is some sense that being out-of-tree may lead to the decline of kpng and therefore loss of value. Is that accurate, @astoycos, or am I misinterpreting?

@astoycos (Contributor, Author) commented on Jan 30, 2023:

So KPNG makes it simpler to 1. write new backends and 2. deploy in some really smart/diverse scenarios; however, it locks you into a BUNCH of added complex machinery to do so, and as we've seen, it is difficult to maintain.

IMO what folks really want is just 1. (e.g. the ability to write new service proxies quickly), and the core bits for that (k8s client / client-side caching, etc.) are already there, being used by the core proxies internally.

My sense is there is desire for an in-tree implementation as there is some sense that being out-of-tree may lead to the decline of kpng and therefore loss of value

Partly :) I don't think KPNG will "decline", but it's definitely not production-ready, whereas the core in-tree logic IS battle-proven and production-ready as is. So why not open it up for external consumption while also opening the door for future improvements as well?

Ultimately, if the answer is "The in-tree proxy code is stable and we intend to freeze/maintain it AS-IS", I'm fine with that :), but if we want to pave a way for it to iterate into the future, we need to start somewhere.

Member commented:

Thank you for the details. I believe I can personally be persuaded here; I would just like to see more of the motivation detailed out in the KEP.

Specifically, it will work to move the existing [shared kube-proxy code](https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy) and associated network utilities into [staging](https://github.com/kubernetes/kube-proxy).

This initial work will open the door for future kube-proxy code improvements to be made in both
Member commented:

What will these improvements be, and in which areas?
In-tree kube-proxy has iptables and IPVS; IPVS has been in maintenance mode for a long time, with open issues that are not solved, and it is only usable these days thanks to Lars, who stepped up.
The "control-plane" logic of kube-proxy has been practically the same for years.
The only "big improvements" in kube-proxy were iptables optimizations, most of them delivered by DanWinship.

Contributor Author commented:

So this is where I would plan on leaning heavily on the learnings from KPNG and its community. I foresee some possible optimizations in some of the following areas at this time:

  • The client-side caching mechanisms (e.g. the serviceChangeTracker) were essentially converted to be B-Tree based in KPNG, which could lead to some performance benefits (following much more testing).

  • KPNG can "artificially" test large cluster workloads using static YAML files (dumped from real-world clusters); if we could incorporate this for the in-tree proxies, it would help us test larger-scale scenarios much more cheaply and easily.

  • Any optimizations that make it easier for the community to write new proxies with these shared bits (this could take many different paths). I know that's a bit of a "fake one", but it's definitely relevant for the future.

I'm sure there are some I'm missing. I guess the larger question really is: do we want to iterate in-tree at all?
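
As a rough illustration of the static-YAML idea above, the sketch below shows how such tooling *could* look: decode a dump of Services/EndpointSlices (one object per YAML document) and serve it from a fake clientset, so the proxy's change-tracking and sync paths can be exercised at scale without a real cluster. This is not existing code; the helper names and file format are assumptions.

```go
package proxytest

import (
	"bufio"
	"bytes"
	"io"
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilyaml "k8s.io/apimachinery/pkg/util/yaml"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/kubernetes/scheme"
)

// LoadDump decodes every document in a multi-document YAML dump
// (one Service or EndpointSlice per document) into typed runtime.Objects.
func LoadDump(path string) ([]runtime.Object, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	reader := utilyaml.NewYAMLReader(bufio.NewReader(bytes.NewReader(data)))
	var objs []runtime.Object
	for {
		doc, err := reader.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if len(bytes.TrimSpace(doc)) == 0 {
			continue
		}
		// Convert the YAML document to JSON, then decode it into a typed object.
		jsonDoc, err := utilyaml.ToJSON(doc)
		if err != nil {
			return nil, err
		}
		obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(jsonDoc, nil, nil)
		if err != nil {
			return nil, err
		}
		objs = append(objs, obj)
	}
	return objs, nil
}

// NewFakeClusterClient returns a clientset pre-loaded with the dumped objects;
// informers built on top of it behave as if they were watching a cluster of that size.
func NewFakeClusterClient(objs []runtime.Object) kubernetes.Interface {
	return fake.NewSimpleClientset(objs...)
}
```

A scale test could then build informers on top of the fake clientset and measure how long the proxy's caching and sync logic takes against a dump from a several-thousand-Service cluster.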

a safe and incremental way.

## Motivation

There have been several presentations, issues, and projects dedicated to reusing kube-proxy logic while extending it to embrace
different backend technologies (i.e. nftables, eBPF, Open vSwitch, and so on). This KEP attempts to work towards a library that will facilitate
this type of work, ultimately making it much easier to write and maintain new proxy implementations.

A previous attempt at a broad solution to this problem was explored in the [KPNG project](https://github.com/kubernetes-sigs/kpng/), which exhibits many properties that allow for such goals to be accomplished. However, because it introduced many new features and would result in large breaking changes if it were
to be incorporated back in tree, it became clear we needed to decompose this large task into smaller pieces. Therefore, we've decided to keep things simple and start by moving the existing shared kube-proxy code into staging, where it can be iterated on and augmented in a safe, consumable, and incremental manner.
Comment on lines +164 to +165
Member commented:

Can we explain why this doesn't work using kpng as a library, and why it will work moving this in-tree, where there are much more strict policies?

Contributor commented:

We can tell other people to use kpng as a library, but we can't make the in-tree proxies use it. (Because of the "much more strict policies". None of the kpng code has been reviewed (by sig-network-reviewers), and it has not seen nearly as much real-world usage as the existing kube-proxy code, so even if we could manage to review all of it, we couldn't be confident that it was sufficiently bug-free at scale.)

So if we want to help third-party proxy implementations, the options are: (1) continue to divide SIG Network resources between writing two separate proxy codebases, and tell third parties to use the one that we are not willing to use ourselves, or (2) try to make it possible for third parties to use our proxy base.

Member commented:

Agreed, more explanation of the motivation here would be helpful please 🙇

Member commented:

So if we want to help third-party proxy implementations

this is the claim I can't see real evidence for: are we saying that ovn-kubernetes, cilium, calico, and antrea are going to use this library?

can we get some idea of "who" these third-party implementations are?

Contributor Author commented:

I think @danwinship hit the nail on the head here; let me try to get some of his feedback in here.

Comment:

this is the claim I can't see real evidence for: are we saying that ovn-kubernetes, cilium, calico, and antrea are going to use this library?

The initial request came from Antrea, and Calico is importing from k8s.io/kubernetes/pkg/proxy, so yes, I think so.

But not if the library is out-of-tree!
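
For context on why the out-of-tree import is painful today: k8s.io/kubernetes requires all of its staging modules at v0.0.0, so any consumer of k8s.io/kubernetes/pkg/proxy must pin every staging module by hand. The go.mod below is a hedged sketch; the module name and versions are illustrative, and the commented "after" state is the hoped-for outcome, not a decision of this KEP.

```
// go.mod for a hypothetical out-of-tree proxy that imports
// k8s.io/kubernetes/pkg/proxy today. Every staging module must be
// replaced manually (only a few of the ~30 are shown).
module example.com/my-service-proxy

go 1.20

require k8s.io/kubernetes v1.26.1

replace (
	k8s.io/api => k8s.io/api v0.26.1
	k8s.io/apimachinery => k8s.io/apimachinery v0.26.1
	k8s.io/apiserver => k8s.io/apiserver v0.26.1
	k8s.io/client-go => k8s.io/client-go v0.26.1
	// ... and so on for every other staging module ...
)

// If the shared code lived in the kube-proxy staging module instead, the hope
// is that the whole replace block disappears and the consumer just writes:
//
//	require k8s.io/kube-proxy v0.26.1
```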

Member commented:

oh, thanks, that is something important. Can we add this to the KEP, please?


### Goals

- Move the [shared kube-proxy code k/k/pkg/proxy](https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy) and relevant
networking utilities (e.g. `pkg/util/iptables`) into the kube-proxy [staging repository](https://github.com/kubernetes/kube-proxy).
- Ensure all existing kube-proxy unit and e2e coverage remains the same or is improved.
Member commented:

very important goal to emphasize 👍

- Ensure the shared code can be easily vendored by external users to help write new out-of-tree service proxies.
- Write documentation detailing how external consumers can utilize the kube-proxy staging library.
Member commented:

👍


### Non-Goals

- Write any more new "in-tree" service proxies.
- Make any incompatible architectural or deployment changes to the existing kube-proxy implementations.
- Tackle any complex new deployment scenarios (this is solved by [KPNG](https://github.com/kubernetes-sigs/kpng/)).

## Proposal

We propose to build a new library in the [kube-proxy staging repository](https://github.com/kubernetes/kube-proxy). This repository will be vendored by the core implementations and developers who want to build new service proxy implementations, providing them with:

- A vendorable Go library that defines a few interfaces which a new service proxy can implement in order to respond to EndpointSlice and Service changes (see the sketch after this list).
- Documentation on how to build a kube proxy with the library, based on [So You Want To Write A Service Proxy...](https://github.com/kubernetes-sigs/kpng/blob/master/doc/service-proxy.md) and other similar resources.
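
To make the first bullet concrete, below is a rough sketch of the *kind* of surface such a library could expose, loosely modeled on the handler pattern the in-tree proxy already uses internally (e.g. in `pkg/proxy/config`). The names and exact shape here are illustrative only and are not decided by this KEP.

```go
// Hypothetical staging package; names are illustrative only.
package proxylib

import (
	v1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
)

// ServiceHandler receives Service change notifications from the library.
type ServiceHandler interface {
	OnServiceAdd(service *v1.Service)
	OnServiceUpdate(oldService, service *v1.Service)
	OnServiceDelete(service *v1.Service)
	OnServiceSynced()
}

// EndpointSliceHandler receives EndpointSlice change notifications from the library.
type EndpointSliceHandler interface {
	OnEndpointSliceAdd(slice *discoveryv1.EndpointSlice)
	OnEndpointSliceUpdate(oldSlice, newSlice *discoveryv1.EndpointSlice)
	OnEndpointSliceDelete(slice *discoveryv1.EndpointSlice)
	OnEndpointSlicesSynced()
}

// Provider is what a new service proxy backend would implement; the library
// would own the watch/cache/merge machinery and call back into it.
type Provider interface {
	ServiceHandler
	EndpointSliceHandler
	// Sync programs the dataplane from the library's current state.
	Sync()
}
```

A new backend (nftables, eBPF, Open vSwitch, ...) would implement something like `Provider` and never talk to the API server or maintain its own caches directly.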

Not only will this make writing new backends easier, but through incremental changes and optimizations to the new library we hope to also improve the existing proxies, making [legacy bugs](https://github.com/kubernetes/kubernetes/issues/112604) easier to fix in the future.

### User Stories (Optional)

#### Story 1

As a networking technology startup, I want to easily make a new service proxy implementation without maintaining the logic of talking to the APIServer, caching its data, or calculating an abbreviated/proxy-focused representation of the Kubernetes networking state space. I'd like a wholesale framework I can simply plug my custom dataplane-oriented logic into.

#### Story 2

As a service proxy maintainer, I don't want to have to understand the in-tree internals of a networking backend in order to simulate or write core updates to the logic of the kube-proxy locally.

#### Story 3

As a Kubernetes developer, I want to make maintaining the shared proxy code easier and allow updates to that code to be completed in a more incremental and well-tested way.
Member commented:

Maintaining code and versioning in kubernetes/kubernetes is not easier; quite the contrary.

There is also a misbelief that the testing comes for free, and that is not true: each SIG is responsible for maintaining, adding, monitoring, and fixing its own tests, and this requires constant and periodic work; it cannot be done sporadically.

All the sig-network testing has been maintained only by myself during the last years. I tried to onboard more people, doing presentations and mentoring people in 1:1s, but nobody sticks, and people come and go, leaving half-baked tests, features, or dashboards.
I'd like to see a plan and evidence that there will be more people involved in this work, to avoid leaving the project in an undesired state.

Contributor Author commented:

All the sig-network testing has been maintained only by myself during the last years. I tried to onboard more people, doing presentations and mentoring people in 1:1s, but nobody sticks, and people come and go, leaving half-baked tests, features, or dashboards.

+1000 First of all, thank YOU for all of this work :) I think everyone who is stable in the community is feeling some bit of burnout/trouble with this (I know I am in the network-policy group).

So initially nothing would change with this move except for the location of the code. Unless I'm missing something... we wouldn't be providing any new versioning guarantees, and we'll run all of the existing tests against the same code (maybe this will require some new hacks?).

However, now at least these bits can be used by external consumers, which IMO would hopefully lead to more involvement.

I'd like to see a plan and evidence that there will be more people involved in this work, to avoid leaving the project in an undesired state.

I totally agree; maybe we can work on a plan here? At the root of it, IMO, if we expose functionality previously only exposed to core implementations, it would mean more users, which hopefully would also mean more folks involved.

@aojea (Member) commented on Jan 30, 2023:

it would mean more users, which hopefully would also mean more folks involved.

heh, I have never seen that happen, but I'd love to stand corrected here.

we expose functionality previously only exposed to core implementations

That is my main fear: we are going to get more feature demands for niche use cases, or requests to fix a bug for implementation X that is not compatible with implementation Y.


### Notes/Constraints/Caveats (Optional)

TBD

### Risks and Mitigations

Since this KEP involves the movement of core code, there are some obvious risks; however, they will be mitigated by ensuring that
all existing unit and e2e test coverage is kept stable and/or improved throughout the process.

## Design Details

**NOTE: This section is still under active development; please comment with any further ideas.**

The implementation of this KEP will begin by moving the various networking utilities (i.e. `pkg/util/iptables`, `pkg/util/conntrack`,
`pkg/util/ipset`, `pkg/util/ipvs`) used by `pkg/proxy` to the staging repo, using @danwinship's [previous attempt](https://github.com/kubernetes/kubernetes/pull/112886) as a guide. Additionally, we will need to [re-look](https://github.com/kubernetes/utils/pull/165) into moving `pkg/util/async` out of `k/k` and into [`k8s/utils`](https://github.com/kubernetes/utils).
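
As one concrete example of what moving these utilities buys consumers: the in-tree proxies rate-limit their dataplane syncs with `BoundedFrequencyRunner` from `pkg/util/async` (the subject of the linked k8s.io/utils PR). The sketch below assumes the API as it exists in-tree today and uses the current in-tree import path; the post-move path is intentionally not specified here.

```go
// Sketch: how a service proxy can use pkg/util/async.BoundedFrequencyRunner to
// rate-limit dataplane syncs. After the move, the import would come from a
// staging module or k8s.io/utils (exact path TBD).
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/kubernetes/pkg/util/async"
)

func syncDataplane() {
	// A real proxy would reprogram iptables/nftables/eBPF/etc. here.
}

func main() {
	// Sync at most once per second (with a burst of 2) and at least once an hour.
	runner := async.NewBoundedFrequencyRunner("my-proxy-sync", syncDataplane, time.Second, time.Hour, 2)
	go runner.Loop(wait.NeverStop)

	// Service/EndpointSlice event handlers just request a (rate-limited) resync.
	runner.Run()

	select {}
}
```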

Following this initial work, the [shared kube-proxy code](https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy) as it stands today will be moved into the kube-proxy staging repo. Throughout this process it's crucial that all unit and e2e test coverage
is either maintained or improved to ensure stability of the existing in-tree proxies.

Finally, documentation will be written to help users consume the now-vendorable kube-proxy code.

Additional steps (possibly described in further detail in a follow-up KEP) will include:

- Building up more tooling around testing and use of the library for external consumers.
- Analysis of possible improvements and updates to the shared library code, using the POC done in [KPNG](https://github.com/kubernetes-sigs/kpng) as a reference, to make writing new out of tree service proxy implementations easier.

### Test Plan

##### Unit tests

All existing kube-proxy and associated library unit test coverage **MUST** be maintained or improved.

##### Integration tests

All existing kube-proxy and associated library integration test coverage **MUST** be maintained or improved.

##### e2e tests

All existing kube-proxy and associated library e2e test coverage **MUST** be maintained or improved.


### Graduation Criteria

N/A

## Production Readiness Review Questionnaire

### Dependencies

N/A

### Scalability

N/A; the core functionality will remain the same.

###### Will enabling / using this feature result in any new API calls?

No

###### Will enabling / using this feature result in introducing new API types?

No, it won't result in new K8s APIs.

###### Will enabling / using this feature result in any new calls to the cloud provider?

No

###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?

No

### Troubleshooting

###### How does this feature react if the API server and/or etcd is unavailable?

The APIServer going down will prevent this library from working as expected in normal cases, since all incoming
Kubernetes networking data is retrieved from the APIServer.

###### What are other known failure modes?

###### What steps should be taken if SLOs are not being met to determine the problem?

## Implementation History

- [Add kep to move kubeproxy out of tree (to staging)](https://github.com/kubernetes/enhancements/pull/2635)
- ["Move kube-proxy networking utilities to staging"](https://github.com/kubernetes/kubernetes/pull/112886)
- [Import BoundedFrequencyRunner from k8s.io/kubernetes](https://github.com/kubernetes/utils/pull/165)
- ["Librarification" PR into KPNG](https://github.com/kubernetes-sigs/kpng/pull/389).

## Drawbacks

## Alternatives

We could retain the existing kube-proxy shared code in core Kubernetes and simply better document the data structures and Go maps used for Kubernetes client operations and client-side caching. However, that would still require external proxy implementations to copy and paste large amounts of code. The other option is to not tackle this problem in-tree and to move forward with the singular development of external projects like KPNG as the overall framework for solving these problems. The drawbacks to this include the large additional maintenance burden, and the fact that KPNG is opinionated towards a raw gRPC implementation while other users (e.g. xDS) may want something more decoupled. This realization has inspired this KEP.
Member commented:

The maintenance of a shared library is much more complex in-tree than out-of-tree; see client-go, for example.

@astoycos (Contributor, Author) commented on Jan 30, 2023:

I completely agree; maybe I used the term "library" a little bit prematurely here :)

This KEP was supposed to just focus on the movement of code... not really the development of a library (yet). However, I wanted it to be clear that moving the relevant shared bits to staging would be the first step in exposing them.

Unlike client-go, we would make no true guarantees about this code or provide versioning semantics; it would essentially just be much easier for folks to vendor.


## Infrastructure Needed (Optional)

None
@@ -0,0 +1,59 @@
title: Migrate Shared kube-proxy Code into Staging
kep-number: 3786
authors:
- "@mcluseau"
- "@rajaskakodar"
- "@astoycos"
- "@jayunit100"
- "@nehalohia"
owning-sig: sig-network
participating-sigs:
- sig-network
status: provisional
creation-date: 2020-10-10
reviewers:
- "@thockin"
- "@danwinship"
approvers:
- "@thockin"
- "@danwinship"

##### WARNING !!! ######
# prr-approvers has been moved to its own location
# You should create your own in keps/prod-readiness
# Please make a copy of keps/prod-readiness/template/nnnn.yaml
# to keps/prod-readiness/sig-xxxxx/00000.yaml (replace with kep number)
#prr-approvers:

see-also:
- https://github.com/kubernetes/enhancements/pull/2094
- https://github.com/kubernetes/enhancements/pull/3649

# The target maturity stage in the current dev cycle for this KEP.
stage: stable

# The most recent milestone for which work toward delivery of this KEP has been
# done. This can be the current (upcoming) milestone, if it is being actively
# worked on.
latest-milestone: "v1.28"

# The milestone at which this feature was, or is targeted to be, at each stage.
# (astoycos) This is an internal code change; it does not need to follow the release milestones
# milestone:
# alpha: "v1.19"
# beta: "v1.20"
# stable: "v1.22"

# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
# (astoycos) This is an internal code change; it does not need to use feature-gates
# feature-gates:
# - name: MyFeature
# components:
# - kube-apiserver
# - kube-controller-manager
# disable-supported: true

# The following PRR answers are required at beta release
# metrics:
# - my_feature_metric