Move pkg/proxy to k8s.io/kube-proxy #92369
/sig network
These SIGs are my best guesses for this issue. Please comment if they are wrong. 🤖 I am a bot run by vllry. 👩‍🔬

/triage unresolved
🤖 I am a bot run by vllry. 👩‍🔬
/assign @thockin
I'm also starting to work on the kube-proxy config changes so I can probably help out here if this is something we want to push forward |
I don't object to this in theory. I expect it will expose a number of deps
to other k/k packages which will need to either be broken or the targets
will have to be moved up-and-out into leaf repos. I also want to be
careful about what we claim is a "supported" API. Today, it's all internal
which gives us a lot of freedom. If we want to support APIs more widely,
we will need to think hard about the shape of the APIs.
As I think there will be more custom-made stand-alone proxiers in the future, living side-by-side with kube-proxy, people now copy code from kube-proxy. @sbezverk has valuable and fresh experience in this area from the work on https://github.com/sbezverk/nfproxy. A missing piece I see immediately is initialization/configuration. Here I would like to allow custom-made stand-alone proxiers to use the same config as the regular kube-proxy.

The reasons are that some params may also be needed by custom proxiers (or the proxy library code), and repeating them is error prone; it would also be an indication that custom proxiers are something accepted, like CRI or CNI plugins. I have a possible example/use-case involving the current SCTP load-balancing.
@mcluseau was also looking at some infrastructure to make it easier to write new proxiers.
@thockin thanks for the tag. The draft (https://github.com/mcluseau/kube-proxy2/blob/master/doc/proposal.md) has not changed much since January, and I still have the same numbers. I recently got some time to make progress. The minimum amount of code for a proxier is currently this:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/mcluseau/kube-proxy2/pkg/api/localnetv1"
	"github.com/mcluseau/kube-proxy2/pkg/client"
)

func main() {
	client.Run(printState)
}

func printState(items []*localnetv1.ServiceEndpoints) {
	fmt.Fprintln(os.Stdout, "#", time.Now())
	for _, item := range items {
		fmt.Fprintln(os.Stdout, item)
	}
}
```

Proxiers have a baseline of 13 MB (on-disk size). Examples here: https://github.com/mcluseau/kube-proxy2/tree/master/examples
@mcluseau what does the framework you proposed buy me as a developer of a proxy flavour? Does it make it faster? Not sure. Does it make it more reliable? Not sure either. Does it help get the nftables module into Google's Linux distribution? I really doubt it. So could you explain what it DOES buy me?
@sbezverk my approach is to decouple the Kubernetes business logic from the actual application to a system. Things like computing the topology requested by the user down to the targets for the specific node the proxy is running on are done on the framework side, while applying those computed targets is left to the proxier. Thus, developing a proxier means only writing the code for the subsystem you're targeting, like nftables, but it also enables applications like a local DNS resolver that would gain knowledge of which pods should be targeted from the current node. As the API is made for the local node only, it's also simpler, and I expect it to be more stable than the core API that needs to represent cluster-wide policies and resources. The consequence should be (much) less maintenance on proxiers. The API is available as gRPC and can be accessed via a local unix socket.

I'm not a Googler and I don't use Google's distribution, so if the kernel modules are missing, I can't do anything on that side.
After a brief discussion on Slack I learned that the problem was not dependencies on other packages, as @thockin warned about in #92369 (comment). The main problem was that since this is not an official API, it is unstable (naturally). I am thinking of writing a KEP for a "proxier" library, since I see a future with many specialized proxiers. @thockin since writing a KEP would take a lot of time, would it be favorably received by sig/network? Maintaining another API is not something that should be taken lightly, I guess.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I still foresee a future with many specialized kube-proxy implementations.
/assign @rikatz @andrewsykim
Seems the staging repo is already there.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
@weiqiangt I can see that you already solved the problem by forking the code into your own repo (antrea-io/antrea#772). Are you still interested in vendoring the code directly?
Thanks for mentioning, fwd @hongliangl @vicky-liu @tnqn.
There is also a follow-up question: in case you are interested in vendoring kubernetes/proxy, we are interested in knowing more about your use cases and requirements.
We still need the code now. If you could move pkg/proxy (not including pkg/proxy/iptables and pkg/proxy/ipvs) to k8s.io/kube-proxy, that would be very helpful; then we wouldn't need to manually sync the latest changes in kube-proxy. For now, we need to fork the latest kube-proxy code into our project as third-party code to implement the latest features in Kubernetes. BTW, is there anything we could help with? Thanks a lot. @aojea
@hongliangl this feedback is super useful. There is a KEP to move the code to staging and we are trying to understand everyone's needs. My next question is whether you need the whole implementation to be exported, including the iptables logic. We are trying to understand whether people need it as a library, and in that case which parts (control-plane, data-plane, ...), or whether they just need the whole functionality, and in that case why they embed the code instead of running it as a standalone binary in a DaemonSet.
We don't need the iptables or ipvs implementations. For the earlier version of our implementation, we only forked part of the code. To catch up with features like TopologyAwareHints and ProxyTerminatingEndpoints in our CNI, we even need to learn the logic in kubernetes/pkg/proxy/topology.go (line 40 in b7ad179). If topology.go and the corresponding files could be used as a library, we could use the latest features like TopologyAwareHints or ProxyTerminatingEndpoints by upgrading the library.
Features like TopologyAwareHints and ProxyTerminatingEndpoints are used to categorize endpoints; I think they are always needed by any CNI.
That is great feedback, thanks.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
This issue can be considered implemented by kpng, albeit as a separate out-of-tree library. It is also considered a goal for the nftables-based kube-proxy.

This is kind of the birth of kpng. I'll close this issue, as it has served its purpose, rather than let it silently rot away.
/close
@uablrek: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What would you like to be added:
Could you please move pkg/proxy to k8s.io/kube-proxy as an individual module?
Why is this needed:
Some CNIs would like to implement their own kube-proxy-like component to gain more performance and would therefore like to reuse kube-proxy code, e.g. antrea-io/antrea#772.