
KEP-4006: Initial iteration of provisional WebSockets KEP #4016

Merged
merged 1 commit into kubernetes:master from websockets-kep
Jun 9, 2023

Conversation

seans3
Contributor

@seans3 seans3 commented May 17, 2023

  • Initial alpha KEP for transitioning from SPDY to WebSockets
  • Issue link: Transition from SPDY to WebSockets #4006

  • Implements all alpha sections including

    • Summary
    • Motivation
    • Proposal
    • Design Details
    • Test Plan
    • Production Readiness Review Questionnaire
  • Adds initial reviewers and approvers

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels May 17, 2023
@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label May 17, 2023
@seans3
Contributor Author

seans3 commented May 17, 2023

/cc @jpbetz
/cc @deads2k
/assign @aojea
/assign @ardaguclu

3. As releases increment, the probability of a WebSocket-enabled `kubectl` communicating
with an older non-WebSocket-enabled API Server decreases.

### Proposal: `v5.channel.k8s.io` subprotocol version
Member

This is required to solve the existing gap in the implementation with half-close stdin, isn't it?
kubernetes/kubernetes#89899

Contributor Author

Yes, I believe the half-close stdin for WebSockets warrants a protocol increment. But I believe there may be a use of the v5 subprotocol version that is even more important, and I need your feedback on it.

There are several other WebSocket clients currently communicating with the API Server: 1) the OpenShift browser/javascript client, and 2) the Kubelet using WebSockets for watches, etc. So an important question becomes: how do we distinguish between these legacy WebSocket use cases and the new kubectl RemoteCommand use cases, which will need the new StreamTranslatorProxy? In our current proof-of-concept PR, we substituted the new StreamTranslatorProxy for the UpgradeAwareProxy when handling the exec subresource, and I believe this is causing a lot of the e2e test failures. I believe we should integrate these two proxies as follows:

<Within tryUpgrade of UpgradeAwareProxy, examining HTTP Request>
if <Upgrade == WebSockets && version == v5.channel.k8s.io && WEBSOCKETS_FEATURE_FLAG >
  delegate request to StreamTranslatorProxy
else
  <continue with legacy code in UpgradeAwareProxy>
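For concreteness, a hedged Go-style sketch of that delegation (only an illustration of the check above, not the actual UpgradeAwareProxy code; the feature-gate argument and handler wiring are assumptions):

package proxy

import (
	"net/http"
	"strings"
)

// maybeDelegateToStreamTranslator wraps the legacy upgrade path and only
// diverts requests that are (a) WebSocket upgrades, (b) asking for the new
// v5.channel.k8s.io subprotocol, and (c) allowed by the feature flag.
func maybeDelegateToStreamTranslator(legacy, translator http.Handler, websocketsEnabled bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		isWebSocket := strings.EqualFold(req.Header.Get("Upgrade"), "websocket")
		wantsV5 := strings.Contains(req.Header.Get("Sec-WebSocket-Protocol"), "v5.channel.k8s.io")
		if isWebSocket && wantsV5 && websocketsEnabled {
			translator.ServeHTTP(w, req) // new StreamTranslatorProxy path
			return
		}
		legacy.ServeHTTP(w, req) // existing UpgradeAwareProxy behavior
	})
}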

Importantly, this allows us to only target the kubectl RemoteCommand for the StreamTranslatorProxy, and the other current WebSocket use cases (as well as other SPDY use cases) are unaffected.

What do you think?

Member

Since we are changing the bi-directional protocol, I agree that a new subprotocol (v5.channel.k8s.io) would be better for the sake of maintainability. It also helps manage the fallback mechanism, as discussed above.

# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
- name: Websockets
Member

We should have a more meaningful name, but I'm bad at naming things. KubectlWebsockets or ClientWebsockets?

Contributor
@jpbetz jpbetz May 17, 2023

+1. If there is going to be a separate KEP for kubectl port-forward then maybe something like ClientSubcommandWebsockets?

EDIT: Or if we do include port-forward (I've opened a separate comment thread on that), maybe WebsocketClientChannels or something similar?

Contributor Author

Added ClientRemoteCommandWebsockets.
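As a rough illustration (not the actual registration in kubernetes/kubernetes), an alpha gate with that name could be declared with component-base's featuregate package roughly like this; the package location and defaults are assumptions:

package features

import (
	"k8s.io/component-base/featuregate"
)

// ClientRemoteCommandWebsockets gates the kubectl/API Server WebSockets
// RemoteCommand path; name taken from the comment above, off by default in alpha.
const ClientRemoteCommandWebsockets featuregate.Feature = "ClientRemoteCommandWebsockets"

var defaultFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	ClientRemoteCommandWebsockets: {Default: false, PreRelease: featuregate.Alpha},
}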


# The following PRR answers are required at beta release
metrics:
- my_feature_metric
Member

An interesting metric would be to expose the number of failed connections and fallbacks, i.e., how many times the upgrade to WebSockets had to fall back.

Contributor Author
@seans3 seans3 Jun 5, 2023

This is not currently possible in the apiserver, since each request is separate. There is currently no way to know that the second, fallback SPDY request is connected to a first WebSockets request. We can have metrics like num_websocket_requests and num_websocket_failures.
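A minimal sketch of those two counters, using plain client_golang for illustration only (the apiserver would actually register metrics through its own metrics machinery, and these names are not final):

package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
)

var (
	// Count of streaming requests that attempted the WebSockets (v5) upgrade.
	websocketRequests = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "num_websocket_requests",
		Help: "Number of streaming requests attempting a WebSockets upgrade.",
	})
	// Count of those requests whose WebSockets upgrade failed.
	websocketFailures = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "num_websocket_failures",
		Help: "Number of streaming requests whose WebSockets upgrade failed.",
	})
)

func init() {
	prometheus.MustRegister(websocketRequests, websocketFailures)
}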

<<[/UNRESOLVED]>>
```

<!-- DISCUSS why we're not using http/2.0. -->
Member

This is because HTTP/2 does not expose any streaming capabilities, so we would have to implement all the streaming protocols again and maintain our own low-level HTTP/2 library.

Contributor Author

Yes. I just put a placeholder to remind myself to flesh this out in a future PR.


- `<package>`: `<date>` - `<test coverage>`

##### Integration tests
Member

These are going to be important, since they will allow us to exercise the fallback mechanism.

Contributor Author

Agreed. I intend to submit separate PRs to outline the testing plan as well as the Production Readiness questions within the KEP. Submitting this initial iteration of the provisional KEP will allow others to more easily contribute to unimplemented sections.

# List the feature gate name and the components for which it must be enabled
feature-gates:
- name: Websockets
components:
Contributor Author

@aojea @ardaguclu @jpbetz Hoping to get feedback on the components requiring a feature flag.

  1. Do we need feature flags for both the API Server and kubectl?
  2. Can we get away with just a feature flag on kubectl? The code paths in the API Server (StreamTranslatorProxy) should not be reachable unless the new kubectl requests v5 of the RemoteCommand subprotocol.
  3. If we have feature flags for both components, can they have the same name?

Member

I think the v5.channel.k8s.io subprotocol is already a kind of feature flag in the API server. Everything will work as is, as long as clients send requests using the current subprotocols (v4..v1), and the v5.channel.k8s.io subprotocol will only be used in kubectl when its feature flag is enabled.
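In other words, something like the following hedged sketch on the kubectl side (the flag parameter and exact version list are assumptions): old servers never see v5 unless the client-side flag is on.

// requestedSubprotocols returns the subprotocol list kubectl offers, most
// preferred first; v5 is only offered when the client-side flag is enabled.
func requestedSubprotocols(websocketsEnabled bool) []string {
	protocols := []string{"v4.channel.k8s.io", "v3.channel.k8s.io", "v2.channel.k8s.io", "channel.k8s.io"}
	if websocketsEnabled {
		protocols = append([]string{"v5.channel.k8s.io"}, protocols...)
	}
	return protocols
}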

Member

summoning @liggitt , he is a natural for these things ;)

Contributor Author

Added feature flags for both kubectl and the API Server for this alpha KEP. If we change it, I will update the KEP.

Contributor

Per-binary flags for wherever this feature is implemented sounds right to me. I don't have an opinion on whether the binaries can/should share the same flag name or not. I would probably give them different flag names in case anyone has infra that doesn't allow one flag to be set differently on different binaries, but I'll admit I don't know this stuff that well.

@seans3 seans3 requested a review from aojea May 17, 2023 23:55
we propose creating a `FallbackExecutor` to address client/server version skew. The
`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
to the legacy `SPDYExecutor` version four (`v4.channel.k8s.io`), if the upgrade is
Member

Why does it necessarily fall back to version four (v4.channel.k8s.io)? Since we are using the legacy SPDY executor, if a very old API server only supports v3.channel.k8s.io, can the SPDY executor fall back to version three?

Member

Only n-3 versions are supported, so we should focus only on the supported versions.

Contributor Author

So the current SPDY implementation asks for four versions, with v4.channel.k8s.io being the highest priority (v3 next, etc.). The request headers look like:

I0518 16:16:00.390959  633715 round_trippers.go:469] Request Headers:
I0518 16:16:00.390963  633715 round_trippers.go:473]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0518 16:16:00.390967  633715 round_trippers.go:473]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0518 16:16:00.390969  633715 round_trippers.go:473]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0518 16:16:00.390972  633715 round_trippers.go:473]     X-Stream-Protocol-Version: channel.k8s.io

Since v4 was implemented many releases ago, it is almost always the version selected in the handshake.

I do not think we should change anything about the current SPDY implementation. In other words, I believe it would be fine to keep the request for these four versions (in which it is almost certain the v4 version is chosen).


SPDY data stream between `kubectl` and the API Server by conditionally adding a
`StreamTranslatorProxy` at the API Server. If the request is for a WebSocket upgrade
of version `v5.channel.k8s.io`, the `UpgradeAwareProxy` will delegate to the
`StreamTranslatorProxy`. This translation proxy terminates the WebSocket connection,
Member
@ardaguclu ardaguclu May 18, 2023

Why does the StreamTranslatorProxy terminate the WebSocket connection? If the WebSocket connection is terminated between kubectl and the API server, how will data transfer be done between these two components?

Contributor Author
@seans3 seans3 May 18, 2023

When I say the connection is terminated in the StreamTranslatorProxy, I do not mean that the data messages stop there. I just mean that the WebSockets data messages are received there by the WebSockets server. These received data messages are then proxied onto a SPDY client to continue upstream (to Kubelet and then the Container Runtime). So the StreamTranslatorProxy proxies the data by translating the streamed data messages from WebSockets to SPDY.
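Conceptually, the translation amounts to pumping bytes between the server-side WebSocket and the upstream SPDY client connection. A heavily simplified sketch under that assumption (the real translator also has to demultiplex stdin/stdout/stderr into separate streams):

import "io"

// translate pumps data in both directions between the terminated WebSocket
// stream (kubectl side) and the upstream SPDY stream (Kubelet side).
func translate(websocketStream, spdyStream io.ReadWriter) {
	go io.Copy(spdyStream, websocketStream) // kubectl -> Kubelet direction
	io.Copy(websocketStream, spdyStream)    // Kubelet -> kubectl direction
}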

- `StreamTranslatorProxy` successfully integrated into the `UpgradeAwareProxy` behind an API Server feature flag.
- Initial unit tests completed and enabled
- Initial integration tests completed and enabled
- Initial e2e tests completed and enabled
Member

To ensure we preserve feature parity with SPDY, we might need to add test cases not only for WebSockets but also for existing SPDY, and I think that deserves to be mentioned in alpha.

Contributor Author

I've added another note to supplement SPDY integration and e2e tests to ensure no regressions. Please let me know what you think.


Comment on lines 203 to 209
The SPDY streaming protocol has been deprecated for around eight years, and by now
many proxies, gateways, and load-balancers do not support SPDY. Our effort to modernize
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Also mention that this is a feature of SPDY that was not adopted by HTTP 2.0?

Contributor Author

I've updated the Motivation section with a mention of why HTTP/2.0 was not considered. NOTE There is an Alternatives section in the KEP where we intend to more fully explore why HTTP/2.0 is inadequate.

Comment on lines +254 to +263
the current API Server proxy to translate the `kubectl` WebSockets data stream to
a SPDY upstream connection. In this way, the cluster components upstream from the
Contributor

Are we planning to leave the API Server->Kubelet leg as SPDY forever? I'd strongly prefer that our long term plan be to eliminate SPDY entirely.

Member
@aojea aojea May 18, 2023

Indeed, we should detail the long-term plan ... but this implies modifying the container runtimes; the last change I remember that depended on those took 2 years.

Contributor

Looks like there are two remaining legs:

  • API Server -> Kubelet - I'd like a long term plan for this
  • Kubelet -> CRI - ?? I agree that this is likely to leave us supporting two protocols indefinitely. Explaining this clearly would be good enough for me.

Contributor Author
@seans3 seans3 May 19, 2023

Definitely, the plan is to incrementally transition each of the SPDY communication legs to WebSockets. We do not intend to support two protocols indefinitely. So I've added a section describing the transitions: Future: Kubelet StreamTranslatorProxy. This new section details the straightforward process to transition the next communication leg, as well as the plan to transition the final leg from Kubelet to the Container Runtimes. Please let me know what you think of this new section.

As a side note, I'm pretty sure @deads2k already agreed to this incremental transition plan during the May 3rd SIG API Machinery meeting. If I recall correctly, he agreed to allow us to initially target the communication leg from kubectl to the API Server, provided we transition the next leg (from the API Server to Kubelet) after the initial leg goes GA. If anyone else has a different recollection of that exchange, please let me know.

Contributor

I think the ask here is to have an explicit plan for how the other two legs will be handled. Will those be done through separate KEPs (if so, I'd suggest opening issues right away and linking them in the proposal as next steps)? Or what's the rough plan, if other?

Contributor Author

The two legs will be straightforward to implement, since both the API Server and the kubelet proxy streaming data using the exact same code (UpgradeAwareProxy). Our plan for implementing a WebSocket communication path is to delegate from the UpgradeAwareProxy to a proxy which translates WebSocket data into upstream SPDY data; this is our StreamTranslatorProxy. BTW, we have a proof-of-concept PR which implements this StreamTranslatorProxy. In order to extend the WebSocket communication path from the API Server to the kubelet, we will only have to move the StreamTranslatorProxy delegation code from the API Server to the kubelet. We will not need separate KEPs.

Contributor
@jpbetz jpbetz Jun 1, 2023

That's great news. From a feature tracking perspective having everything in one KEP is ideal. It's totally fine to break implementation work across releases like @soltysh described. We don't even need to know, up front, what goes into what exact release, we just need to list out all the changes that we want to have in place before we consider beta graduation. Then you're free to tackle as many as you can get to in each release.

Contributor

That's great news. Just call it out in the proposal (even a high-level mention that kube-apiserver -> kubelet will be handled at beta, for example) at which stages you're planning to deal with the other legs; this will allow everyone to understand the scope of changes.

Contributor

You can call it out as 3 stages:

  1. communication between kubectl and kube-apiserver
  2. communication between kube-apiserver and kubelet
  3. communication between kubelet and CRI

Step 1 will be implemented during alpha (name the kubectl commands here) and beta (kubectl port-forward). Steps 2 and 3 will be implemented at beta (or whatever is appropriate).

Remember it's not set in stone, we can always update this document as we go, but it's good to write down the initial plan, it'll be easier for everyone to understand the progress and you can always update to match the actual progress.

Comment on lines 381 to 411
`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
Contributor

What if we end up needing a new channel version for SPDY? I'm a bit uncomfortable sharing the version numbers across protocols. Could we instead have something more like X-Stream-Protocol-Version: v1.websocket-channel.k8s.io ?

Contributor

Or just v1.websocket.k8s.io?

`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
to the legacy `SPDYExecutor` version four (`v4.channel.k8s.io`), if the upgrade is
unsuccessful. Note that this mechanism can require two request/response trips instead
Contributor

Why can't content negotiation handle this in a single request? Client could send:

POST /api/v1/…/pods/nginx/exec?command=<CMD>... HTTP/1.1
Connection: Upgrade
Upgrade: SPDY/3.1
X-Stream-Protocol-Version: v5.channel.k8s.io (or v1.websocket-channel.k8s.io)
X-Stream-Protocol-Version: v4.channel.k8s.io
...

Server responds with the upgrade path it can support, which could be either:

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
...

or

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: SPDY/3.1
X-Stream-Protocol-Version: v4.channel.k8s.io

?

Contributor

Ah, I see the reasons below. I think it's fine if clients choose to make two round trips. The server should be able to handle an upgrade request for both protocols and then pick the preferred protocol.
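For reference, a hedged sketch of what that two-round-trip fallback could look like on the client side (the real client-go FallbackExecutor may differ; shouldFallback is a placeholder for detecting a rejected WebSocket upgrade):

package fallback

import (
	"k8s.io/client-go/tools/remotecommand"
)

// fallbackExecutor tries the WebSockets (v5.channel.k8s.io) executor first and
// only re-issues the request with the legacy SPDY executor when the upgrade
// itself was rejected.
type fallbackExecutor struct {
	primary, secondary remotecommand.Executor
	shouldFallback     func(error) bool
}

func (f *fallbackExecutor) Stream(opts remotecommand.StreamOptions) error {
	err := f.primary.Stream(opts) // first round trip: WebSockets
	if err != nil && f.shouldFallback(err) {
		return f.secondary.Stream(opts) // second round trip: legacy SPDY
	}
	return err
}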

Comment on lines 232 to 240
2. We do not initially intend to modify *any* of the SPDY communication paths between
other cluster components other than `kubectl` and the API Server. In other words,
the current SPDY communication paths between the API Server and Kubelet, or Kubelet
and the Container Runtime will not change.
Contributor

I'd like enough design detail to show that we will be able to switch kubectl port-forward to websockets in the future. Incremental progress toward websockets is great. Having two protocols indefinitely is an outcome I want to avoid.

Comment on lines 227 to 235
1. We do not intend to initially transition the current SPDY bi-directional streaming
protocol for `kubectl port-forward`. This command requires a different subprotocol
than the previously stated three `kubectl` commands (which use the `RemoteCommand`
subprotocol). We intend to transition `kubectl port-forward` in a future release.
Contributor

Why are we excluding this? Eliminating all SPDY on one connection leg seems like a good minimum for this KEP because it allows clients and intermediaries to stop caring about SPDY.

Contributor Author
@seans3 seans3 May 19, 2023

This is an excellent question. Simply put, we are excluding port-forward from this release in order to reduce the scope of our effort and finally make some tangible progress. This issue has been outstanding for over eight years (since it was first identified in 2015). We have confidence that kubectl exec, cp, and attach (which use the RemoteCommand subprotocol) can be completed for alpha this release because we've already mostly proven it's possible with a proof-of-concept PR: kubernetes/kubernetes#116778. But progress on port-forward has been more difficult to achieve. Clayton Coleman abandoned and closed an attempt to implement port-forward over WebSockets back in 2017: kubernetes/kubernetes#50428. It appears he realized the scope of that effort would be much larger than he initially thought. It is doubtful that we would be able to implement both subprotocols over WebSockets this release. To be clear, it is definitely our intent to implement both subprotocols over WebSockets. But our strategy is to not let the perfect be the enemy of the good. We believe getting 3 out of 4 kubectl streaming commands out sooner is better than attempting to get out all 4 later. If we do not agree on this point, I think we should add an UNRESOLVED tag about this issue to the KEP, so we can still merge this initial provisional KEP. Please let me know what you think.

@fedebongio fedebongio May 24, 2023

I'm leaning toward starting with the initial proposal to avoid scope creep: in the end it's alpha, we could decide the minimum set of functionality we want, right? Especially if Sean thinks that port-forward could delay this to the point of not making it in the next release. It's been 8 years; I'd prefer to be able to add incremental value step by step, rather than waiting longer to have it complete.

Contributor

I'll second Fede and Joe here: the doc can point out all of them, but implementation can be spread across several releases. We can explicitly point out that alpha1 will be 3 commands, alpha2 will be port-forward, and only once we have all of them will we transition to beta. Alternatively, we can say that port-forward will block GA of this feature. This is fine as well, imo.

Contributor Author

In order to get 3 out of 4 commands into users' hands sooner rather than later, I think your second proposal is better; we can say that port-forward blocks GA (but not beta).

Contributor Author
@seans3 seans3 Jun 5, 2023

I've added a section about how we intend to iterate (and the maturity levels) for the PortForward as well as RemoteCommand subprotocols. This section is titled: Future: Port Forward Subprotocol. Please let me know what you think.

@seans3 seans3 force-pushed the websockets-kep branch 4 times, most recently from 8c9de08 to 6ccdf9e May 19, 2023 03:44
@seans3 seans3 requested a review from soltysh June 6, 2023 07:36
Contributor
@soltysh soltysh left a comment

Same minor nits, but overall this looks great for alpha, thx @seans3 for patience 😅

/lgtm
from sig-cli pov

`Anthos Connect Gateway` to communicate with (some) Google clusters, users must
run `gcloud` specific commands which update the `kubeconfig` file to point to the
gateway. Afterwards, users can run streaming commands such as `kubectl exec ...`,
and the commands will transparently use the now supported WebSockets protocol.
Contributor

👍

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 6, 2023
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 7, 2023
@seans3 seans3 force-pushed the websockets-kep branch 3 times, most recently from a9a75ca to 9fbc7ec June 7, 2023 18:19
@jpbetz
Contributor

jpbetz commented Jun 7, 2023

@deads2k PRR LGTM for alpha.

for `kubectl port-forward` for the communication leg between `kubectl` and the API
Server (alpha in v1.29).

3. Extend the WebSockets communication leg from the API Server to Kubelet
Contributor

Given detection in the kube-apiserver that is similar to kubectl, the kube-apiserver to kubelet connection could switch using the same mechanism. The auto-detection is important since kubelets can skew heavily, but that detection needs to happen regardless.

Is there a reason similar detection and patterns cannot work for the kube-apiserver to kubelet leg?

Contributor Author

I have modified the KEP to schedule extending WebSocket to the Kubelet from Post-GA to Pre-GA. Please let me know if this addresses your concern.

You are correct that the mechanism for delegating to the StreamTranslatorProxy (which implements the WebSockets) is the same in the API Server as it is in the Kubelet, since both the API Server and the Kubelet currently use the same UpgradeAwareProxy. Extending WebSockets to the Kubelet will involve 1) adding the same conditional delegation to the StreamTranslatorProxy into the Kubelet, and 2) adding a step in the API Server to see if the WebSockets upgrade succeeded (101 Switching Protocols), signaling that the Kubelet supports the StreamTranslatorProxy. I think this step should be pretty straightforward.
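A hedged sketch of that second step, i.e. how the API Server could detect that the Kubelet accepted the WebSockets upgrade (the helper name is made up for illustration):

import (
	"net/http"
	"strings"
)

// kubeletAcceptedWebSocketUpgrade reports whether the upstream response is a
// "101 Switching Protocols" answer for a WebSocket upgrade, signaling that the
// Kubelet supports the StreamTranslatorProxy path.
func kubeletAcceptedWebSocketUpgrade(resp *http.Response) bool {
	return resp.StatusCode == http.StatusSwitchingProtocols &&
		strings.EqualFold(resp.Header.Get("Upgrade"), "websocket")
}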

@jpbetz
Contributor

jpbetz commented Jun 8, 2023

/lgtm
/approve

/hold
For @deads2k to review apiserver->kubelet pre-GA requirements and approve PRR

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 8, 2023
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 8, 2023
@seans3 seans3 requested a review from deads2k June 8, 2023 06:38
@aojea
Member

aojea commented Jun 8, 2023

/lgtm

Some context about the decision for the Header name, copied from the slack discussion

I'm rereading the RFC https://www.rfc-editor.org/rfc/rfc6455#section-1.9 and it seems we can use Sec-WebSocket-Protocol or Sec-WebSocket-Extensions.
The problem is that it is clear that any value used for those headers MUST be registered in the IANA registry, and I don't think we should get into that business. I think we can propose the addition of a new header based on the spirit of Sec-WebSocket-Protocol:
Optionally, a |Sec-WebSocket-Protocol| header field, with a list
of values indicating which protocols the client would like to
speak, ordered by preference.
https://www.rfc-editor.org/rfc/rfc6455#section-1.9

Something like: K8s-Websocket-Protocol would be a new header field with the same semantics as Sec-WebSocket-Protocol defined in https://www.rfc-editor.org/rfc/rfc6455#section-1.9, allowing the client to specify the subprotocol used by the server; in this case, stream-translate would indicate to the apiserver that it should translate the stream to SPDY …

Also, independently of the KEP we should merge the websocket protocol fixes, since we'll need them to be revendored by the runtimes, so the sooner the better:

kubernetes/kubernetes@6ae9ab3
kubernetes/kubernetes@93ab8d1

@deads2k
Contributor

deads2k commented Jun 9, 2023

I have modified the KEP to schedule extending WebSocket to the Kubelet from Post-GA to Pre-GA. Please let me know if this addresses your concern.

Excellent. No other concerns.

/approve
/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 9, 2023
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, jpbetz, seans3, soltysh


@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 9, 2023
@k8s-ci-robot k8s-ci-robot merged commit e2f4ea5 into kubernetes:master Jun 9, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.28 milestone Jun 9, 2023
@seans3 seans3 mentioned this pull request Jun 9, 2023