KEP-4006: Initial iteration of provisional WebSockets KEP #4016
Conversation
/cc @jpbetz
3. As releases increment, the probability of a WebSocket enabled `kubectl` communicating
with an older non-WebSocket enabled API Server decreases.

### Proposal: `v5.channel.k8s.io` subprotocol version
This is required to solve the existing gap in the implementation with half-close stdin, isn't it? kubernetes/kubernetes#89899
Yes--I believe the half-close stdin for WebSockets is a protocol increment. But I believe there may be a use of the v5 subprotocol version which may be more important, and I need your feedback on this.
There are several other WebSocket clients currently communicating with the API Server: 1) the OpenShift browser/javascript client, and 2) the Kubelet using WebSockets for watches, etc. So an important question becomes: how do we distinguish between these legacy WebSocket use cases and the new `kubectl` `RemoteCommand` use cases, which will need the new `StreamTranslatorProxy`? In our current proof-of-concept PR, we substituted the new `StreamTranslatorProxy` for the `UpgradeAwareProxy` when handling the `exec` subresource. I believe this is causing a lot of the e2e test failures. I believe we should integrate these two proxies as follows:
<Within tryUpgrade of UpgradeAwareProxy, examining HTTP Request>
if <Upgrade == WebSockets && version == v5.channel.k8s.io && WEBSOCKETS_FEATURE_FLAG>
    delegate request to StreamTranslatorProxy
else
    <continue with legacy code in UpgradeAwareProxy>
Importantly, this allows us to target only the `kubectl` `RemoteCommand` use cases with the `StreamTranslatorProxy`, and the other current WebSocket use cases (as well as the other SPDY use cases) are unaffected.
What do you think?
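To make the pseudocode above concrete, here is a rough Go sketch of what that branch could look like. The `upgradeHandler` type and the helper `isWebSocketV5Request` are hypothetical names used only for illustration, and the sketch assumes the subprotocol is offered via the standard `Sec-WebSocket-Protocol` header; the real change would live inside the existing `UpgradeAwareProxy` upgrade path:

```go
package proxy

import (
	"net/http"
	"strings"
)

// upgradeHandler is a hypothetical stand-in for the proxy that handles
// upgrade requests (in reality, the UpgradeAwareProxy).
type upgradeHandler struct {
	websocketsFeatureEnabled bool
	translatorProxy          http.Handler // e.g. the StreamTranslatorProxy
	legacyUpgrade            func(http.ResponseWriter, *http.Request)
}

// isWebSocketV5Request reports whether the request is a WebSocket upgrade
// that offers the new v5.channel.k8s.io subprotocol. Illustrative helper only.
func isWebSocketV5Request(req *http.Request) bool {
	if !strings.EqualFold(req.Header.Get("Upgrade"), "websocket") {
		return false
	}
	for _, header := range req.Header.Values("Sec-WebSocket-Protocol") {
		for _, sub := range strings.Split(header, ",") {
			if strings.TrimSpace(sub) == "v5.channel.k8s.io" {
				return true
			}
		}
	}
	return false
}

// tryUpgrade sketches the proposed branch: delegate v5 WebSocket requests to
// the stream translator, otherwise fall through to the legacy upgrade path.
func (h *upgradeHandler) tryUpgrade(w http.ResponseWriter, req *http.Request) {
	if h.websocketsFeatureEnabled && isWebSocketV5Request(req) {
		h.translatorProxy.ServeHTTP(w, req)
		return
	}
	h.legacyUpgrade(w, req)
}
```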
Since we are changing the bi-directional protocol, I agree that a new subprotocol (`v5.channel.k8s.io`) would be better for the sake of maintainability. It also helps manage the fallback mechanism, as noted above.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: Websockets
we should have a more meaningful name, but I'm bad at names. `KubectlWebsockets` or `ClientWebsockets`??
+1. If there is going to be a separate KEP for `kubectl port-forward`, then maybe something like `ClientSubcommandWebsockets`?
EDIT: Or if we do include port-forward (I've opened a separate comment thread on that), maybe `WebsocketClientChannels` or something similar?
Added `ClientRemoteCommandWebsockets`.
# The following PRR answers are required at beta release
metrics:
  - my_feature_metric
an interesting metric would be to expose the number of failed connections and fallbacks, i.e., how many times the upgrade to WebSockets has to fall back
This is not currently possible in the apiserver, since each request is separate. There is currently no way to know that the second, fallback SPDY request is connected to a first WebSockets request. We can have metrics like `num_websocket_requests` and `num_websocket_failures`.
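As a rough illustration of the two counters suggested above (using the plain Prometheus client rather than the apiserver's component-base metrics wrappers, and with the metric names taken directly from this thread, not from the KEP):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Illustrative counters for the metrics discussed above; the names come from
// this comment thread and the actual KEP metrics may differ.
var (
	websocketRequests = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "num_websocket_requests",
		Help: "Number of streaming requests offering the WebSocket v5 subprotocol.",
	})
	websocketFailures = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "num_websocket_failures",
		Help: "Number of WebSocket upgrade attempts that failed (the client may then fall back to SPDY).",
	})
)

func init() {
	// Register both counters with the default registry.
	prometheus.MustRegister(websocketRequests, websocketFailures)
}
```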
<<[/UNRESOLVED]>>
```

<!-- DISCUSS why we're not using http/2.0. -->
This is because HTTP/2 does not expose any streaming capabilities, so we would have to implement all the streaming protocols again and maintain our own low-level HTTP/2 library
Yes. I just put a placeholder to remind myself to flesh this out in a future PR.
- `<package>`: `<date>` - `<test coverage>`

##### Integration tests
These are going to be important, since they allow us to exercise the fallback mechanism.
Agreed. I intend to submit separate PRs to outline the testing plan as well as the Production Readiness questions within the KEP. Submitting this initial iteration of the provisional KEP will allow others to more easily contribute to unimplemented sections.
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: Websockets
    components:
@aojea @ardaguclu @jpbetz Hoping to get feedback on the components requiring a feature flag.
- Do we need feature flags for both the API Server and `kubectl`?
- Can we get away with just a feature flag on `kubectl`? The code paths in the API Server (`StreamTranslatorProxy`) should not be reachable without the new `kubectl` requesting v5 of the `RemoteCommand` subprotocol.
- If we have feature flags for both components, can they be the same name?
I think the `v5.channel.k8s.io` subprotocol is already some kind of feature flag in the API server. Everything will work as-is as long as clients send requests with the current subprotocols (v4..v1), and the `v5.channel.k8s.io` subprotocol will be used in kubectl only when its feature flag is enabled.
summoning @liggitt , he is a natural for these things ;)
Added feature flags for both `kubectl` and the API Server for this alpha KEP. If we change it, I will update the KEP.
Per-binary flags for wherever this feature is implemented sound right to me. I don't have an opinion on whether the binaries can/should share the same flag name or not. I would probably give them different flag names in case anyone has infra that doesn't allow one flag to be set differently on different binaries, but I'll admit I don't know this stuff that well.
we propose creating a `FallbackExecutor` to address client/server version skew. The
`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
to the legacy `SPDYExecutor` version four (`v4.channel.k8s.io`), if the upgrade is
Why does it necessarily fall back to version four (`v4.channel.k8s.io`)? Since we are using the legacy SPDY executor, if a very old API server uses `v3.channel.k8s.io`, can the SPDY executor fall back to version three?
there are only n-3 versions supported; we should focus only on the supported versions
So the current SPDY implementation asks for four versions, with `v4.channel.k8s.io` being the highest priority (`v3` next, etc.). The request headers look like:
I0518 16:16:00.390959 633715 round_trippers.go:469] Request Headers:
I0518 16:16:00.390963 633715 round_trippers.go:473] X-Stream-Protocol-Version: v4.channel.k8s.io
I0518 16:16:00.390967 633715 round_trippers.go:473] X-Stream-Protocol-Version: v3.channel.k8s.io
I0518 16:16:00.390969 633715 round_trippers.go:473] X-Stream-Protocol-Version: v2.channel.k8s.io
I0518 16:16:00.390972 633715 round_trippers.go:473] X-Stream-Protocol-Version: channel.k8s.io
Since it has been so many releases since `v4` was implemented, this is almost always the version selected in the handshake.
I do not think we should change anything about the current SPDY implementation. In other words, I believe it would be fine to keep the request for these four versions (in which it is almost certain the `v4` version is chosen).
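For concreteness, here is a minimal, self-contained Go sketch of the proposed fallback behavior. The `Executor` interface mirrors the streaming contract in client-go's `remotecommand` package, but the `fallbackExecutor` type, its fields, and the `shouldFallback` hook are illustrative assumptions, not the final API:

```go
package remotecommand

import "context"

// Executor mirrors the client-go remotecommand streaming contract.
type Executor interface {
	StreamWithContext(ctx context.Context, options StreamOptions) error
}

// StreamOptions is elided here; in client-go it carries the stdio streams,
// TTY settings, and the terminal resize queue.
type StreamOptions struct{}

// fallbackExecutor sketches the proposed FallbackExecutor: try the WebSocket
// executor (v5.channel.k8s.io) first, and if the upgrade is rejected (for
// example by an older API server), retry with the legacy SPDY executor.
type fallbackExecutor struct {
	primary        Executor         // WebSocketExecutor
	secondary      Executor         // SPDYExecutor (v4.channel.k8s.io and below)
	shouldFallback func(error) bool // e.g. "was this an upgrade/handshake failure?"
}

func (f *fallbackExecutor) StreamWithContext(ctx context.Context, options StreamOptions) error {
	err := f.primary.StreamWithContext(ctx, options)
	if err != nil && f.shouldFallback(err) {
		// Second round trip: re-issue the request with the SPDY executor.
		return f.secondary.StreamWithContext(ctx, options)
	}
	return err
}
```

The design point this illustrates is that the fallback is driven entirely by the client, which is why (as the KEP text notes below) a failed WebSocket attempt can cost a second request/response trip.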
3. As releases increment, the probability of a WebSocket enabled `kubectl` communicating
with an older non-WebSocket enabled API Server decreases.

### Proposal: `v5.channel.k8s.io` subprotocol version
Since we are changing the bi-directional protocol, I agree that a new subprotocol (`v5.channel.k8s.io`) would be better for the sake of maintainability. It also helps manage the fallback mechanism, as noted above.
SPDY data stream between `kubectl` and the API Server by conditionally adding a
`StreamTranslatorProxy` at the API Server. If the request is for a WebSocket upgrade
of version `v5.channel.k8s.io`, the `UpgradeAwareProxy` will delegate to the
`StreamTranslatorProxy`. This translation proxy terminates the WebSocket connection,
Why does `StreamTranslatorProxy` terminate the WebSocket connection? If the WebSocket connection is terminated between kubectl and the API server, how will data transfer be done between these two components?
When I say the connection is terminated in the `StreamTranslatorProxy`, I do not mean that the data messages stop there. I just mean that the WebSockets data messages are received there by the WebSockets server. These received data messages are then proxied onto a SPDY client to continue upstream (to the Kubelet and then the Container Runtime). So the `StreamTranslatorProxy` proxies the data by translating the streamed data messages from WebSockets to SPDY.
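A minimal sketch of what that translation amounts to, with an illustrative `Stream` type standing in for both a WebSocket channel and a SPDY stream (the real proxy also has to map channel numbers to stream names, handle half-close semantics, propagate errors, etc.):

```go
package translator

import "io"

// Stream is a minimal stand-in for a single named data stream
// (stdin, stdout, stderr, ...) on either side of the proxy.
type Stream interface {
	io.ReadWriteCloser
}

// translate illustrates what "terminating" the WebSocket connection means:
// the proxy is the WebSocket *server* endpoint, but every data message it
// receives is immediately copied onto the matching SPDY stream of the
// upstream (Kubelet-bound) connection, and vice versa.
func translate(websocketStream, spdyStream Stream) {
	// downstream (kubectl, WebSocket) -> upstream (Kubelet, SPDY)
	go func() {
		defer spdyStream.Close()
		io.Copy(spdyStream, websocketStream)
	}()
	// upstream (SPDY) -> downstream (WebSocket)
	go func() {
		defer websocketStream.Close()
		io.Copy(websocketStream, spdyStream)
	}()
}
```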
- `StreamTranslatorProxy` successfully integrated into the `UpgradeAwareProxy` behind an API Server feature flag.
- Initial unit tests completed and enabled
- Initial integration tests completed and enabled
- Initial e2e tests completed and enabled
To ensure we preserve feature parity with what SPDY has, we might need to add test cases not only for WebSockets but also for existing SPDY, and I think that deserves being mentioned in alpha.
I've added another note to supplement SPDY integration and e2e tests to ensure no regressions. Please let me know what you think.
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: Websockets
    components:
I think the `v5.channel.k8s.io` subprotocol is already some kind of feature flag in the API server. Everything will work as-is as long as clients send requests with the current subprotocols (v4..v1), and the `v5.channel.k8s.io` subprotocol will be used in kubectl only when its feature flag is enabled.
The SPDY streaming protocol has been deprecated for around eight years, and by now
many proxies, gateways, and load-balancers do not support SPDY. Our effort to modernize
Also mention that this is a feature of SPDY that was not adopted by HTTP 2.0?
I've updated the `Motivation` section with a mention of why `HTTP/2.0` was not considered. NOTE: There is an `Alternatives` section in the KEP where we intend to more fully explore why `HTTP/2.0` is inadequate.
the current API Server proxy to translate the `kubectl` WebSockets data stream to
a SPDY upstream connection. In this way, the cluster components upstream from the
Are we planning to leave the API Server->Kubelet leg as SPDY forever? I'd strongly prefer that our long term plan be to eliminate SPDY entirely.
indeed we should detail the long-term plan ... but this implies modifying the container runtimes; the last change I remember that depended on those took 2 years
Looks like there are two remaining legs:
- API Server -> Kubelet: I'd like a long-term plan for this
- Kubelet -> CRI: ?? I agree that this is likely to leave us supporting two protocols indefinitely. Explaining this clearly would be good enough for me.
Definitely, the plan is to incrementally transition each of the SPDY communication legs to WebSockets. We do not intend to support two protocols indefinitely. So I've added a section describing the transitions: `Future: Kubelet StreamTranslatorProxy`. This new section details the straightforward process to transition the next communication leg, as well as the plan to transition the final leg from the Kubelet to the Container Runtimes. Please let me know what you think of this new section.
As a side note, I'm pretty sure @deads2k already agreed to this incremental transition plan during the May 3rd SIG API Machinery meeting. If I recall correctly, he agreed to let us initially target the communication leg from `kubectl` to the API Server, provided we transition the next leg (from the API Server to the Kubelet) after the initial leg goes GA. If anyone else has a different recollection of that exchange, please let me know.
I think the ask here is to have an explicit plan for how the other two legs will be handled. Will those be done through separate KEPs (if so, I'd suggest opening issues right away and linking them in the proposal as next steps)? Or what's the rough plan, if other?
The two legs will be straightforward to implement, since both the API Server and the kubelet proxy streaming data using the exact same code (`UpgradeAwareProxy`). Our plan for implementing a WebSocket communication path is to delegate from the `UpgradeAwareProxy` to a proxy which translates WebSocket data into upstream SPDY data; this is our `StreamTranslatorProxy`. BTW, we have a proof-of-concept PR which implements this `StreamTranslatorProxy`. In order to extend the WebSocket communication path from the API Server to the kubelet, we will only have to move the `StreamTranslatorProxy` delegation code from the API Server to the kubelet. We will not need separate KEPs.
That's great news. From a feature tracking perspective, having everything in one KEP is ideal. It's totally fine to break implementation work across releases like @soltysh described. We don't even need to know, up front, what goes into which exact release; we just need to list out all the changes that we want to have in place before we consider beta graduation. Then you're free to tackle as many as you can get to in each release.
That's great news, just call out in the proposal (only a high-level mention that kube-apiserver -> kubelet will be handled at beta, for example, etc.) at which stages you're planning to deal with the other legs; this will let everyone see the scope of changes.
You can call it out as 3 stages:
- communication between kubectl and kube-apiserver
- communication between kube-apiserver and kubelet
- communication between kubelet and CRI
Step 1 will be implemented during alpha (name the kubectl commands here) and beta (kubectl port-forward). Step 2 and step 3 will be implemented at beta (or whatever is appropriate).
Remember, it's not set in stone; we can always update this document as we go, but it's good to write down the initial plan. It'll be easier for everyone to understand the progress, and you can always update it to match the actual progress.
`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
What if we end up needing a new channel version for SPDY? I'm a bit uncomfortable sharing the version numbers across protocols. Could we instead have something more like `X-Stream-Protocol-Version: v1.websocket-channel.k8s.io`?
Or just `v1.websocket.k8s.io`?
`FallbackExecutor` first attempts to upgrade the connection to WebSockets
version five (`v5.channel.k8s.io`) with the `WebSocketExecutor`, then falls back
to the legacy `SPDYExecutor` version four (`v4.channel.k8s.io`), if the upgrade is
unsuccessful. Note that this mechanism can require two request/response trips instead
Why can't content negotiation handle this in a single request? Client could send:
POST /api/v1/…/pods/nginx/exec?command=<CMD>... HTTP/1.1
Connection: Upgrade
Upgrade: SPDY/3.1
X-Stream-Protocol-Version: v5.channel.k8s.io (or v1.websocket-channel.k8s.io)
X-Stream-Protocol-Version: v4.channel.k8s.io
...
Server responds with the upgrade path it can support, which could be either:
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
...
or
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: SPDY/3.1
X-Stream-Protocol-Version: v4.channel.k8s.io
?
Ah, I see the reasons below. I think it's fine if clients choose to make two round trips. The server should be able to handle an upgrade request for both protocols and then pick the preferred protocol.
2. We do not initially intend to modify *any* of the SPDY communication paths between
other cluster components other than `kubectl` and the API Server. In other words,
the current SPDY communication paths between the API Server and Kubelet, or Kubelet
and the Container Runtime will not change.
I'd like enough design detail to show that we will be able to switch `kubectl port-forward` to WebSockets in the future. Incremental progress toward WebSockets is great. Having two protocols indefinitely is an outcome I want to avoid.
1. We do not intend to initially transition the current SPDY bi-directional streaming
protocol for `kubectl port-forward`. This command requires a different subprotocol
than the previously stated three `kubectl` commands (which use the `RemoteCommand`
subprotocol). We intend to transition `kubectl port-forward` in a future release.
Why are we excluding this? Eliminating all SPDY on one connection leg seems like a good minimum for this KEP because it allows clients and intermediaries to stop caring about SPDY.
This is an excellent question. Simply put, we are excluding `port-forward` from this release in order to reduce the scope of our effort and finally make some tangible progress. This issue has been outstanding for over eight years (since it was first identified in 2015). We have confidence that `kubectl exec`, `cp`, and `attach` (which use the `RemoteCommand` subprotocol) can be completed for alpha this release, because we've already mostly proven it's possible with a proof-of-concept PR: kubernetes/kubernetes#116778. But progress on `port-forward` has been more difficult to achieve. Clayton Coleman abandoned and closed an attempt to implement `port-forward` over WebSockets back in 2017: kubernetes/kubernetes#50428. It appears he realized the scope of that effort would be much larger than he initially thought. It is doubtful that we would be able to implement both subprotocols over WebSockets this release. To be clear, it is definitely our intent to implement both subprotocols over WebSockets. But our strategy is to not let the perfect be the enemy of the good. We believe getting 3 out of 4 `kubectl` streaming commands out sooner is better than attempting to get out all 4 later. If we do not agree on this point, I think we should add an `UNRESOLVED` tag about this issue to the KEP, so we can still merge this initial provisional KEP. Please let me know what you think.
I'm leaning toward starting with the initial proposal to avoid scope creep: in the end it's Alpha, we could decide the minimum set of functionality we want, right? Especially if Sean thinks that port-forward could delay this to the point of not making it in the next release. It's been 8 years; I'd prefer to be able to add incremental value step by step, rather than waiting longer to have it complete.
I'll second Fede and Joe here: the doc can point out all of them, but implementation can be spread across several releases. We can explicitly point out that alpha1 will be 3 commands, alpha2 will be port-forward, and only once we have all of them will we transition to beta. Alternatively, we can say that port-forward will block GA of this feature. This is fine as well, imo.
In order to get 3 out of 4 commands into users' hands sooner rather than later, I think your second proposal is better; we can say that port-forward blocks GA (but not beta).
I've added a section about how we intend to iterate (and the maturity levels) for the `PortForward` as well as `RemoteCommand` subprotocols. This section is titled `Future: Port Forward Subprotocol`. Please let me know what you think.
Same minor nits, but overall this looks great for alpha, thx @seans3 for patience 😅
/lgtm from sig-cli pov
`Anthos Connect Gateway` to communicate with (some) Google clusters, users must
run `gcloud` specific commands which update the `kubeconfig` file to point to the
gateway. Afterwards, users can run streaming commands such as `kubectl exec ...`,
and the commands will transparently use the now supported WebSockets protocol.
👍
@deads2k PRR LGTM for alpha.
for `kubectl port-forward` for the communication leg between `kubectl` and the API
Server (alpha in v1.29).

3. Extend the WebSockets communication leg from the API Server to Kubelet
Given detection in the kube-apiserver that is similar to kubectl, the kube-apiserver to kubelet connection could switch using the same mechanism. The auto-detection is important since kubelets can skew heavily, but that detection needs to happen regardless.
Is there a reason similar detection and patterns cannot work for kube-apiserver to kubelet?
I have modified the KEP to schedule extending WebSockets to the Kubelet from `Post-GA` to `Pre-GA`. Please let me know if this addresses your concern.
You are correct that the mechanism for delegating to the `StreamTranslatorProxy` (which implements the WebSockets) is the same in the API Server as it is in the Kubelet, since both the API Server and the Kubelet currently use the same `UpgradeAwareProxy`. Extending WebSockets to the Kubelet will involve 1) adding the same conditional delegation to the `StreamTranslatorProxy` into the Kubelet, and 2) adding a step in the API Server to see if the WebSockets upgrade succeeded (101 Switching Protocols), signaling that the Kubelet supports the `StreamTranslatorProxy`. I think this step should be pretty straightforward.
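A small illustrative sketch of the detection in step 2; the helper name is hypothetical and the check is simply "did the upstream accept a WebSocket upgrade?":

```go
package proxy

import (
	"net/http"
	"strings"
)

// kubeletAcceptedWebSocketUpgrade reports whether the Kubelet's response to
// the upgrade request indicates it supports the WebSocket path: a
// 101 Switching Protocols status with an "Upgrade: websocket" header.
// Anything else means the API Server should keep using SPDY for that Kubelet.
func kubeletAcceptedWebSocketUpgrade(resp *http.Response) bool {
	return resp.StatusCode == http.StatusSwitchingProtocols &&
		strings.EqualFold(resp.Header.Get("Upgrade"), "websocket")
}
```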
/lgtm
/hold
/lgtm Some context about the decision for the Header name, copied from the slack discussion
Also, independently of the KEP we should merge the websocket protocol fixes, since we'll need them to be revendored by the runtimes, so the sooner the better.
Excellent. No other concerns. /approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: deads2k, jpbetz, seans3, soltysh.
Issue link: Transition from SPDY to WebSockets #4006
Implements all alpha sections including
Adds initial reviewers and approvers