Add Proxy #837
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master     #837      +/-   ##
==========================================
- Coverage   69.97%   67.88%   -2.10%
==========================================
  Files          59       61       +2
  Lines        4140     4269     +129
==========================================
+ Hits         2897     2898       +1
- Misses       1243     1371     +128
```

Continue to review full report at Codecov.
Signed-off-by: kazk <[email protected]>
(force-pushed from 394dace to 6187987)
oof, yeah, this is pretty complicated! Thanks for raising what you have here. The general plan sounds reasonable, and the default client should use something like this. I am maintaining a local test setup for this (same as in #438 (comment)) which, if it's of interest, currently fails with the logs below.
New example with default features and this diff against the linked setup:

```diff
diff --git examples/custom_client_proxy.rs examples/custom_client_proxy.rs
index 7917612..b74b870 100644
--- examples/custom_client_proxy.rs
+++ examples/custom_client_proxy.rs
@@ -50,7 +50,7 @@ async fn main() -> anyhow::Result<()> {
     let tls = config.native_tls_connector()?;
     let mut http = HttpConnector::new();
     http.enforce_http(false);
-    let proxy_url = "http://localhost:5000".parse::<Uri>().unwrap();
+    let proxy_url = config.proxy_url.clone().expect("needs a proxy_url set in kubeconfig");
     ProxyConnector::native_tls(proxy_url, http, tls)
 };
```
```
2022-02-20T16:45:13.229918Z TRACE tower::buffer::worker: worker polling for next message
2022-02-20T16:45:13.229916Z TRACE tower::buffer::service: sending request to buffer worker
2022-02-20T16:45:13.230081Z TRACE tower::buffer::worker: worker polling for next message
2022-02-20T16:45:13.230109Z TRACE tower::buffer::worker: processing new request
2022-02-20T16:45:13.230130Z TRACE tower::buffer::worker: resumed=false worker received request; waiting for service readiness
2022-02-20T16:45:13.230158Z DEBUG tower::buffer::worker: service.ready=true processing request
2022-02-20T16:45:13.230286Z DEBUG request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: tower_http::trace::on_request: started processing request
2022-02-20T16:45:13.230352Z TRACE tower::buffer::worker: returning response future
2022-02-20T16:45:13.230380Z TRACE tower::buffer::worker: worker polling for next message
2022-02-20T16:45:13.230432Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: hyper::client::pool: checkout waiting for idle connection: ("https", 127.0.0.1:6443)
2022-02-20T16:45:13.230509Z DEBUG request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: kube_client::client::proxy: starting new connection: https://127.0.0.1:6443/
2022-02-20T16:45:13.230610Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: hyper::client::connect::http: Http::connect; scheme=Some("socks5"), host=Some("0.0.0.0"), port=Some(Port(3126))
2022-02-20T16:45:13.230685Z DEBUG request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: hyper::client::connect::http: connecting to 0.0.0.0:3126
2022-02-20T16:45:13.230815Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: mio::poll: registering event source with poller: token=Token(1), interests=READABLE | WRITABLE
2022-02-20T16:45:13.230938Z DEBUG request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: hyper::client::connect::http: connected to 0.0.0.0:3126
2022-02-20T16:45:13.230985Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: kube_client::client::proxy: tunneling HTTPS over proxy
2022-02-20T16:45:13.231137Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: mio::poll: deregistering event source from poller
2022-02-20T16:45:13.231216Z TRACE request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: hyper::client::pool: checkout dropped for ("https", 127.0.0.1:6443)
2022-02-20T16:45:13.231304Z ERROR request{method=GET uri=https://127.0.0.1:6443/api/v1/namespaces/default/configmaps? version=HTTP/1.1}: tower_http::trace::on_failure: response failed classification=Error: error trying to connect: unexpected eof while tunneling latency=1 ms
2022-02-20T16:45:13.231432Z TRACE tower::buffer::worker: buffer already closed
Error: HyperError: error trying to connect: unexpected eof while tunneling

Caused by:
    0: error trying to connect: unexpected eof while tunneling
    1: unexpected eof while tunneling
```

This sounds expected atm with outstanding tasks (k3s cluster with standard client certs). At any rate, happy to help out as much as I can here.
Your test setup is a SOCKS proxy, right? That's not implemented at all yet. By the way, I just tried:

```shell
$ HTTPS_PROXY=:5000 kubectl get namespaces --insecure-skip-tls-verify
error: You must be logged in to the server (Unauthorized)
```

So, starting
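Since the default client is expected to honor `HTTPS_PROXY`/`HTTP_PROXY`, here is a minimal sketch of the usual lookup precedence. The `proxy_from` helper and the injected lookup closure are illustrative assumptions, not kube API:

```rust
// Hypothetical helper (not part of kube): pick a proxy URL from the
// conventional environment variables, preferring HTTPS_PROXY over
// HTTP_PROXY. The lookup is injected so precedence can be tested
// without mutating the real process environment.
fn proxy_from<F: Fn(&str) -> Option<String>>(get: F) -> Option<String> {
    ["HTTPS_PROXY", "https_proxy", "HTTP_PROXY", "http_proxy"]
        .iter()
        .find_map(|k| get(k))
        .filter(|v| !v.is_empty())
}

fn main() {
    // HTTPS_PROXY wins when both are set.
    let env = |k: &str| match k {
        "HTTPS_PROXY" => Some("socks5://localhost:5000".to_string()),
        "HTTP_PROXY" => Some("http://localhost:3128".to_string()),
        _ => None,
    };
    assert_eq!(proxy_from(env).as_deref(), Some("socks5://localhost:5000"));
    // Against the real environment it would be:
    let _ = proxy_from(|k| std::env::var(k).ok());
    println!("ok");
}
```

A real implementation would also need to honor `NO_PROXY` and lowercase variants consistently with `kubectl`; this only shows the precedence idea.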
SOCKS5 works with a custom client:

```rust
let mut config = Config::infer().await?;
// config.accept_invalid_certs = true;
let connector = {
    let mut http = HttpConnector::new();
    http.enforce_http(false);
    let proxy = hyper_socks2::SocksConnector {
        proxy_addr: Uri::from_static("socks5://localhost:5000"),
        auth: None,
        connector: http,
    };
    let tls = tokio_native_tls::TlsConnector::from(config.native_tls_connector()?);
    hyper_tls::HttpsConnector::from((proxy, tls))
};
let service = ServiceBuilder::new()
    .layer(config.base_uri_layer())
    .service(hyper::Client::builder().build(connector));
let client = Client::new(service, config.default_namespace);
```

We can provide an optional layer for the proxy, like:

```rust
let socks_proxy_layer = config.proxy_url.as_ref().and_then(|proxy_url| {
    (proxy_url.scheme_str() == Some("socks5")).then(|| {
        tower::layer::layer_fn(|connector| hyper_socks2::SocksConnector {
            proxy_addr: proxy_url.clone(),
            auth: None,
            connector,
        })
    })
});
```

To make this work for the default client, we'll need to clean up the internals to stack:

```rust
let connector = ServiceBuilder::new()
    .layer(https_layer)
    .option_layer(socks_proxy_layer)
    .service(http_connector);
// hyper_tls::HttpsConnector<
//     tower::util::Either<
//         hyper_socks2::SocksConnector<hyper::client::HttpConnector>,
//         hyper::client::HttpConnector
//     >
// >
```

This can then be used to build:

```rust
let service = ServiceBuilder::new()
    .layer(config.base_uri_layer())
    .service(hyper::Client::builder().build(connector));
// kube::client::middleware::BaseUri<
//     hyper::Client<
//         hyper_tls::HttpsConnector<
//             tower::util::Either<
//                 hyper_socks2::SocksConnector<hyper::client::HttpConnector>,
//                 hyper::client::HttpConnector
//             >
//         >,
//         _
//     >
// >
let client = Client::new(service, config.default_namespace);
```
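The dispatch above hinges entirely on the proxy URL's scheme. As a std-only illustration of that check (the `is_socks5` helper is hypothetical; the snippet above uses `http::Uri::scheme_str` directly):

```rust
// Illustrative stand-in for the scheme check: the SOCKS layer should
// only wrap the connector when the configured proxy is socks5://.
fn is_socks5(proxy_url: &str) -> bool {
    proxy_url
        .split_once("://")
        .map(|(scheme, _)| scheme.eq_ignore_ascii_case("socks5"))
        .unwrap_or(false)
}

fn main() {
    assert!(is_socks5("socks5://localhost:5000"));
    assert!(!is_socks5("http://localhost:3128"));
    assert!(!is_socks5("localhost:5000")); // no scheme at all
    println!("ok");
}
```

When the check is false, `option_layer` passes the plain connector through as the other arm of `tower::util::Either`, so the non-proxy path stays untouched.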
Ah, this is great! Yeah, ssh dynamic forward is a SOCKS5 proxy, and I forgot about this. So only I guess Exposing an either would make the EDIT: I guess the
Closing this experimental PR because:
We just need to make the default client use
Motivation

Add proxy support. The default client should support enabling a proxy with environment variables (`HTTPS_PROXY`, `HTTP_PROXY`) or `config.proxy_url`. Custom clients should be able to do the same.

Solution

Provide `ProxyConnector` to add this functionality, and change the default client to optionally use it.

Status

PoC. Borrowing lots of code from `reqwest` to experiment with. I don't remember the details, but I couldn't make `hyper-proxy` work. If this turns out to be general enough, it might make sense to make it a crate. Using it with `mitmproxy` kind of works. See the comment in the included example.

TODO

- Try `hyper-proxy` again
- ~~Figure out how to handle client certs. Currently starting~~ Authenticating with client certs requires starting `mitmproxy` with `--set client_certs=$(pwd)/client-certs.pem`. See the included example. The goal is to allow using `kube` with `mitmproxy` like in Inspecting kubectl traffic with mitmproxy, which starts `mitmproxy` with `--set client_certs` for `kubectl` as well.
- `config.proxy_url`