Pod exits with error 139 In-cluster #869
Comments
Hey there. I think the configuration you have supplied is insufficient to run the event_watcher example: kube = { path = "../kube", version = "^0.70.0", default-features = false, features = ["admission"] }. You have turned off default features, which is fine if you are building an admission controller without the need for a kube client or runtime, but not if you are using the watch API with watcher (which uses both of those features). Try adding those features back.
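A minimal sketch of what that suggestion might look like in Cargo.toml, assuming the client and runtime feature names from kube 0.70.x (verify against the docs for the version you actually build):

```toml
# Sketch, not verified against this repo: re-enable the features that
# watcher depends on while keeping default-features off.
kube = { path = "../kube", version = "^0.70.0", default-features = false, features = ["admission", "client", "runtime"] }
```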
That should just cause the build to fail, though, not a segfault or other crash once the pod is up and running...
@timdesi Can you post the Dockerfile?
Hi, the Dockerfile is like below:
I also tried to debug/trace the code and found that the application passes the from_cluster_env() function at kube_client/src/config/mod.rs, line 178, but after that the pod exits with a segfault and I could not trace any further.
In case it helps, these are the ENV variables injected into pods, extracted from another pod in the same namespace.
This looks very similar to the problem encountered in #331, i.e. alpine struggling with openssl. See #331 (comment). The workaround there might help, or try a different build environment. Not sure how many of us use alpine these days.
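One workaround in that spirit, sketched here as an assumption rather than a verified fix for this issue, is to opt out of openssl entirely via kube's rustls-tls feature so the binary never links against alpine's OpenSSL (feature names are from kube 0.70.x; the v1_23 version feature for k8s-openapi is chosen here to match the reporter's server version and must match your cluster):

```toml
# Sketch: swap openssl for rustls to sidestep alpine/openssl linkage issues.
kube = { version = "^0.70.0", default-features = false, features = ["client", "runtime", "rustls-tls"] }
k8s-openapi = { version = "0.14.0", default-features = false, features = ["v1_23"] }
```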
@timdesi Did you get to the bottom of this?
Unfortunately not as expected. The expectation was to use the most minimal container possible, i.e. scratch, busybox, or alpine. I found the same workarounds with other build environments as proposed in the previous comment. Thx.
Thanks for getting back. Yeah, compiling from alpine is problematic. You should still be able to use a minimal container for your production environment (scratch, busybox, alpine, distroless) if you use a cross-compiling builder as part of your CI process, but that builder image will not be as small as the output production image.
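The cross-compiling builder idea above can be sketched as a multi-stage Dockerfile. Everything here is an illustrative assumption, not a tested build: the rust:1.60 base image, the event_watcher binary name, and the musl target (static musl linking is straightforward with rustls-tls, but needs extra work if openssl is in the dependency tree):

```dockerfile
# Stage 1: full-size builder image that cross-compiles a static musl binary.
FROM rust:1.60 AS builder
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /app
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Stage 2: minimal production image containing only the binary.
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/event_watcher /event_watcher
ENTRYPOINT ["/event_watcher"]
```

The builder stage stays large, but only the scratch-based final stage ships to the cluster.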
Current and expected behavior
I am running the event_watcher example from the latest master source code.
The out-of-cluster example runs as expected, but the in-cluster pod exits with error code 139.
I have checked that all ENV variables are available inside the pod. The pod is running with the default service account, which has full admin permissions on my cluster. I also checked with go-client to be sure that everything is fine in the cluster.
Possible solution
No response
Additional context
Out of cluster
In-cluster
no logs at all ...
Environment
k8s with minikube
❯ kubectl version --short
Client Version: v1.22.4
Server Version: v1.23.3
Dockerfile OS : busybox or busybox42/alpine-pod
Configuration and features
kube = { path = "../kube", version = "^0.70.0", default-features = false, features = ["admission"] }
kube-derive = { path = "../kube-derive", version = "^0.70.0", default-features = false } # only needed to opt out of schema
k8s-openapi = { version = "0.14.0", default-features = false }
❯ cargo tree | grep kube
kube v0.70.0 (/prgs/rust/kube-rs/kube)
├── kube-client v0.70.0 (/prgs/rust/kube-rs/kube-client)
│ ├── kube-core v0.70.0 (/prgs/rust/kube-rs/kube-core)
└── kube-core v0.70.0 (/prgs/rust/kube-rs/kube-core) (*)
Affected crates
No response
Would you like to work on fixing this bug?
No response