
Kubernetes Client Exception : Failure executing: GET #21

Closed
johnwfinigan opened this issue Nov 2, 2021 · 5 comments
Labels
bug Something isn't working

Comments

@johnwfinigan

Hello,

Thank you for your work on the operator! I am running commit 7122072 and have tried deploying against k8s 1.19.5 and 1.21.5, both deployed as RKE by Rancher.

In both cases, the kustomize deploy of 1-namespaced-hpa succeeds without errors, but the shinyproxy-operator pod ends up in CrashLoopBackOff status permanently. I get the following logs for it. Any tips on what to try next?

12:27:30.701 [main ] DEBUG io.fa.ku.cl.Config - Trying to configure client from Kubernetes config...
12:27:30.708 [main ] DEBUG io.fa.ku.cl.Config - Did not find Kubernetes config at: [/home/shinyproxy-operator/.kube/config]. Ignoring.
12:27:30.708 [main ] DEBUG io.fa.ku.cl.Config - Trying to configure client from service account...
12:27:30.709 [main ] DEBUG io.fa.ku.cl.Config - Found service account host and port: 10.43.0.1:443
12:27:30.709 [main ] DEBUG io.fa.ku.cl.Config - Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
12:27:30.709 [main ] DEBUG io.fa.ku.cl.Config - Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
12:27:30.710 [main ] DEBUG io.fa.ku.cl.Config - Trying to configure client namespace from Kubernetes service account namespace path...
12:27:30.710 [main ] DEBUG io.fa.ku.cl.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
12:27:31.620 [main ] INFO eu.op.sh.Operator - Using NAMESPACED for property SPO_MODE
12:27:31.621 [main ] INFO eu.op.sh.Operator - Using false for property SPO_DISABLE_SECURE_COOKIES
12:27:31.623 [main ] INFO eu.op.sh.Operator - Using 0 for property SPO_PROBE_INITIAL_DELAY
12:27:31.703 [main ] INFO eu.op.sh.Operator - Using 0 for property SPO_PROBE_FAILURE_THRESHOLD
12:27:31.704 [main ] INFO eu.op.sh.Operator - Using 3 for property SPO_PROBE_TIMEOUT
12:27:31.705 [main ] INFO eu.op.sh.Operator - Using 60 for property SPO_STARTUP_PROBE_INITIAL_DELAY
12:27:31.706 [main ] INFO eu.op.sh.Operator - Using -1 for property SPO_PROCESS_MAX_LIFETIME
12:27:31.706 [main ] INFO eu.op.sh.Operator - Using DEBUG for property SPO_LOG_LEVEL
12:27:31.709 [main ] INFO eu.op.sh.Operator - Running in NAMESPACED mode
12:27:31.710 [main ] INFO eu.op.sh.Operator - Using namespace : shinyproxy
12:27:31.916 [main ] INFO eu.op.sh.Operator - Starting background processes of ShinyProxy Operator

Warning: could not check whether ShinyProxy CRD exits.
This is normal when the ServiceAccount of the operator does not have permission to access CRDs (at cluster scope).
If you get an unexpected error after this message, make sure that the CRD exists.

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://10.43.0.1/apis/openanalytics.eu/v1/namespaces/shinyproxy/shinyproxies. Message: 404 page not found
.
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:686)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:625)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:565)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:526)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:509)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:137)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:524)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:513)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:88)
at io.fabric8.kubernetes.client.informers.cache.Reflector.getList(Reflector.java:53)
12:27:32.914 [main ] WARN eu.op.sh.Main - Kubernetes Client Exception : Failure executing: GET at: https://10.43.0.1/apis/openanalytics.eu/v1/namespaces/shinyproxy/shinyproxies. Message: 404 page not found
.
at io.fabric8.kubernetes.client.informers.cache.Reflector.listSyncAndWatch(Reflector.java:77)
at io.fabric8.kubernetes.client.informers.impl.DefaultSharedIndexInformer.run(DefaultSharedIndexInformer.java:146)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.inform(BaseOperation.java:1043)
at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyListener.start(ShinyProxyListener.kt:40)
at eu.openanalytics.shinyproxyoperator.Operator.prepare(Operator.kt:192)
at eu.openanalytics.shinyproxyoperator.MainKt.main(main.kt:38)
at eu.openanalytics.shinyproxyoperator.MainKt$main$3.invoke(main.kt)
at eu.openanalytics.shinyproxyoperator.MainKt$main$3.invoke(main.kt)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsJvmKt$createCoroutineUnintercepted$$inlined$createCoroutineFromSuspendFunction$IntrinsicsKt__IntrinsicsJvmKt$1.invokeSuspend(IntrinsicsJvm.kt:205)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlin.coroutines.ContinuationKt.startCoroutine(Continuation.kt:115)
at kotlin.coroutines.jvm.internal.RunSuspendKt.runSuspend(RunSuspend.kt:19)
at eu.openanalytics.shinyproxyoperator.MainKt.main(main.kt)
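
The permission angle mentioned in the warning can be sanity-checked with kubectl auth can-i impersonation; this is only a sketch and assumes the ServiceAccount and namespace created by the namespaced deployment, shinyproxy-operator-sa in shinyproxy:

kubectl auth can-i list customresourcedefinitions --as=system:serviceaccount:shinyproxy:shinyproxy-operator-sa
kubectl auth can-i list shinyproxies.openanalytics.eu -n shinyproxy --as=system:serviceaccount:shinyproxy:shinyproxy-operator-sa

The first command corresponds to the cluster-scoped CRD check in the warning; the second to the GET that failed. Note that the failing request returned 404 rather than 403, which usually points at the API version not being served rather than at missing permissions.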

@LEDfan
Member

LEDfan commented Nov 2, 2021

Hi

Are you sure that the CRD was successfully created? You can check by running:

kubectl get crds

It should return something like:

shinyproxies.openanalytics.eu   2021-10-29T08:51:39Z

If not, you can try re-running the kustomize command. Usually it takes two attempts to create all resources (see #19)
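
Besides checking that the CRD object exists, it can also help to confirm which versions of the openanalytics.eu API group the cluster actually serves, since the failing GET went to /apis/openanalytics.eu/v1/... The following read-only checks (names taken from the error message, nothing else assumed) show the served versions:

kubectl get --raw /apis/openanalytics.eu
kubectl get crd shinyproxies.openanalytics.eu -o jsonpath='{.spec.versions[*].name}'

If v1 is not among the served versions, the CRD that was applied defines a different version than the one the operator image is requesting.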

@johnwfinigan
Author

Thank you! The CRD does seem to have been created:

[john@rancher 1-namespaced-hpa]$ kubectl get crds
NAME                            CREATED AT
shinyproxies.openanalytics.eu   2021-11-02T12:10:38Z

Here's the output of my latest try, done right before opening this issue. This was on k8s 1.19.5. I had deleted the shinyproxy namespace and deleted the CRD prior to this run, to start fresh:

[john@rancher 1-namespaced-hpa]$ kustomize build . | kubectl apply -f -
namespace/shinyproxy created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/shinyproxies.openanalytics.eu created
serviceaccount/shinyproxy-operator-sa created
serviceaccount/shinyproxy-sa created
serviceaccount/skipper-ingress created
role.rbac.authorization.k8s.io/shinyproxy-operator-role created
role.rbac.authorization.k8s.io/shinyproxy-sa-role created
role.rbac.authorization.k8s.io/skipper-ingress created
rolebinding.rbac.authorization.k8s.io/shinyproxy-operator-rolebinding created
rolebinding.rbac.authorization.k8s.io/shinyproxy-sa-rolebinding created
rolebinding.rbac.authorization.k8s.io/skipper-ingress created
secret/redis-password created
service/redis created
service/skipper-ingress created
deployment.apps/redis created
deployment.apps/shinyproxy-operator created
deployment.apps/skipper-ingress created
horizontalpodautoscaler.autoscaling/skipper-ingress created
ingress.networking.k8s.io/ngingx-to-skipper-ingress created
shinyproxy.openanalytics.eu/shinyproxy created

And here's the output of the first rerun:

[john@rancher 1-namespaced-hpa]$ kustomize build . | kubectl apply -f -
namespace/shinyproxy unchanged
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/shinyproxies.openanalytics.eu unchanged
serviceaccount/shinyproxy-operator-sa unchanged
serviceaccount/shinyproxy-sa unchanged
serviceaccount/skipper-ingress unchanged
role.rbac.authorization.k8s.io/shinyproxy-operator-role unchanged
role.rbac.authorization.k8s.io/shinyproxy-sa-role unchanged
role.rbac.authorization.k8s.io/skipper-ingress unchanged
rolebinding.rbac.authorization.k8s.io/shinyproxy-operator-rolebinding unchanged
rolebinding.rbac.authorization.k8s.io/shinyproxy-sa-rolebinding unchanged
rolebinding.rbac.authorization.k8s.io/skipper-ingress unchanged
secret/redis-password unchanged
service/redis unchanged
service/skipper-ingress unchanged
deployment.apps/redis unchanged
deployment.apps/shinyproxy-operator unchanged
deployment.apps/skipper-ingress configured
horizontalpodautoscaler.autoscaling/skipper-ingress configured
ingress.networking.k8s.io/ngingx-to-skipper-ingress unchanged

I have also tried deleting the shinyproxy-operator pod and letting k8s recreate it; however, I end up with the same error logs.

@LEDfan
Member

LEDfan commented Nov 2, 2021

Are you using our Docker image (i.e. https://hub.docker.com/r/openanalytics/shinyproxy-snapshot/tags?page=1&ordering=last_updated)?
In that case, this Docker image is more up to date than the code/docs in this repo. The reason is that we are currently in the process of releasing an official version of this project. Please use an image with the tag 0.1.0-SNAPSHOT-20210923.115703 for now. Once we release the final version, you can of course use that version again.
The tag must be changed here https://github.com/openanalytics/shinyproxy-operator/blob/develop/docs/deployment/bases/namespaced/operator/deployment.yaml#L22 and here https://github.com/openanalytics/shinyproxy-operator/blob/develop/docs/deployment/bases/clustered/operator/deployment.yaml#L22.
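
As an alternative to editing both deployment.yaml files, the tag can also be pinned from the overlay that is being built (e.g. 1-namespaced-hpa) with a kustomize image override. This is only a sketch and assumes the deployment references the openanalytics/shinyproxy-operator-snapshot image:

# added to the overlay's kustomization.yaml
images:
  - name: openanalytics/shinyproxy-operator-snapshot
    newTag: 0.1.0-SNAPSHOT-20210923.115703

Running kustomize build . | kubectl apply -f - afterwards deploys the pinned tag without modifying the files under bases/.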

@johnwfinigan
Author

Success! I updated to

image: openanalytics/shinyproxy-operator-snapshot:0.1.0-SNAPSHOT-20210923.115703

and now have this:

[john@rancher 1-namespaced-hpa]$ kubectl get pods --namespace=shinyproxy
NAME                                                              READY   STATUS    RESTARTS   AGE
redis-74958d9457-vfk4w                                            1/1     Running   0          79m
shinyproxy-operator-678949457f-swkpv                              1/1     Running   0          3m49s
skipper-ingress-7cd874b6f5-wst7n                                  1/1     Running   0          79m
skipper-ingress-7cd874b6f5-zwgp8                                  1/1     Running   0          78m
sp-shinyproxy-rs-8c0ff461314becb6b7d80a73a01568f417d422c6-2fxzp   1/1     Running   0          3m28s

Thank you so much for the very quick assistance!

@LEDfan
Member

LEDfan commented Nov 8, 2021

With the release of version 1.0.0 this is fixed; you should be able to use the 1.0.0 Docker image together with the documentation in the master branch.

@LEDfan LEDfan closed this as completed Nov 8, 2021
@LEDfan LEDfan added the bug Something isn't working label Nov 8, 2021