Knative service status to "Unknown" and "Uninitialized" #15753
Hi @hyde404,
The ExternalName should point to Istio; it is used for different purposes, e.g. traffic splitting. I haven't checked all the details yet, but is that ExternalName being exposed on the AWS LB directly somehow (due to your ingresses), or is Istio not picking up changes? Could you try a more standard approach, as in the Knative docs, as a smoke test?
This is the reason you see the load balancer not being ready. I am wondering why HTTPS is used; what Istio mode do you use, mTLS? Note: unfortunately I don't have an AWS cluster to test on, so I am guessing.
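For such a smoke test, a minimal Service along the lines of the Knative "hello world" docs could look like this (the image and names below are illustrative, not taken from this issue):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
```

After deploying, `kubectl get ksvc hello` should report `READY: True` once the route and ingress reconcile.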
Hi @skonto, thanks for your reply! By removing this `proxy.istio.io/config` annotation:

```yaml
proxy.istio.io/config: |
  {
    "gatewayTopology": {
      "proxyProtocol": {}
    }
  }
```

It seems like a probe, maybe from net-istio, has issues.
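For context, this annotation is typically applied to the ingress gateway pods; a sketch, assuming the default istio-ingressgateway Deployment in istio-system:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  template:
    metadata:
      annotations:
        # Tells the gateway's Envoy to expect PROXY protocol from the load balancer
        proxy.istio.io/config: |
          {
            "gatewayTopology": {
              "proxyProtocol": {}
            }
          }
```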
Having the exact same issue when testing.
I finally managed to make it work using these annotations (you can also add this EnvoyFilter as well). So I totally got rid of `"proxyProtocol": {}`.
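The exact annotations and EnvoyFilter are not reproduced above. For what it's worth, a common way to accept PROXY protocol on the gateway with an EnvoyFilter looks roughly like this (a hedged sketch, not necessarily the one used here):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proxy-protocol
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: LISTENER
      patch:
        operation: MERGE
        value:
          listener_filters:
            # Strips the PROXY protocol header before Envoy's own filters run
            - name: envoy.filters.listener.proxy_protocol
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol
```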
Observed the same behavior with a Knative service in a Kubeflow 1.9.1 deployment (on-prem). Restarting the istio-system/istio-ingressgateway deployment makes the Knative service accessible, and the Knative service's ExternalName changes to "knative-local-gateway.istio-system.svc.cluster.local". Also observed in the istio-ingressgateway log: `"GET /healthz HTTP/1.1" 404 NR route_not_found - "-" 0 0 0 - "192.168.0.238" "Knative-Ingress-Probe"` until the pod is restarted. Easily reproducible on my cluster by running the KServe sklearn-iris InferenceService example.
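For reference, the KServe sklearn-iris example mentioned above is, per the KServe docs, roughly:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```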
What version of Knative?
- Knative: 1.17.0
- net-istio: 1.17.0
- Istio: 1.24.2
Expected Behavior
The Knative service status should be "Ready" instead of hanging in the "Unknown" state.
Actual Behavior
When I deploy a Knative service in an EKS cluster, it remains in "Unknown" status until the Istio ingress controllers are restarted, even though the application can be reached.
It then switches to "Ready", the next application deployed is in "Unknown" status, and so on.
The application is exposed with a load balancer and is reachable.
Here are the details of the knative service status:
So, for load balancing I use an AWS NLB, and everything seems to be OK; all the targets (15021, 443, 80) are healthy.
I also noticed a couple of logs, probably related to the issue.
and
I'd also like to point out that I looked at the route and the ingress, the outputs of which are as follows:
route
ingress
strange findings
Logs from istiod
Services before ingress-controller restart
Services after ingress-controller restart
The external IP of the ExternalName service turned from `test-eb7d5189.serverless-dev.xyz.crashcourse.com` to `knative-local-gateway.istio-system.svc.cluster.local`.

Some tests
From a "Ready" service
From the "Unknown" service
Steps to Reproduce the Problem
Ingress controllers
My setup has some particularities. I use 3 different ingress controllers configured with the helm values as below:
In case you wonder, I use the proxy config for matching source IPs, which I then use in an AuthorizationPolicy.
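Since the actual Helm values are not shown, here is a sketch of what such a gateway configuration might look like (the chart keys and AWS annotations below are assumptions, not the reporter's real values):

```yaml
# Sketch: values for an Istio ingress gateway chart behind an AWS NLB
podAnnotations:
  proxy.istio.io/config: |
    {
      "gatewayTopology": {
        "proxyProtocol": {}
      }
    }
service:
  annotations:
    # Ask the NLB to send PROXY protocol so source IPs are preserved
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```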
Knative
I deploy Knative using the knative-operator as follows:
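The manifest itself is not shown; a minimal KnativeServing resource for the operator, with Istio as the ingress layer, would be along these lines (a sketch, not the reporter's actual config):

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    istio:
      enabled: true
  config:
    network:
      ingress-class: istio.ingress.networking.knative.dev
```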
The `domain-template` is linked to an operator we have, so never mind that.

Knative Service
Same for the annotations/labels; they are linked to the operator.