Disabled HttpLoadBalancing, unable to create Ingress with glbc:0.9.1 #267
Most errors of
Quotas:
Is gke-tony-test-default-pool-42a243ce-kst4 a node in your current Kubernetes cluster that shows up in kubectl get node? Does it show up in the output of gcloud compute instances list? What zone is it in?
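That cross-check could be sketched like this (a simulation, not from the thread; with real cluster credentials you would populate the two lists from `kubectl get node -o name` and `gcloud compute instances list` directly):

```shell
# Simulated cross-check of Kubernetes node names against GCE instance names.
# In a real cluster, populate these from:
#   kubectl get node -o name
#   gcloud compute instances list
k8s_nodes='gke-tony-test-default-pool-42a243ce-kst4'
gce_instances='gke-tony-test-default-pool-42a243ce-kst4'
for n in $k8s_nodes; do
  if echo "$gce_instances" | grep -qx "$n"; then
    echo "$n: found in GCE"
  else
    echo "$n: MISSING from GCE" >&2
  fi
done
```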
I can post the full log from ingress controller start if that helps.
Does the node in question (kst4) have the right failure-domain label? You should see something like:
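For reference, the zone label in question is the well-known `failure-domain.beta.kubernetes.io/zone` key. A sketch of checking it (the region and zone values below are assumptions, not taken from this cluster):

```shell
# Simulated node labels; on a real cluster you would run:
#   kubectl get node gke-tony-test-default-pool-42a243ce-kst4 --show-labels
labels='failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b'
echo "$labels" | tr ',' '\n' | grep 'failure-domain.beta.kubernetes.io/zone'
```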
Ah, so it does appear to have the right label. Can you SSH into that node and try to retrieve the instance via gcloud (i.e. gcloud compute instances describe gke-tony-test-default-pool-42a243ce-kst4, and gcloud compute backend-services list)? I'm trying to confirm that your nodes have the right oauth_scopes, as shown here: https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md
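The scope the controller needs is GCE read-write (`https://www.googleapis.com/auth/compute`). A sketch of that check against a sample scope list (the list here is an assumption; on a real node you would read the scopes out of the `gcloud compute instances describe` output):

```shell
# Simulated OAuth scope list for the node; a real check would inspect the
# serviceAccounts section of:
#   gcloud compute instances describe gke-tony-test-default-pool-42a243ce-kst4
scopes='https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.read_only'
if echo "$scopes" | tr ' ' '\n' | grep -qx 'https://www.googleapis.com/auth/compute'; then
  echo "compute read-write scope present"
fi
```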
Yep, I ssh-ed into the node:
On
Ah, sorry for the delay. I suspect what's happening is that the node running the controller is only able to view nodes in the same zone. We pipe a bunch of metadata into the controller via a volume mount on the master; when run on a node, this metadata doesn't exist, so the lookup fails cross-region. I believe the faulty lookup is embedded in the cloudprovider library that we vendor from upstream; that's where the "Failed to retrieve" error message comes from. It's trying to get hosts, to compute host tags, to use in the firewall rule (https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1578, https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1163). On the master, we have a config file that supplies the node tags (https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1170). It is mounted into https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/l7-gcp/glbc.manifest#L29 and is basically a simple config file containing:
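As a sketch, that file (typically /etc/gce.conf on the master) looks something like the following. The `node-tags` and `node-instance-prefix` keys are the gcfg keys read by the GCE cloudprovider; the values below are placeholders, not taken from this cluster:

```shell
# Write a sample gce.conf and sanity-check its keys.
# Values are assumptions modeled on the cluster name from the issue.
cat > /tmp/gce.conf <<'EOF'
[global]
node-tags = gke-tony-test-42a243ce-node
node-instance-prefix = gke-tony-test
EOF
grep -c '^node-' /tmp/gce.conf
```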
I can dig up some docs around that config file, but can you please try mounting it when you have time?
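A sketch of what that mount could look like in the controller's pod spec, modeled on the glbc.manifest linked above (the container name and paths here are assumptions; only the image tag is from the issue):

```yaml
# Hypothetical excerpt: expose the node's /etc/gce.conf to the glbc pod.
spec:
  containers:
  - name: l7-lb-controller
    image: gcr.io/google_containers/glbc:0.9.1
    volumeMounts:
    - name: gceconfig
      mountPath: /etc/gce.conf
      readOnly: true
  volumes:
  - name: gceconfig
    hostPath:
      path: /etc/gce.conf
```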
Thanks for the guidance, I follow your logic. However, after adding this file to the three nodes of the cluster, it still shows the same error. This is the file I set (the same on all three), as a result of the grep:
Confirmed that it exists with the right contents inside the
This is the modified
Got some progress when I added
This warning seems interesting, though:
That actually solves the problem. Is there a way to run my fork of the ingress controller without having to set this file on the instance manually (i.e., should this be fixed in Kubernetes/GKE)?
This issue was moved to kubernetes/ingress-gce#29 |
Update
I was able to create the Ingress after this comment: #267 (comment)
Does this mean that, in order to run your own GCE Ingress controller, you always have to set this file? No information about this is provided in the docs.
Original Issue
Then kubectl apply -f rc.yaml this: https://github.com/kubernetes/ingress/blob/master/controllers/gce/rc.yaml
Then I apply the following config:
kubectl describe ingress echo-app-tls:
I can let it wait for >1 hour and it is the same.