
Disabled HttpLoadBalancing, unable to create Ingress with glbc:0.9.1 #29

Closed
bowei opened this issue Oct 11, 2017 · 17 comments

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 1:42

Update

I was able to create the Ingress after this comment: kubernetes/ingress-nginx#267 (comment)

Does this mean that, in order to run your own GCE Ingress, you always have to set this file? The docs provide no information about this.

Original Issue

  1. I disabled the cluster's addon via:
gcloud container clusters update tony-test --update-addons HttpLoadBalancing=DISABLED

  2. Then ran kubectl apply -f rc.yaml using this file: https://github.com/kubernetes/ingress/blob/master/controllers/gce/rc.yaml

  3. Then I applied the following config:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: echo-app
  name: echo-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echo-app
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        name: echo-app
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 1
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-tls
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: echo-app
    servicePort: 88
---
apiVersion: v1
kind: Service
metadata:
  name: echo-app
spec:
  type: NodePort
  selector:
    app: echo-app
  ports:
    - name: http
      port: 88
      protocol: TCP
      targetPort: 8080
  4. This is what I get from kubectl describe ingress echo-app-tls:
Name:                   echo-app-tls
Namespace:              default
Address:
Default backend:        echo-app:88 (10.254.33.15:8080)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *     *       echo-app:88 (10.254.33.15:8080)
Annotations:
  backends:     {"k8s-be-30659--d785be79bbf6d463":"UNHEALTHY"}
  url-map:      k8s-um-default-echo-app-tls--d785be79bbf6d463
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  2m            2m              1       {loadbalancer-controller }                      Normal          ADD     default/echo-app-tls
  1m            <invalid>       16      {loadbalancer-controller }                      Warning         GCE     instance not found
  1m            <invalid>       16      {loadbalancer-controller }                      Normal          Service default backend set to echo-app:30659

I can let it wait for >1 hour and it is the same.

Copied from original issue: kubernetes/ingress-nginx#267

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 1:43

Most errors of kubectl logs l7-lb-controller-b86hs l7-lb-controller are:

...
E0214 01:41:12.350167       1 utils.go:151] Requeuing default/echo-app-tls, err instance not found
I0214 01:41:13.643896       1 firewalls.go:62] Creating global l7 firewall rule k8s-fw-l7--d785be79bbf6d463
E0214 01:41:13.778479       1 gce.go:2896] Failed to retrieve instance: "gke-tony-test-default-pool-42a243ce-kst4"
I0214 01:41:13.778922       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-app-tls", UID:"5132ea49-f256-11e6-a3e9-42010a800021", APIVersion:"extensions", ResourceVersion:"2747082", FieldPath:""}): type: 'Warning' reason: 'GCE' instance not found
I0214 01:41:13.859773       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-app-tls", UID:"5132ea49-f256-11e6-a3e9-42010a800021", APIVersion:"extensions", ResourceVersion:"2747082", FieldPath:""}): type: 'Normal' reason: 'Service' default backend set to echo-app:30659
I0214 01:41:13.918341       1 loadbalancers.go:732] UrlMap for l7 default-echo-app-tls--d785be79bbf6d463 is unchanged
E0214 01:41:14.289968       1 utils.go:151] Requeuing default/echo-app-tls, err instance not found
...

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 1:43

kubectl version:

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Quotas:

quotas:
- limit: 25000.0
  metric: SNAPSHOTS
  usage: 1.0
- limit: 50.0
  metric: NETWORKS
  usage: 2.0
- limit: 500.0
  metric: FIREWALLS
  usage: 49.0
- limit: 10000.0
  metric: IMAGES
  usage: 0.0
- limit: 700.0
  metric: STATIC_ADDRESSES
  usage: 1.0
- limit: 300.0
  metric: ROUTES
  usage: 24.0
- limit: 375.0
  metric: FORWARDING_RULES
  usage: 21.0
- limit: 1250.0
  metric: TARGET_POOLS
  usage: 9.0
- limit: 1250.0
  metric: HEALTH_CHECKS
  usage: 15.0
- limit: 2300.0
  metric: IN_USE_ADDRESSES
  usage: 20.0
- limit: 1250.0
  metric: TARGET_INSTANCES
  usage: 0.0
- limit: 250.0
  metric: TARGET_HTTP_PROXIES
  usage: 5.0
- limit: 250.0
  metric: URL_MAPS
  usage: 6.0
- limit: 75.0
  metric: BACKEND_SERVICES
  usage: 10.0
- limit: 2500.0
  metric: INSTANCE_TEMPLATES
  usage: 13.0
- limit: 125.0
  metric: TARGET_VPN_GATEWAYS
  usage: 2.0
- limit: 250.0
  metric: VPN_TUNNELS
  usage: 2.0
- limit: 20.0
  metric: ROUTERS
  usage: 0.0
- limit: 250.0
  metric: TARGET_SSL_PROXIES
  usage: 0.0
- limit: 250.0
  metric: TARGET_HTTPS_PROXIES
  usage: 0.0
- limit: 250.0
  metric: SSL_CERTIFICATES
  usage: 4.0
- limit: 275.0
  metric: SUBNETWORKS
  usage: 2.0

bowei commented Oct 11, 2017

From @bprashanth on February 14, 2017 1:45

Is gke-tony-test-default-pool-42a243ce-kst4 a node in your current Kubernetes cluster that shows up in kubectl get node? Does it show up in the output of gcloud compute instances list? What zone is it in?

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 1:47

kubectl get node :

NAME                                       STATUS    AGE
gke-tony-test-default-pool-42a243ce-kst4   Ready     24d
gke-tony-test-default-pool-995fe96e-slt7   Ready     24d
gke-tony-test-default-pool-cfc042b6-6w44   Ready     24d

gcloud compute instances list:

NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP      STATUS
gke-tony-test-default-pool-cfc042b6-6w44      us-central1-f  n1-standard-1               10.224.190.2   XXX  RUNNING
gke-tony-test-default-pool-42a243ce-kst4      us-central1-c  n1-standard-1               10.224.190.4   XXX   RUNNING
gke-tony-test-default-pool-995fe96e-slt7      us-central1-b  n1-standard-1               10.224.190.6   XXX   RUNNING

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 1:49

I can post the full log from ingress controller start if that helps.

bowei commented Oct 11, 2017

From @bprashanth on February 14, 2017 1:51

Does the node in question (kst4) have the right failure-domain label? You should see something like failure-domain.beta.kubernetes.io/zone=us-central1-b in the output of kubectl get node --show-labels.

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 2:25

kubectl get node --show-labels:

NAME                                       STATUS    AGE       LABELS
gke-tony-test-default-pool-42a243ce-kst4   Ready     24d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-c,kubernetes.io/hostname=gke-tony-test-default-pool-42a243ce-kst4
gke-tony-test-default-pool-995fe96e-slt7   Ready     24d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=gke-tony-test-default-pool-995fe96e-slt7
gke-tony-test-default-pool-cfc042b6-6w44   Ready     24d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=gke-tony-test-default-pool-cfc042b6-6w44

bowei commented Oct 11, 2017

From @bprashanth on February 14, 2017 18:44

Ah, so it does appear to have the right label. Can you ssh into that node and try to retrieve the instance via gcloud? (i.e gcloud compute instances describe gke-tony-test-default-pool-42a243ce-kst4, and gcloud compute backend-services list. I'm trying to confirm that your nodes have the right oauth_scopes as shown here: https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md)

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 22:11

Yep, I ssh-ed into the node:

root@gke-tony-test-default-pool-42a243ce-kst4:/# toolbox bash  
root@gke-tony-test-default-pool-42a243ce-kst4:/# export PATH="/google-cloud-sdk/bin:$PATH"
root@gke-tony-test-default-pool-42a243ce-kst4:/# gcloud compute ssh tony@gke-tony-test-default-pool-42a243ce-kst4 --zone us-central1-c
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
This tool needs to create the directory [/root/.ssh] before being able
 to generate SSH keys.

Do you want to continue (Y/n)?  Y

Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/google_compute_engine.
Your public key has been saved in /root/.ssh/google_compute_engine.pub.
The key fingerprint is:
c3:5e:a7:68:ed:73:b7:cc:ad:8b:22:2e:d1:f2:dd:11 root@gke-tony-test-default-pool-42a243ce-kst4
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
|       .    E    |
|       .S . ..   |
|      o..= o.    |
|       ++.o. .   |
|      ..o.+ o+.. |
|       o...+..*+.|
+-----------------+
Updating project ssh metadata...\Updated [https://www.googleapis.com/compute/v1/projects/PROJECT].                                                                                                                                       
Updating project ssh metadata...done.                                                                                                                                                                                                         
Warning: Permanently added 'compute.8105398880955260933' (RSA) to the list of known hosts.

Welcome to Kubernetes v1.5.2!

You can find documentation for Kubernetes at:
  http://docs.kubernetes.io/

The source for this release can be found at:
  /home/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
  https://storage.googleapis.com/kubernetes-release/release/v1.5.2/kubernetes-src.tar.gz

It is based on the Kubernetes source at:
  https://github.com/kubernetes/kubernetes/tree/v1.5.2

For Kubernetes copyright and licensing information, see:
  /home/kubernetes/LICENSES

tony@gke-tony-test-default-pool-42a243ce-kst4 ~ $ exit

...
...
...

root@gke-tony-test-default-pool-42a243ce-kst4:/# gcloud compute instances describe gke-tony-test-default-pool-42a243ce-kst4
Did you mean zone [us-central1-c] for instances: 
[['[gke-tony-test-default-pool-42a243ce-kst4]']]?

Do you want to continue (Y/n)?  Y

canIpForward: true
cpuPlatform: Intel Haswell
creationTimestamp: '2017-01-20T11:03:06.432-08:00'
disks:
- autoDelete: true
  boot: true
  deviceName: persistent-disk-0
  index: 0
  interface: SCSI
  kind: compute#attachedDisk
  licenses:
  - https://www.googleapis.com/compute/v1/projects/google-containers/global/licenses/gci-public
  - https://www.googleapis.com/compute/v1/projects/gke-node-images/global/licenses/gke-node
  mode: READ_WRITE
  source: https://www.googleapis.com/compute/v1/projects/PROJECT/zones/us-central1-c/disks/gke-tony-test-default-pool-42a243ce-kst4
  type: PERSISTENT

...
...
...

root@gke-tony-test-default-pool-42a243ce-kst4:/# gcloud compute backend-services list
NAME                            BACKENDS                                                                                                                                                           PROTOCOL
k8s-be-30369--08f690ea8928d99b  us-central1-b/instanceGroups/k8s-ig--08f690ea8928d99b,us-central1-c/instanceGroups/k8s-ig--08f690ea8928d99b,us-central1-f/instanceGroups/k8s-ig--08f690ea8928d99b  HTTP
k8s-be-30659--d785be79bbf6d463  us-central1-b/instanceGroups/k8s-ig--d785be79bbf6d463,us-central1-c/instanceGroups/k8s-ig--d785be79bbf6d463,us-central1-f/instanceGroups/k8s-ig--d785be79bbf6d463  HTTP
k8s-be-30713--fb2867414c05b1af  us-east1-b/instanceGroups/k8s-ig--fb2867414c05b1af,us-east1-c/instanceGroups/k8s-ig--fb2867414c05b1af,us-east1-d/instanceGroups/k8s-ig--fb2867414c05b1af           HTTP
k8s-be-31026--ddd906426cac669e  us-central1-f/instanceGroups/k8s-ig--ddd906426cac669e                                                                                                              HTTP
k8s-be-31335--b23e2922d4b0dbbd  us-east1-b/instanceGroups/k8s-ig--b23e2922d4b0dbbd,us-east1-c/instanceGroups/k8s-ig--b23e2922d4b0dbbd,us-east1-d/instanceGroups/k8s-ig--b23e2922d4b0dbbd           HTTP
k8s-be-31440--fb2867414c05b1af  us-east1-b/instanceGroups/k8s-ig--fb2867414c05b1af,us-east1-c/instanceGroups/k8s-ig--fb2867414c05b1af,us-east1-d/instanceGroups/k8s-ig--fb2867414c05b1af           HTTP
k8s-be-31974--d785be79bbf6d463  us-central1-b/instanceGroups/k8s-ig--d785be79bbf6d463,us-central1-c/instanceGroups/k8s-ig--d785be79bbf6d463,us-central1-f/instanceGroups/k8s-ig--d785be79bbf6d463  HTTP
k8s-be-32000--b23e2922d4b0dbbd  us-east1-b/instanceGroups/k8s-ig--b23e2922d4b0dbbd,us-east1-c/instanceGroups/k8s-ig--b23e2922d4b0dbbd,us-east1-d/instanceGroups/k8s-ig--b23e2922d4b0dbbd           HTTP
k8s-be-32674--f21c736b0d1773c9  us-east1-d/instanceGroups/k8s-ig--f21c736b0d1773c9                                                                                                                 HTTP

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 22:12

This is a node in a GKE cluster with the following scopes (1 pool only):
[image: screenshot of the GKE node pool's access scopes]

bowei commented Oct 11, 2017

From @tonglil on February 14, 2017 22:24

207383@Tony-Li-8834: ~/Code/dv-gcp-guide/test-tls (test-tls u= origin/test-tls)
kubectl describe po/l7-lb-controller-b86hs
Name:		l7-lb-controller-b86hs
Namespace:	default
Node:		gke-tony-test-default-pool-995fe96e-slt7/10.224.190.6
Start Time:	Mon, 13 Feb 2017 17:37:53 -0800
...
Containers:
  ...
  l7-lb-controller:
    Container ID:	docker://48c23a36f9c2e43429a32f67d77658437b9d4c6057b2155b34b1f2053e69ee29
    Image:		gcr.io/google_containers/glbc:0.9.1
    ...

On gke-tony-test-default-pool-995fe96e-slt7, where the GLBC is running, I can run:
gcloud compute instances describe gke-tony-test-default-pool-995fe96e-slt7 --zone us-central1-b,
gcloud compute instances describe gke-tony-test-default-pool-cfc042b6-6w44 --zone us-central1-f, and
gcloud compute instances describe gke-tony-test-default-pool-42a243ce-kst4 --zone us-central1-c.

bowei commented Oct 11, 2017

From @bprashanth on February 16, 2017 18:42

Ah, sorry for the delay. I suspect what's happening is that the node running the controller is only able to view nodes in the same zone. We pipe a bunch of metadata into the controller via a volume mount on the master; when run on a node, this metadata doesn't exist, and so the cross-region lookup fails.

I believe the faulty lookup is embedded in the cloudprovider library that we vendor from upstream; that's where the "Failed to retrieve" error message comes from. It's trying to get hosts, to compute host tags, to use in the firewall rule (https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1578, https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1163).

On the master, we have a config file that supplies the node tags: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L1170

This is mounted into https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/l7-gcp/glbc.manifest#L29 and is basically a simple config file containing:

[global]
node-tags = <nodetags, eg: gcloud compute instances describe e2e-test-beeps-minion-group-dn5m | grep -i tag>
node-instance-prefix = <kubernetes instance prefix, eg: gke-tony-test>

I can dig up some docs around that config file, but can you please try mounting it when you have time?

bowei commented Oct 11, 2017

From @tonglil on February 21, 2017 17:36

Thanks for the guidance, I follow your logic. However, after adding this file to the 3 nodes of the cluster, it still shows the same error.

This is the file I set (same for all three) as a result of the grep:

gke-tony-test-default-pool-995fe96e-slt7 tony # cat /etc/gce.conf 
[global]
node-tags = gke-tony-test-80f4335c-node goog-gke-node
node-instance-prefix = gke-tony-test

Confirmed that it exists with the right contents inside the l7-lb-controller container in the pod.

bowei commented Oct 11, 2017

From @tonglil on February 21, 2017 17:41

This is the modified rc.yaml I used to mount the conf file:

---
apiVersion: v1
kind: ReplicationController
metadata:
  name: l7-lb-controller
  labels:
    k8s-app: glbc
    version: v0.9.0
spec:
  # There should never be more than 1 controller alive simultaneously.
  replicas: 1
  selector:
    k8s-app: glbc
    version: v0.9.0
  template:
    metadata:
      labels:
        k8s-app: glbc
        version: v0.9.0
        name: glbc
    spec:
      terminationGracePeriodSeconds: 600
      containers:
        - name: default-http-backend
          ...
        - image: gcr.io/google_containers/glbc:0.9.1
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          name: l7-lb-controller
          volumeMounts:
          - mountPath: /etc/gce.conf
            name: cloudconfig
            readOnly: true
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 50Mi
          args:
            - --default-backend-service=default/default-http-backend
            - --sync-period=300s
            - --verbose
            - --config-file-path=/etc/gce.conf
            - --running-in-cluster=false
            - --use-real-cloud=true
      volumes:
      - hostPath:
          path: /etc/gce.conf
        name: cloudconfig

bowei commented Oct 11, 2017

From @tonglil on February 21, 2017 19:27

Got some progress when I added multizone = true (from this SO answer) to the /etc/gce.conf file and changed the node-tags field to be only 1 tag (instead of 2); however, it is still unable to get an IP address.
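Putting those two changes together, the working /etc/gce.conf presumably ended up looking like this (a reconstruction from the snippets in this thread; the tag and prefix values are specific to this cluster):

```ini
[global]
node-tags = gke-tony-test-80f4335c-node
node-instance-prefix = gke-tony-test
multizone = true
```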

I0221 19:15:17.182900       1 pools.go:90] Replenishing pool
I0221 19:15:24.282815       1 utils.go:149] Syncing default/echo-app-tls
I0221 19:15:24.283039       1 controller.go:291] Syncing default/echo-app-tls
I0221 19:15:24.283546       1 backends.go:309] Sync: backends [30659]
I0221 19:15:24.283983       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-app-tls", UID:"187ba481-f86a-11e6-a3e9-42010a800021", APIVersion:"extensions", ResourceVersion:"3612852", FieldPath:""}): type: 'Normal' reason: 'ADD' default/echo-app-tls
I0221 19:15:24.345713       1 instances.go:76] Creating instance group k8s-ig--d785be79bbf6d463 in zone us-central1-b
I0221 19:15:27.848532       1 gce.go:2084] Adding port 30659 to instance group k8s-ig--d785be79bbf6d463 with 0 ports
I0221 19:15:28.194825       1 instances.go:76] Creating instance group k8s-ig--d785be79bbf6d463 in zone us-central1-c
I0221 19:15:31.909567       1 gce.go:2084] Adding port 30659 to instance group k8s-ig--d785be79bbf6d463 with 0 ports
I0221 19:15:32.383554       1 instances.go:76] Creating instance group k8s-ig--d785be79bbf6d463 in zone us-central1-f
I0221 19:15:35.936316       1 gce.go:2084] Adding port 30659 to instance group k8s-ig--d785be79bbf6d463 with 0 ports
I0221 19:15:36.266141       1 backends.go:214] Creating backend for 3 instance groups, port 30659 named port &{port30659 30659 []}
I0221 19:15:36.266842       1 utils.go:508] Found custom health check for Service echo-app nodeport 30659: /healthz
W0221 19:15:36.266932       1 utils.go:513] Failed to list ingresses for service echo-app
I0221 19:15:36.353942       1 healthchecks.go:62] Creating health check k8s-be-30659--d785be79bbf6d463
I0221 19:15:43.988908       1 instances.go:202] Syncing nodes [gke-tony-test-default-pool-42a243ce-kst4 gke-tony-test-default-pool-995fe96e-slt7 gke-tony-test-default-pool-cfc042b6-6w44]
I0221 19:15:44.282628       1 instances.go:240] Adding nodes to IG: []
I0221 19:15:44.282920       1 instances.go:174] Adding nodes [gke-tony-test-default-pool-42a243ce-kst4] to k8s-ig--d785be79bbf6d463 in zone us-central1-c
I0221 19:15:45.065424       1 instances.go:174] Adding nodes [gke-tony-test-default-pool-995fe96e-slt7] to k8s-ig--d785be79bbf6d463 in zone us-central1-b
I0221 19:15:45.342261       1 instances.go:174] Adding nodes [gke-tony-test-default-pool-cfc042b6-6w44] to k8s-ig--d785be79bbf6d463 in zone us-central1-f
I0221 19:15:45.682675       1 loadbalancers.go:165] Creating loadbalancers [0xc4208d5580]
I0221 19:15:45.752331       1 gce.go:2084] Adding port 31974 to instance group k8s-ig--d785be79bbf6d463 with 1 ports
I0221 19:15:46.099890       1 gce.go:2084] Adding port 31974 to instance group k8s-ig--d785be79bbf6d463 with 1 ports
I0221 19:15:46.743095       1 gce.go:2084] Adding port 31974 to instance group k8s-ig--d785be79bbf6d463 with 1 ports
I0221 19:15:47.040736       1 backends.go:214] Creating backend for 3 instance groups, port 31974 named port &{port31974 31974 []}
I0221 19:15:47.041277       1 utils.go:458] Pod l7-lb-controller-4zx9d matching service selectors map[k8s-app:glbc] (targetport {Type:0 IntVal:8080 StrVal:}): lacks a matching HTTP probe for use in health checks.
I0221 19:15:47.041387       1 utils.go:497] No pod in service default-http-backend with node port 31974 has declared a matching readiness probe for health checks.
I0221 19:15:47.142741       1 healthchecks.go:62] Creating health check k8s-be-31974--d785be79bbf6d463
I0221 19:15:47.301497       1 pools.go:90] Replenishing pool
I0221 19:15:55.518209       1 loadbalancers.go:121] Creating l7 default-echo-app-tls--d785be79bbf6d463
I0221 19:15:55.682528       1 loadbalancers.go:303] Creating url map k8s-um-default-echo-app-tls--d785be79bbf6d463 for backend k8s-be-31974--d785be79bbf6d463
I0221 19:15:59.361617       1 firewalls.go:62] Creating global l7 firewall rule k8s-fw-l7--d785be79bbf6d463
I0221 19:16:17.519815       1 pools.go:90] Replenishing pool
I0221 19:16:24.578878       1 loadbalancers.go:685] Updating urlmap for l7 default-echo-app-tls--d785be79bbf6d463
I0221 19:16:24.580040       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-app-tls", UID:"187ba481-f86a-11e6-a3e9-42010a800021", APIVersion:"extensions", ResourceVersion:"3612852", FieldPath:""}): type: 'Normal' reason: 'Service' default backend set to echo-app:30659
I0221 19:16:24.657742       1 loadbalancers.go:735] Updating url map: 
I0221 19:16:28.612349       1 controller.go:406] Updating annotations of default/echo-app-tls
I0221 19:16:28.619336       1 controller.go:133] Ingress echo-app-tls changed, syncing
I0221 19:16:28.620101       1 controller.go:327] Finished syncing default/echo-app-tls
I0221 19:16:28.620194       1 utils.go:149] Syncing default/echo-app-tls
I0221 19:16:28.620433       1 controller.go:291] Syncing default/echo-app-tls
I0221 19:16:28.620572       1 backends.go:309] Sync: backends [30659]
I0221 19:16:29.138563       1 instances.go:202] Syncing nodes [gke-tony-test-default-pool-42a243ce-kst4 gke-tony-test-default-pool-995fe96e-slt7 gke-tony-test-default-pool-cfc042b6-6w44]
I0221 19:16:29.435647       1 loadbalancers.go:165] Creating loadbalancers [0xc4201eaa40]
I0221 19:16:30.057753       1 loadbalancers.go:298] Url map k8s-um-default-echo-app-tls--d785be79bbf6d463 already exists
I0221 19:16:30.272218       1 loadbalancers.go:685] Updating urlmap for l7 default-echo-app-tls--d785be79bbf6d463
I0221 19:16:30.272880       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"echo-app-tls", UID:"187ba481-f86a-11e6-a3e9-42010a800021", APIVersion:"extensions", ResourceVersion:"3612938", FieldPath:""}): type: 'Normal' reason: 'Service' default backend set to echo-app:30659
I0221 19:16:30.349903       1 loadbalancers.go:732] UrlMap for l7 default-echo-app-tls--d785be79bbf6d463 is unchanged
I0221 19:16:30.782699       1 controller.go:327] Finished syncing default/echo-app-tls
I0221 19:16:47.693625       1 pools.go:90] Replenishing pool
I0221 19:16:56.093747       1 reflector.go:392] k8s.io/ingress/controllers/gce/controller/controller.go:237: Watch close - *api.Service total 0 items received
I0221 19:17:17.885015       1 pools.go:90] Replenishing pool
I0221 19:17:48.083352       1 pools.go:90] Replenishing pool
I0221 19:18:04.092681       1 reflector.go:392] k8s.io/ingress/controllers/gce/controller/controller.go:235: Watch close - *extensions.Ingress total 3 items received
I0221 19:18:11.098578       1 reflector.go:392] k8s.io/ingress/controllers/gce/controller/controller.go:238: Watch close - *api.Pod total 0 items received
I0221 19:18:18.283407       1 pools.go:90] Replenishing pool
I0221 19:18:48.482433       1 pools.go:90] Replenishing pool
I0221 19:19:18.620162       1 pools.go:90] Replenishing pool
I0221 19:19:45.084396       1 reflector.go:273] k8s.io/ingress/controllers/gce/controller/controller.go:235: forcing resync

This warning seems interesting though: W0221 19:15:36.266932 1 utils.go:513] Failed to list ingresses for service echo-app.

bowei commented Oct 11, 2017

From @tonglil on February 21, 2017 23:24

That actually solves the instance not found error, thank you.

Is there a way to run my fork of the ingress controller without having to set this file on the instance manually (i.e., should this be fixed in Kubernetes/GKE)?

tonglil commented Oct 17, 2017

This can be closed. Thanks.

@bowei bowei closed this as completed Oct 17, 2017