
Issue with SNI and latest kong ingress #510

Closed
dcherniv opened this issue Jan 18, 2020 · 30 comments · Fixed by #523

@dcherniv

Summary

Kong appears to have issues creating SNIs. Our Ingresses share the same host; we do path-based routing.

Kong Ingress controller version
0.7.0 with postgres DB

Kong or Kong Enterprise version
1.4.3

Kubernetes version

v1.14.9-eks-c0eccc

Environment

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): AWS linux
  • Kernel (e.g. uname -a): n/a
  • Install tools: helm
  • Others:

What happened

When the error below occurs, Kong starts falling back to the default localhost certificate, which makes TLS requests fail. After 5-10 minutes it sometimes recovers, only to run into the same problem again.
The timing appears random, though I suspect it happens when the configuration is updated.

W0118 04:23:50.121945       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0118 04:23:50.255044       1 controller.go:119] unexpected failure updating Kong configuration: 
1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: a.domain.com already associated with existing certificate '11f9d5bf-2343-11ea-a792-0ed7c98255e7')","name":"schema violation","fields":{"snis":"a.domain.com already associated with existing certificate '11f9d5bf-2343-11ea-a792-0ed7c98255e7'"},"code":2}
W0118 04:23:50.255072       1 queue.go:112] requeuing dev/dev-ops-tools-backend, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: a.domain.com already associated with existing certificate '11f9d5bf-2343-11ea-a792-0ed7c98255e7')","name":"schema violation","fields":{"snis":"a.domain.com already associated with existing certificate '11f9d5bf-2343-11ea-a792-0ed7c98255e7'"},"code":2}
W0118 04:23:53.454872       1 parser.go:1043] service demo/demo-c-service does not have any active endpoints
W0118 04:23:53.454942       1 parser.go:1043] service dev/dev-b-service does not have any active endpoints
W0118 04:23:53.455034       1 parser.go:1043] service dev/dev-a-service does not have any active endpoints
W0118 04:23:53.455209       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0118 04:23:53.583069       1 kong.go:66] successfully synced configuration to Kong

Expected behavior

Steps To Reproduce

  1. Create multiple Ingresses pointing to the same TLS secret, with the same hostname
  2. Use postgres database
  3. Scale down kong to 1 pod.
  4. Watch logs
@dcherniv
Author

Forgot to mention: the 0.6.0 controller didn't have this problem. Is it possible a migration screwed it up somehow? Is this even database-related?

@dcherniv
Author

dcherniv commented Jan 18, 2020

Just tested this on our GKE cluster, which is pretty much pristine. To summarize: two Kong ingress controllers installed into different namespaces, using the latest Helm chart.
Both are set to use separate ingress classes via the value in the Helm chart:

ingressController:
  ingressClass: kong-(internal|external)

External is set to not install CRDs via:

ingressController:
  ingressClass: kong-external
  # Don't install CRDs because kong-internal controller supposedly already installed them.
  installCRDs: false

Two ingresses:

dcherniv@debbie:~/$ kubectl get ing --all-namespaces | grep Host
kong-external   kong-external-kong-ingress-kong-admin   host-dev.example.com                10.133.0.75      80, 443   20m
kong-internal   kong-internal-kong-ingress-kong-admin   host-dev.example.com                10.133.0.75      80, 443   10h
dcherniv@debbie:~/$ 

Both Ingresses are a pretty standard loopback admin configuration to expose the Admin API through Kong. Both Ingresses use the same kong-internal class:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: kong-internal
    plugins.konghq.com: kong-external-kong-acl,kong-external-kong-auth,kong-external-kong-prometheus
  creationTimestamp: "2020-01-18T05:37:55Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kong-external-kong-ingress
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "1.4"
    helm.sh/chart: kong-1.0.0
  name: kong-external-kong-ingress-kong-admin
  namespace: kong-external
  resourceVersion: "72413049"
  selfLink: /apis/extensions/v1beta1/namespaces/kong-external/ingresses/kong-external-kong-ingress-kong-admin
  uid: adbcb438-39b4-11ea-b20f-42010a8c4006
spec:
  rules:
  - host: host-dev.example.com
    http:
      paths:
      - backend:
          serviceName: kong-external-kong-ingress-kong-admin
          servicePort: 8444
        path: /admin-api/external
  tls:
  - hosts:
    - host-dev.example.com
    secretName: wild.example.com-2019
status:
  loadBalancer:
    ingress:
    - ip: 10.133.0.75
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: kong-internal
    plugins.konghq.com: kong-internal-kong-acl,kong-internal-kong-auth,kong-internal-kong-prometheus
  creationTimestamp: "2020-01-17T19:16:28Z"
  generation: 5
  labels:
    app.kubernetes.io/instance: kong-internal-kong-ingress
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "1.4"
    helm.sh/chart: kong-1.0.0
  name: kong-internal-kong-ingress-kong-admin
  namespace: kong-internal
  resourceVersion: "72291406"
  selfLink: /apis/extensions/v1beta1/namespaces/kong-internal/ingresses/kong-internal-kong-ingress-kong-admin
  uid: dc9eaf50-395d-11ea-9251-42010a8c4006
spec:
  rules:
  - host: host-dev.example.com
    http:
      paths:
      - backend:
          serviceName: kong-internal-kong-ingress-kong-admin
          servicePort: 8444
        path: /admin-api/internal
  tls:
  - hosts:
    - host-dev.example.com
    secretName: wild.example.com-2019
status:
  loadBalancer:
    ingress:
    - ip: 10.133.0.75

Ingress controller throws the following error:

        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: host-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006')","name":"schema violation","fields":{"snis":"host-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
W0118 06:02:02.440133       1 queue.go:112] requeuing kong-internal/kong-internal-kong-user, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: host-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006')","name":"schema violation","fields":{"snis":"host-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
W0118 06:02:05.727364       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0118 06:02:05.729729       1 kong.go:57] no configuration change, skipping sync to Kong

EDIT: this is a pretty significant issue, because APIs essentially stop working due to the untrusted certificate that Kong falls back on.

@dcherniv
Author

dcherniv commented Jan 18, 2020

Some more information:
Downgrading the ingress controller to 0.6.2 by manually editing the deployment appears to have resolved the problem.
Currently running in this configuration: the 0.6.2 controller is driving the latest 1.4.3 Kong API gateway.

-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.6.2
  Build:      01d61b5
  Repository: git@github.com:kong/kubernetes-ingress-controller.git
  Go:         go1.13.1
-------------------------------------------------------------------------------

I0118 16:57:49.458687       1 main.go:362] Creating API client for https://172.20.0.1:443
I0118 16:57:49.465176       1 main.go:406] Running in Kubernetes Cluster version v1.14+ (v1.14.9-eks-c0eccc) - git (clean) commit c0eccca51d7500bb03b2f163dd8d534ffeb2f7a2 - platform linux/amd64
I0118 16:57:49.609872       1 main.go:148] kong version: 1.4.3
I0118 16:57:49.609895       1 main.go:157] Kong datastore: postgres
I0118 16:57:49.742553       1 controller.go:242] starting Ingress controller
E0118 16:57:49.742880       1 main.go:310] error running the admission controller server:open /admission-webhook/tls.crt: no such file or directory
I0118 16:57:49.748122       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-5b54b6784d-wcwd6
I0118 16:58:51.038930       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-7c44d84575-z97qz
W0118 16:58:54.009171       1 parser.go:772] service demo/a-service does not have any active endpoints
W0118 16:58:54.009268       1 parser.go:772] service dev/b-service does not have any active endpoints
W0118 16:58:54.009393       1 parser.go:772] service dev/c-service does not have any active endpoints
I0118 16:58:54.649603       1 controller.go:135] successfully synced configuration to Kong

@hbagdi
Member

hbagdi commented Jan 20, 2020

Thanks for the detailed report. I'll try to reproduce this and report back here soon.

@hbagdi
Member

hbagdi commented Jan 20, 2020

This happens because of this change: 9bd6a8c#diff-b1c61ff32890f6c8189e0ff5be6e4cc8R790

What happens is that the same certificate is created again with a different ID, and then the SNI is associated with the new certificate, which violates a constraint because the SNI is already associated with the old certificate. This case was foreseen and is mentioned as a TODO comment in the underlying library: https://github.com/hbagdi/deck/blob/beaced6e32ee00658e1580dc027d1899dd5681f3/diff/diff.go#L142

To fix this problem, you have to manually delete the old certificate so that the controller can create the new one. This has to be done only once, during the 0.6 to 0.7 upgrade.
You can fix this in one of two ways:

  • Upgrade to 0.7; the controller will start failing with the above error. Delete the certificate via the Admin API, after which the controller will sync it back and everything will proceed as normal.
  • Roll out a new Kong Ingress Controller with 0.7 and a new database into the cluster, then decommission the old one. This might or might not be feasible in your environment.
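For the first option, the manual deletion might be sketched roughly as follows; this is an illustrative sketch only, assuming the Admin API is reachable at its default address of localhost:8001 and using the certificate ID from the error message in the original report:

```shell
# Illustrative sketch, not an exact procedure: assumes the Kong Admin API
# listens on localhost:8001 (the default); the certificate ID below is the
# one from the error message in the original report.
ADMIN_API="http://localhost:8001"
CERT_ID="11f9d5bf-2343-11ea-a792-0ed7c98255e7"

# Inspect which certificate currently owns the SNI:
curl -s "$ADMIN_API/snis" || true

# Delete the stale certificate; the controller's next sync should recreate
# it from the Kubernetes Secret and re-associate the SNI:
curl -s -X DELETE "$ADMIN_API/certificates/$CERT_ID" || true
```

After the DELETE, the controller's next sync should recreate the certificate from the Kubernetes Secret and re-associate the SNI with it.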

I'd like to handle this automatically in the controller, but I don't think that's possible.

Hope this helps.

@landorg

landorg commented Jan 21, 2020

I am running into the same issue when upgrading the ingress controller. When I delete the certificates or the SNIs, it seems to sync everything, and I also get "successfully synced configuration to Kong". But after a few seconds it starts again with:

while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: *.example.com already associated with existing certificate '1789731b-808f-11e9-a1f3-901b0ee183c7')","name":"schema violation","fields":{"snis":"*.example.com already associated with existing certificate '1789731b-808f-11e9-a1f3-901b0ee183c7'"},"code":2}

@dcherniv
Author

@hbagdi my test on GKE was with a clean installation of 0.7.0, with a new database from scratch. I did in fact try to delete the cert manually, and I can confirm that it started failing again after a few minutes, as @RolandG reported.

@hbagdi
Member

hbagdi commented Jan 21, 2020

@hbagdi my test on GKE was with a clean installation of 0.7.0, new database from scratch.

I'm not sure I follow, what do you mean? Do you see this behavior even when you do a clean installation?

@dcherniv @RolandG A couple more questions:

  • The ID you see for the existing certificate (1789731b-808f-11e9-a1f3-901b0ee183c7 in the above comment): does that ID match the Secret resource which contains the TLS certificate?
  • Has the certificate changed after the upgrade or is it the same?

@dcherniv
Author

dcherniv commented Jan 21, 2020

@hbagdi Correct, the issue appears on a clean installation of the 0.7.0 ingress controller via the latest Helm chart.
The same certificate is installed into two different namespaces:

dchernivetsky@host:~/$ kubectl get secrets -n kong-external | grep wild
wild.example.com-2019                         kubernetes.io/tls                     2      3d13h
dchernivetsky@host:~/$ kubectl get secrets -n kong-internal | grep wild
wild.example.com-2019                         kubernetes.io/tls                     2      32d
dchernivetsky@host:~/$ 

Let me re-create the issue. I ran helm del --purge on the existing Kong ingresses, deleted the DB, and installed both from scratch:

dchernivetsky@host:~/$ kubectl logs -f -n kong-internal kong-internal-kong-ingress-kong-init-migrations-cgnrh
Bootstrapping database...
migrating core on database 'kong'...
core migrated up to: 000_base (executed)
core migrated up to: 001_14_to_15 (executed)
core migrated up to: 002_15_to_1 (executed)
core migrated up to: 003_100_to_110 (executed)
core migrated up to: 004_110_to_120 (executed)
core migrated up to: 005_120_to_130 (executed)
core migrated up to: 006_130_to_140 (executed)
migrating hmac-auth on database 'kong'...
hmac-auth migrated up to: 000_base_hmac_auth (executed)
hmac-auth migrated up to: 001_14_to_15 (executed)
hmac-auth migrated up to: 002_130_to_140 (executed)
migrating oauth2 on database 'kong'...
oauth2 migrated up to: 000_base_oauth2 (executed)
oauth2 migrated up to: 001_14_to_15 (executed)
oauth2 migrated up to: 002_15_to_10 (executed)
oauth2 migrated up to: 003_130_to_140 (executed)
migrating jwt on database 'kong'...
jwt migrated up to: 000_base_jwt (executed)
jwt migrated up to: 001_14_to_15 (executed)
jwt migrated up to: 002_130_to_140 (executed)
migrating basic-auth on database 'kong'...
basic-auth migrated up to: 000_base_basic_auth (executed)
basic-auth migrated up to: 001_14_to_15 (executed)
basic-auth migrated up to: 002_130_to_140 (executed)
migrating key-auth on database 'kong'...
key-auth migrated up to: 000_base_key_auth (executed)
key-auth migrated up to: 001_14_to_15 (executed)
key-auth migrated up to: 002_130_to_140 (executed)
migrating rate-limiting on database 'kong'...
rate-limiting migrated up to: 000_base_rate_limiting (executed)
rate-limiting migrated up to: 001_14_to_15 (executed)
rate-limiting migrated up to: 002_15_to_10 (executed)
rate-limiting migrated up to: 003_10_to_112 (executed)
migrating acl on database 'kong'...
acl migrated up to: 000_base_acl (executed)
acl migrated up to: 001_14_to_15 (executed)
acl migrated up to: 002_130_to_140 (executed)
migrating response-ratelimiting on database 'kong'...
response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
response-ratelimiting migrated up to: 001_14_to_15 (executed)
response-ratelimiting migrated up to: 002_15_to_10 (executed)
migrating session on database 'kong'...
session migrated up to: 000_base_session (executed)
34 migrations processed
34 executed
Database is up-to-date

The internal ingress controller is installed and running by itself:

dchernivetsky@host:~$ kubectl logs -f -n kong-internal kong-internal-kong-ingress-kong-85b7bb6f76-894jt ingress-controller
-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.7.0
  Build:      e4605db
  Repository: git@github.com:kong/kubernetes-ingress-controller.git
  Go:         go1.13.1
-------------------------------------------------------------------------------

I0121 18:06:11.684814       1 main.go:407] Creating API client for https://10.135.0.1:443
I0121 18:06:11.694506       1 main.go:451] Running in Kubernetes Cluster version v1.14+ (v1.14.10-gke.0) - git (clean) commit a988db14950de3628f9e21773f3de0bf52485534 - platform linux/amd64
I0121 18:06:11.966737       1 main.go:187] kong version: 1.4.3
I0121 18:06:11.966793       1 main.go:196] Kong datastore: postgres
I0121 18:06:12.103030       1 controller.go:224] starting Ingress controller
I0121 18:06:12.107625       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-796cb9bff9-hc85g
I0121 18:06:43.013667       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-85b7bb6f76-894jt
W0121 18:06:43.013995       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:06:43.324734       1 kong.go:66] successfully synced configuration to Kong
I0121 18:07:43.022500       1 status.go:342] updating Ingress kong-internal/kong-internal-kong-ingress-kong-admin status to [{10.133.0.32 }]
W0121 18:07:43.029080       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:07:43.030352       1 kong.go:57] no configuration change, skipping sync to Kong

Installed the second controller into the kong-external namespace.
Migrations are run:

dchernivetsky@host:~$ kubectl logs -f -n kong-external kong-external-kong-ingress-kong-init-migrations-sc4rc
Bootstrapping database...
migrating core on database 'kong'...
core migrated up to: 000_base (executed)
core migrated up to: 001_14_to_15 (executed)
core migrated up to: 002_15_to_1 (executed)
core migrated up to: 003_100_to_110 (executed)
core migrated up to: 004_110_to_120 (executed)
core migrated up to: 005_120_to_130 (executed)
core migrated up to: 006_130_to_140 (executed)
migrating hmac-auth on database 'kong'...
hmac-auth migrated up to: 000_base_hmac_auth (executed)
hmac-auth migrated up to: 001_14_to_15 (executed)
hmac-auth migrated up to: 002_130_to_140 (executed)
migrating oauth2 on database 'kong'...
oauth2 migrated up to: 000_base_oauth2 (executed)
oauth2 migrated up to: 001_14_to_15 (executed)
oauth2 migrated up to: 002_15_to_10 (executed)
oauth2 migrated up to: 003_130_to_140 (executed)
migrating jwt on database 'kong'...
jwt migrated up to: 000_base_jwt (executed)
jwt migrated up to: 001_14_to_15 (executed)
jwt migrated up to: 002_130_to_140 (executed)
migrating basic-auth on database 'kong'...
basic-auth migrated up to: 000_base_basic_auth (executed)
basic-auth migrated up to: 001_14_to_15 (executed)
basic-auth migrated up to: 002_130_to_140 (executed)
migrating key-auth on database 'kong'...
key-auth migrated up to: 000_base_key_auth (executed)
key-auth migrated up to: 001_14_to_15 (executed)
key-auth migrated up to: 002_130_to_140 (executed)
migrating rate-limiting on database 'kong'...
rate-limiting migrated up to: 000_base_rate_limiting (executed)
rate-limiting migrated up to: 001_14_to_15 (executed)
rate-limiting migrated up to: 002_15_to_10 (executed)
rate-limiting migrated up to: 003_10_to_112 (executed)
migrating acl on database 'kong'...
acl migrated up to: 000_base_acl (executed)
acl migrated up to: 001_14_to_15 (executed)
acl migrated up to: 002_130_to_140 (executed)
migrating response-ratelimiting on database 'kong'...
response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
response-ratelimiting migrated up to: 001_14_to_15 (executed)
response-ratelimiting migrated up to: 002_15_to_10 (executed)
migrating session on database 'kong'...
session migrated up to: 000_base_session (executed)
34 migrations processed
34 executed
Database is up-to-date

Logs from the kong-internal controller, which now fronts both the external and internal admin Ingress resources I posted earlier in the thread:

W0121 18:09:22.641588       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:09:22.642932       1 kong.go:57] no configuration change, skipping sync to Kong
W0121 18:09:25.975025       1 parser.go:1043] service kong-external/kong-external-kong-ingress-kong-admin does not have any active endpoints
W0121 18:09:25.975158       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:09:26.163761       1 kong.go:66] successfully synced configuration to Kong
W0121 18:09:29.308497       1 parser.go:1043] service kong-external/kong-external-kong-ingress-kong-admin does not have any active endpoints
W0121 18:09:29.308585       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0121 18:09:29.384206       1 controller.go:119] unexpected failure updating Kong configuration: 
1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '9f0fa735-2344-11ea-9251-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '9f0fa735-2344-11ea-9251-42010a8c4006'"},"code":2}
W0121 18:09:29.384271       1 queue.go:112] requeuing kong-external/kong-external-kong-ingress-kong-admin, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '9f0fa735-2344-11ea-9251-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '9f0fa735-2344-11ea-9251-42010a8c4006'"},"code":2}
W0121 18:09:32.641831       1 parser.go:1043] service kong-external/kong-external-kong-ingress-kong-admin does not have any active endpoints
W0121 18:09:32.641971       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:09:32.708171       1 kong.go:66] successfully synced configuration to Kong
W0121 18:09:39.365950       1 parser.go:1043] service kong-external/kong-external-kong-ingress-kong-admin does not have any active endpoints
W0121 18:09:39.366028       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0121 18:09:39.413624       1 controller.go:119] unexpected failure updating Kong configuration: 
1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
W0121 18:09:39.413672       1 queue.go:112] requeuing kong-external/kong-external-kong-ingress-kong-proxy, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
W0121 18:09:42.699555       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0121 18:09:42.750644       1 controller.go:119] unexpected failure updating Kong configuration: 
1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
W0121 18:09:42.750681       1 queue.go:112] requeuing kong-external/kong-external-kong-ingress-kong-admin, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c400
6')","name":"schema violation","fields":{"snis":"apikg-dev.example.com already associated with existing certificate '854c20d3-39a8-11ea-b20f-42010a8c4006'"},"code":2}
I0121 18:09:43.023653       1 status.go:342] updating Ingress kong-external/kong-external-kong-ingress-kong-admin status to [{10.133.0.32 }]
W0121 18:09:46.032855       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:09:46.088177       1 kong.go:66] successfully synced configuration to Kong
W0121 18:09:49.366252       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0121 18:09:49.367749       1 kong.go:57] no configuration change, skipping sync to Kong

In this particular case, the Secret that triggers the error is in the kong-external namespace:

dchernivetsky@host:~$ kubectl get secrets -n kong-external -o yaml wild.example.com-2019 | grep 854c20d3-39a8-11ea-b20f-42010a8c4006
  uid: 854c20d3-39a8-11ea-b20f-42010a8c4006
dchernivetsky@host:~$ 

So, to summarize, to reproduce reliably:

  1. Install the 0.7.0 ingress controller from scratch via the Helm chart into two namespaces.
  2. Create a Secret containing the same certificate in both namespaces, with the same name.
  3. Create two Ingress resources pointing to the Secrets in their respective namespaces.
  4. Make sure the hostnames in the Ingresses are the same.

This should reproduce the issue.
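A condensed sketch of the manifests these steps describe (hostname, Secret name, and backend service are illustrative, following the resources posted earlier in the thread; the key point is the shared host and tls.secretName):

```yaml
# Sketch only: the same kubernetes.io/tls Secret exists under the same name
# in both kong-internal and kong-external, and each namespace gets an
# Ingress that differs only in its path.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-internal-kong-ingress-kong-admin   # one admin Ingress per namespace
  namespace: kong-internal                      # repeat in kong-external
  annotations:
    kubernetes.io/ingress.class: kong-internal
spec:
  rules:
  - host: apikg-dev.example.com                 # same host in both
    http:
      paths:
      - path: /admin-api/internal               # only difference: the path
        backend:
          serviceName: kong-internal-kong-ingress-kong-admin
          servicePort: 8444
  tls:
  - hosts:
    - apikg-dev.example.com
    secretName: wild.example.com-2019           # same Secret name in both
```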

@hbagdi
Member

hbagdi commented Jan 21, 2020

Create two ingress resources pointing to the secrets in their respective namespace.

Do you annotate these two Ingress resources with an ingress.class? Can you also share your ingress controller config (CLI flags and environment variables)?

@dcherniv
Author

@hbagdi
Here are the Ingress resources in question:

dchernivetsky@host:~$ kubectl get ing -n kong-internal
NAME                                    HOSTS                   ADDRESS       PORTS     AGE
kong-internal-kong-ingress-kong-admin   apikg-dev.example.com   10.133.0.32   80, 443   3h47m
dchernivetsky@host:~$ kubectl get ing -n kong-external
NAME                                    HOSTS                   ADDRESS       PORTS     AGE
kong-external-kong-ingress-kong-admin   apikg-dev.example.com   10.133.0.32   80, 443   3h43m
dchernivetsky@host:~$ 

Details of each ingress resource. kong-external:

dchernivetsky@host:~$ kubectl get ing -o yaml -n kong-external
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: kong-internal
      plugins.konghq.com: kong-external-kong-acl,kong-external-kong-auth,kong-external-kong-prometheus
    creationTimestamp: "2020-01-21T18:09:22Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: kong-external-kong-ingress
      app.kubernetes.io/managed-by: Tiller
      app.kubernetes.io/name: kong
      app.kubernetes.io/version: "1.4"
      helm.sh/chart: kong-1.0.0
    name: kong-external-kong-ingress-kong-admin
    namespace: kong-external
    resourceVersion: "73597622"
    selfLink: /apis/extensions/v1beta1/namespaces/kong-external/ingresses/kong-external-kong-ingress-kong-admin
    uid: 26bb7091-3c79-11ea-b906-42010a8c4009
  spec:
    rules:
    - host: apikg-dev.example.com
      http:
        paths:
        - backend:
            serviceName: kong-external-kong-ingress-kong-admin
            servicePort: 8444
          path: /admin-api/external
    tls:
    - hosts:
      - apikg-dev.example.com
      secretName: wild.example.com-2019
  status:
    loadBalancer:
      ingress:
      - ip: 10.133.0.32
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
dchernivetsky@host:~$ 

kong-internal:

dchernivetsky@host:~$  kubectl get ing -o yaml -n kong-internal
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: kong-internal
      plugins.konghq.com: kong-internal-kong-acl,kong-internal-kong-auth,kong-internal-kong-prometheus
    creationTimestamp: "2020-01-21T18:06:01Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: kong-internal-kong-ingress
      app.kubernetes.io/managed-by: Tiller
      app.kubernetes.io/name: kong
      app.kubernetes.io/version: "1.4"
      helm.sh/chart: kong-1.0.0
    name: kong-internal-kong-ingress-kong-admin
    namespace: kong-internal
    resourceVersion: "73597623"
    selfLink: /apis/extensions/v1beta1/namespaces/kong-internal/ingresses/kong-internal-kong-ingress-kong-admin
    uid: aed113e9-3c78-11ea-b906-42010a8c4009
  spec:
    rules:
    - host: apikg-dev.example.com
      http:
        paths:
        - backend:
            serviceName: kong-internal-kong-ingress-kong-admin
            servicePort: 8444
          path: /admin-api/internal
    tls:
    - hosts:
      - apikg-dev.example.com
      secretName: wild.example.com-2019
  status:
    loadBalancer:
      ingress:
      - ip: 10.133.0.32
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The only difference between the two ingress resources is the path on which they are exposed.

@dcherniv
Author

dcherniv commented Jan 21, 2020

Here are the flags:
kong-internal

dchernivetsky@host:~$ kubectl exec -it -n kong-internal kong-internal-kong-ingress-kong-85b7bb6f76-gpsbr -c ingress-controller /bin/sh
/ $ ps auxwww | grep kong
    1 kic       0:13 /kong-ingress-controller /kong-ingress-controller --publish-service=kong-internal/kong-internal-kong-ingress-kong-proxy --ingress-class=kong-internal --election-id=kong-ingress-controller-leader-kong-internal --kong-url=http://localhost:8444
   44 kic       0:00 grep kong
/ $ 

kong-external:

dchernivetsky@host:~$ kubectl exec -it -n kong-external kong-external-kong-ingress-kong-558cb54b7d-hfn4n /bin/sh
Defaulting container name to ingress-controller.
Use 'kubectl describe pod/kong-external-kong-ingress-kong-558cb54b7d-hfn4n -n kong-external' to see all of the containers in this pod.
/ $ ps auxwww | grep kong
    1 kic       0:11 /kong-ingress-controller /kong-ingress-controller --publish-service=kong-external/kong-external-kong-ingress-kong-proxy --ingress-class=kong-external --election-id=kong-ingress-controller-leader-kong-external --kong-url=http://localhost:8444
   31 kic       0:00 grep kong
/ $

Environment variables:
external

      env:
      - name: KONG_PG_HOST
        value: db-external.example.com
      - name: KONG_PG_USER
        value: postgres
      - name: KONG_PG_PORT
        value: "5432"
      - name: KONG_PG_PASSWORD
        valueFrom:
          secretKeyRef:
            key: postgresql-password
            name: kong-external-kong-postgres
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace

Internal:

      - name: KONG_PG_HOST
        value: db-internal.example.com
      - name: KONG_PG_USER
        value: postgres
      - name: KONG_PG_PORT
        value: "5432"
      - name: KONG_PG_PASSWORD
        valueFrom:
          secretKeyRef:
            key: postgresql-password
            name: kong-internal-kong-postgres
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace

@hbagdi

hbagdi commented Jan 21, 2020

You have ingress class as kong-internal in both the Ingress resources.
Is that intended?

@dcherniv

@hbagdi Yes, that is intended. I want both the external and internal admin APIs to be exposed on the internal controller. I don't want the external API exposed publicly.

@hbagdi

hbagdi commented Jan 22, 2020

Can you remove the tls section from one of the Ingress resources? That might solve the problem.

@dcherniv

dcherniv commented Jan 22, 2020

@hbagdi As in, to test, or permanently? These are valid Ingress specs.

This has the effect of disabling TLS on the ingress as expected. It doesn't really solve the problem, because TLS is a must.

dcherniv@debbie:~$ curl apikg-dev.example.com/admin-api/external
{"message":"No API key found in request"}dcherniv@debbie:~$ curl https://apikg-dev.example.com/admin-api/external
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
dcherniv@debbie:~$ 

Works on port 80 as expected, fails on port 443. But the error does indeed go away from kong logs.

@hbagdi

hbagdi commented Jan 22, 2020

Let me explain.

You have two Ingress resources, both being satisfied by the same Kong, (kong-internal).
Both of them have identical TLS sections:

    tls:
    - hosts:
      - apikg-dev.example.com
      secretName: wild.example.com-2019
    tls:
    - hosts:
      - apikg-dev.example.com
      secretName: wild.example.com-2019

The thing is, you only need one of those sections. TLS configuration is not tied to a specific Ingress rule; the TLS certificate and SNI configuration is global, meaning it applies to all requests.

So, if you specify the TLS section once, that should be enough.

Unless, apikg-dev.example.com is different in the two resources.
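For illustration, here is a minimal sketch of that layout. The host, secret name, and ingress class are taken from this thread; the resource and backend names are hypothetical. Only the first Ingress declares a tls section; the second shares the same host without repeating it.

```yaml
# Ingress A: carries the TLS section for the shared host.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-a            # hypothetical
  annotations:
    kubernetes.io/ingress.class: kong-internal
spec:
  tls:
  - hosts:
    - apikg-dev.example.com
    secretName: wild.example.com-2019
  rules:
  - host: apikg-dev.example.com
    http:
      paths:
      - path: /v2/ServiceA
        backend:
          serviceName: service-a   # hypothetical backend
          servicePort: 80
---
# Ingress B: same host, different path, no tls section needed.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-b            # hypothetical
  annotations:
    kubernetes.io/ingress.class: kong-internal
spec:
  rules:
  - host: apikg-dev.example.com
    http:
      paths:
      - path: /v2/ServiceB
        backend:
          serviceName: service-b   # hypothetical backend
          servicePort: 80
```

Since the TLS configuration is global per SNI, requests to either path are served with the same certificate.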

@dcherniv

@hbagdi That makes sense, but it's not quite right: the paths for the Ingresses are different. They are different Ingress resources installed by different applications in different namespaces.
Let me give you a more concrete use case.
The secret that contains the certificate is installed in the namespace by the cluster administrator.
The certificates are wildcards. I want to absolve our developers from the responsibility of using cert-manager.
Every developer is then free to write their own ingress spec to expose the application on whatever path they want. They are all using the same wildcard secret.
The ingress resources that developers write are in the separate repos. They are NOT one big ingress resource, that would be unmaintainable in the long run.

So with all the above in mind: if developer A exposes their service on apikg-dev.example.com/v2/ServiceA with wild.example.com-2019 in the TLS section, and developer B exposes their service on apikg-dev.example.com/v2/ServiceB with wild.example.com-2019, then this configuration appears to be unsupported.
I would understand if the ingress resources from developer A and developer B were exactly identical. That would be a legitimate collision. But in our case we do path-based routing; the ingress resources are DIFFERENT.
In other words, whatever mechanism Kong uses to ensure ingress resource uniqueness appears to be broken in version 0.7.0. It appears to be hashing on host and secret name, whereas it should hash on host+path+secretName (just guessing here).

@hbagdi

hbagdi commented Jan 22, 2020

I understand your problem here, but please do note that TLS usually has nothing to do with Ingress rules. You can't control the TLS properties of an endpoint based on different paths.

the ingress resources are DIFFERENT.

Indeed. I didn't say the Ingress resources are the same, I said that the tls sections of the two Ingress resources are exactly the same.

I want to absolve our developers from the responsibility of using cert-manager.

This is a common separation of responsibilities that we see with a lot of users. An alternate approach I've seen is to have a namespace in the cluster that is used by ops folks, where you create an Ingress rule with all the TLS configuration in it. Then the developers don't even need to specify a TLS section in their Ingress rules at all.
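A sketch of that approach, assuming a hypothetical `ops` namespace (the resource name is made up; only the host, secret name, and ingress class come from this thread). The TLS-only Ingress lives with the ops team, and developer Ingresses elsewhere omit tls entirely:

```yaml
# Hypothetical ops-owned Ingress that exists only to bind the wildcard
# certificate to the host; it declares no routing rules of its own.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-config           # hypothetical
  namespace: ops             # hypothetical ops-controlled namespace
  annotations:
    kubernetes.io/ingress.class: kong-internal
spec:
  tls:
  - hosts:
    - apikg-dev.example.com
    secretName: wild.example.com-2019
```

Because the controller processes tls sections globally, this single resource supplies the certificate for every path-based Ingress on the same host.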

I do understand that this is broken for you at the moment, and we will try to fix it in a patch release.
I'm looking into ways to tackle this, but figuring out a robust solution is not straightforward here.

@dcherniv

dcherniv commented Jan 22, 2020

@hbagdi Thank you for your help.

An alternate approach I've seen is to have a namespace in the cluster that is used by ops folks and you create an Ingress rule with all the TLS configurations in it.

I would like to avoid doing that, because it creates an organizational barrier. But it is definitely an approach that one can take.

I also understand that it may be non-trivial to fix, and I'm not asking for an immediate fix. I'm quite happy staying on 0.6.2 for the time being.
This is a regression, though, because the ingress specs are valid. Kubernetes doesn't place any restrictions on the location and composition of ingress resources as far as TLS goes. For Kong ingress to conform to specifications, it needs to fully support the Ingress spec.

Again thank you for your help and support.

@hbagdi

hbagdi commented Jan 22, 2020

The root cause here is this change: Kong/deck@890404c.

@dcherniv In your case, there is a collision.
The same SNI apikg-dev.example.com is associated with two different Secrets.
Each SNI can have one and only one certificate (and hence one Secret).
The only solution here is to set an order of precedence, where either the oldest or the newest Ingress/Secret is honored by the controller.
Do you have any thoughts on this?

@dcherniv

dcherniv commented Jan 22, 2020

The ideal solution would be to refer to a secret in a separate namespace so that it is guaranteed to be unique.
But as far as I recall, that is disallowed by the Ingress spec. We rewrote our certificates Helm chart to install the same wildcard in every namespace because of the issue below.
kubernetes/ingress-nginx#2170
In the absence of that, perhaps a checksum on the secret content would suffice, and if the secrets are the same, just pick one of them?
Although I can foresee a use case where someone might want to do a gradual rollout of a new wildcard cert through different namespaces over the course of a week, in which case the contents of the certs would be different.
My Golang is rusty, so I can't really suggest a meaningful workaround. I'm just a user.

@hbagdi

hbagdi commented Jan 22, 2020

I'd highly recommend changing your configuration to keep all TLS certificates in one namespace and creating Ingress resources with tls sections in that namespace only. This also frees your developers from worrying about TLS certificates.

Meanwhile, I'm working on a solution for this. I'll ping you once I've something. Thanks for the promptness here, appreciate it!

@hbagdi

hbagdi commented Jan 27, 2020

Hey @dcherniv,

Can you test out a change for this fix?

I've created a hbagdi/tests:secret-sync docker image hosted on Dockerhub.

It contains the code from current master branch plus this patch:

diff --git a/internal/ingress/controller/parser/parser.go b/internal/ingress/controller/parser/parser.go
index 45432ae..1eb9c71 100644
--- a/internal/ingress/controller/parser/parser.go
+++ b/internal/ingress/controller/parser/parser.go
@@ -378,9 +378,52 @@ func (p *Parser) fillConsumersAndCredentials(state *KongState) error {
        return nil
 }
 
+func filterHosts(secretNameToSNIs map[string][]string, hosts []string) []string {
+       hostsToAdd := []string{}
+       seenHosts := map[string]bool{}
+       for _, hosts := range secretNameToSNIs {
+               for _, host := range hosts {
+                       seenHosts[host] = true
+               }
+       }
+       for _, host := range hosts {
+               if !seenHosts[host] {
+                       hostsToAdd = append(hostsToAdd, host)
+               }
+       }
+       return hostsToAdd
+}
+
+func processTLSSections(tlsSections []networking.IngressTLS,
+       namespace string, secretNameToSNIs map[string][]string) {
+       // TODO: optmize: collect all TLS sections and process at the same
+       // time to avoid regenerating the seen map; or use a seen map in the
+       // parser struct itself.
+       for _, tls := range tlsSections {
+               if len(tls.Hosts) == 0 {
+                       continue
+               }
+               if tls.SecretName == "" {
+                       continue
+               }
+               hosts := tls.Hosts
+               secretName := namespace + "/" + tls.SecretName
+               hosts = filterHosts(secretNameToSNIs, hosts)
+               if secretNameToSNIs[secretName] != nil {
+                       hosts = append(hosts, secretNameToSNIs[secretName]...)
+               }
+               secretNameToSNIs[secretName] = hosts
+       }
+}
+
 func (p *Parser) parseIngressRules(
        ingressList []*networking.Ingress) (*parsedIngressRules, error) {
 
+       sort.SliceStable(ingressList, func(i, j int) bool {
+               return ingressList[i].CreationTimestamp.Before(
+                       &ingressList[j].CreationTimestamp)
+       })
+
        // generate the following:
        // Services and Routes
        var allDefaultBackends []networking.Ingress
@@ -396,20 +439,7 @@ func (p *Parser) parseIngressRules(
 
                }
 
-               for _, tls := range ingressSpec.TLS {
-                       if len(tls.Hosts) == 0 {
-                               continue
-                       }
-                       if tls.SecretName == "" {
-                               continue
-                       }
-                       hosts := tls.Hosts
-                       secretName := ingress.Namespace + "/" + tls.SecretName
-                       if secretNameToSNIs[secretName] != nil {
-                               hosts = append(hosts, secretNameToSNIs[secretName]...)
-                       }
-                       secretNameToSNIs[secretName] = hosts
-               }
+               processTLSSections(ingressSpec.TLS, ingress.Namespace, secretNameToSNIs)
 
                for i, rule := range ingressSpec.Rules {
                        host := rule.Host

If this works, then I'll polish this change up to fix the problem.
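To see what the patch does in isolation, here is a standalone sketch of the filterHosts helper from the diff above, with a hypothetical driver (the map key and hostnames are taken from this thread's examples). A host already claimed by one Secret is dropped when a later Ingress re-declares it under a different Secret, which is what avoids the SNI collision in Kong:

```go
package main

import "fmt"

// filterHosts mirrors the helper in the proposed patch: it drops any host
// that is already claimed by some secret in secretNameToSNIs, so an SNI is
// only ever associated with the first secret that declared it.
func filterHosts(secretNameToSNIs map[string][]string, hosts []string) []string {
	seenHosts := map[string]bool{}
	for _, claimed := range secretNameToSNIs {
		for _, host := range claimed {
			seenHosts[host] = true
		}
	}
	hostsToAdd := []string{}
	for _, host := range hosts {
		if !seenHosts[host] {
			hostsToAdd = append(hostsToAdd, host)
		}
	}
	return hostsToAdd
}

func main() {
	// The first (oldest) Ingress already bound this host to its secret.
	secretNameToSNIs := map[string][]string{
		"dev/wild.example.com-2019": {"apikg-dev.example.com"},
	}
	// A second Ingress re-declares the same host under a different secret;
	// the duplicate host is filtered out instead of colliding in Kong.
	fmt.Println(filterHosts(secretNameToSNIs,
		[]string{"apikg-dev.example.com", "other.example.com"}))
	// prints [other.example.com]
}
```

Combined with the sort.SliceStable call on CreationTimestamp, this means the oldest Ingress wins the host, matching the precedence discussed earlier in the thread.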

@dcherniv

@hbagdi looks good so far.

-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    secret-sync
  Build:      45a414b
  Repository: [email protected]:kong/kubernetes-ingress-controller.git
  Go:         go1.13.1
-------------------------------------------------------------------------------

I0129 23:36:32.173881       1 main.go:407] Creating API client for https://10.135.0.1:443
I0129 23:36:32.183577       1 main.go:451] Running in Kubernetes Cluster version v1.14+ (v1.14.10-gke.0) - git (clean) commit a988db14950de3628f9e21773f3de0bf52485534 - platform linux/amd64
I0129 23:36:32.356569       1 main.go:187] kong version: 1.4.3
I0129 23:36:32.356599       1 main.go:196] Kong datastore: postgres
I0129 23:36:32.690393       1 controller.go:224] starting Ingress controller
I0129 23:36:32.694969       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-796cb9bff9-fvvh7
I0129 23:36:48.606967       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-796cb9bff9-zdnz4
I0129 23:37:20.304900       1 status.go:201] new leader elected: kong-internal-kong-ingress-kong-6946994b56-gdwpb
W0129 23:37:20.305208       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0129 23:37:20.545462       1 kong.go:66] successfully synced configuration to Kong
W0129 23:40:34.297145       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0129 23:40:34.298313       1 kong.go:57] no configuration change, skipping sync to Kong
W0129 23:40:50.638505       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0129 23:40:50.638701       1 parser.go:958] reading KongPlugin 'kong-internal/kong-internal-kong-prometheu': fetching KongPlugin: plugin kong-internal/kong-internal-kong-prometheu was not found
I0129 23:40:50.746263       1 kong.go:66] successfully synced configuration to Kong
W0129 23:41:00.072282       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0129 23:41:00.133522       1 kong.go:66] successfully synced configuration to Kong

I triggered the sync by editing ingresses in both internal and external namespaces.

@hbagdi

hbagdi commented Jan 30, 2020

Awesome!
I'll polish the code up and submit a PR.
I also plan to include this in 0.7.1 release (sometime later this week or early next week)!

@elkh510

elkh510 commented Jan 31, 2020

Hi @hbagdi,
Could you please tell me what the problem could be? I use one wildcard certificate in several namespaces, with the Kong ingress controller running the docker image hbagdi/tests:secret-sync.
When we add another ingress with TLS to another namespace (the ingress has a different URL, wild2.example.com), the following error appears in the ingress logs:
while processing event: {Create} failed: 400 Bad Request {"message": "schema violation (snis: wild1.example.com already associated with existing certificate '909a0fd9-4d94-4adf-a795-102a482a65c2')", "name": "schema violation", "fields": {"snis": "wild1.example.com already associated with existing certificate '909a0fd9-4d94-4adf-a795-102a482a65c2'"}, "code": 2}
This happens several times, after which everything works.
Is there any way to fix this?

@hbagdi

hbagdi commented Jan 31, 2020

@elkh510 Can you share your Ingress resources?
It seems like you have a common SNI between two Ingress resources.

hbagdi added a commit that referenced this issue Jan 31, 2020
When the same SNI is associated with different Kubernetes Secrets.
When using Kong with a database, this results in the same SNI being
associated with a different certificate and the controller fails to
update the SNI in Kong.

This fix reduces the likelihood of this happening for some cases but
will not fix the problem for all cases.

See
Kong/deck@890404c
for the root cause of this issue.

Fix #510
hbagdi added a commit that referenced this issue Jan 31, 2020
From #523
@dcherniv

dcherniv commented Jan 31, 2020

@hbagdi I'd hate to resurrect a closed ticket; I can open a new one if you want.
I just tried 0.7.1, and it is better in that TLS doesn't seem to break anymore. The configuration appears to update properly when Kong resources are added, modified, or deleted. However, there still appears to be an issue:

 while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd')","name":"schema violation","fields":{"snis":"demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd'"},"code":2}
W0131 22:50:16.069320       1 queue.go:112] requeuing kong-internal/kong-internal-kong-ingress-kong-admin, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd')","name":"schema violation","fields":{"snis":"demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd'"},"code":2}
W0131 22:50:19.207064       1 parser.go:1079] service dev/dev-ocr-service does not have any active endpoints
W0131 22:50:19.207330       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
E0131 22:50:19.431283       1 controller.go:119] unexpected failure updating Kong configuration: 
1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd')","name":"schema violation","fields":{"snis":"demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd'"},"code":2}
W0131 22:50:19.431319       1 queue.go:112] requeuing dev/echoheaders, err 1 errors occurred:
        while processing event: {Create} failed: 400 Bad Request {"message":"schema violation (snis: demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd')","name":"schema violation","fields":{"snis":"demo.example.com already associated with existing certificate '2a6ab519-83ec-49e6-85ae-6ac214f93edd'"},"code":2}

Update:
This appears to happen during the initial sync/reconcile.
Once everything settles (the ingress controller reconciles state between Kubernetes and the API gateway), things appear to work:

I0131 22:55:10.522034       1 kong.go:57] no configuration change, skipping sync to Kong
W0131 22:55:13.851217       1 parser.go:1079] service dev/dev-ocr-service does not have any active endpoints
W0131 22:55:13.851447       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0131 22:55:13.855469       1 kong.go:57] no configuration change, skipping sync to Kong
W0131 22:55:17.184543       1 parser.go:1079] service dev/dev-ocr-service does not have any active endpoints
W0131 22:55:17.184731       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0131 22:55:17.189021       1 kong.go:57] no configuration change, skipping sync to Kong
W0131 22:55:25.184148       1 parser.go:1079] service dev/dev-ocr-service does not have any active endpoints
W0131 22:55:25.184390       1 parser.go:339] Deprecated KongCredential in use, please use secret-based credentials. KongCredential resource will be removed in future.
I0131 22:55:25.331643       1 kong.go:66] successfully synced configuration to Kong

@hbagdi
Copy link
Member

hbagdi commented Jan 31, 2020

Yeah, I expected this. The current solution I've put in works but is not perfect. It should never break TLS, but it will sometimes result in transient errors depending on how you update/create your resources.
Feel free to open another issue.
