

Random ObjectReference errors during ingress deletion #2256

Closed
mahpatil opened this issue Mar 25, 2018 · 3 comments
Comments

mahpatil commented Mar 25, 2018

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Event(v1.ObjectReference, wrap.go:42] PATCH
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version:
0.11.0

Kubernetes version (use kubectl version):
1.8.5

Environment:
Production

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CoreOS
  • Kernel (e.g. uname -a):

What happened:
We noticed an outage in our beta production environment. A Helm-based deployment at around 12:34:34 appears to have triggered the ingress watch and reloads, which in turn triggered deletes on a few ingresses belonging to certain services. What we see in the logs is an ObjectReference error that recurs constantly, as if the ingress controller were repeatedly trying to delete these ingresses and failing with an object reference error. This went on for a few hours; during that time multiple ingress reloads were triggered and the apps stopped responding or were timing out.
We resolved it by deleting some of the ingresses and redeploying an app, but we want to understand why this could have happened in the first place.

March 23rd 2018, 12:34:47.000 kube-system - stdout - 2018-03-23 12:34:47.488 [INFO][127] int_dataplane.go 705: Finished applying updates to dataplane. msecToApply=1.31537
March 23rd 2018, 12:34:47.000 kube-system - stdout I0323 12:34:47.410096 8 wrap.go:42] GET /api/v1/nodes/ip-172-XX-XX-227.eu-west-2.compute.internal?resourceVersion=0: (915.504µs) 200 [[kubelet/v1.8.5 (linux/amd64) kubernetes/cce11c6] 172.XX.XX.227:56056]
March 23rd 2018, 12:34:48.000 kube-system - stdout I0323 12:34:48.290214 8 wrap.go:42] GET /apis/extensions/v1beta1/ingresses?resourceVersion=46259159&timeoutSeconds=381&watch=true: (6m21.000828738s) 200 [[nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format] 172.XX.XX.106:56520]
March 23rd 2018, 12:34:48.000 kube-system - stdout I0323 12:34:48.291218 8 rest.go:362] Starting watch for /apis/extensions/v1beta1/ingresses, rv=45136176 labels= fields= timeout=8m33s
March 23rd 2018, 12:34:48.000 kube-system - stdout I0323 12:34:48.327241 8 wrap.go:42] PATCH /api/v1/namespaces/app-prd/events/app-webapp.151c1a2a21d85808: (7.692759ms) 200 [[nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format] 172.XX.XX.106:56520]
March 23rd 2018, 12:34:48.000 nginx-ingress - stderr - I0323 12:34:48.297721 7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"app-prd", Name:"app-webapp", UID:"6ae234e0-2852-11e8-b900-0adf3eaaf8e4", APIVersion:"extensions", ResourceVersion:"45365471", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress app-webapp
March 23rd 2018, 12:34:48.000 nginx-ingress - stderr - I0323 12:34:48.295955 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"app-prd", Name:"app-webapp", UID:"6ae234e0-2852-11e8-b900-0adf3eaaf8e4", APIVersion:"extensions", ResourceVersion:"45365471", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress app-webapp
March 23rd 2018, 12:34:48.000 nginx-ingress - stderr - I0323 12:34:48.297979 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"app-prd", Name:"app-webapp", UID:"6ae234e0-2852-11e8-b900-0adf3eaaf8e4", APIVersion:"extensions", ResourceVersion:"45365471", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress app-webapp
...
March 23rd 2018, 12:34:48.000 nginx-ingress - stderr - I0323 12:34:48.300507 7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"-prd", Name:"-service", UID:"aa857936-2c52-11e8-9f5e-06cdee462168", APIVersion:"extensions", ResourceVersion:"45412187", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress -prd/-service

What you expected to happen:
Expected the ingress deletion to complete and the controller to proceed with the subsequent updates.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

aledbf (Member) commented Mar 26, 2018

What we see from the logs is an ObjectReference error, and this seems to occur constantly

This log line is received by the ingress informer and is just an event; it only triggers an update.
If you see this, it basically means "something" is calling kubectl delete .... on an Ingress.

Can you post the log from the pod? (Feel free to redact any private information.)
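If it is unclear what issued those delete calls, one way to investigate is to cross-check the controller's DELETE events against the cluster's Event objects. A minimal sketch, assuming kubectl access to the affected cluster; the namespace app-prd and Ingress name app-webapp are taken from the logs above:

```shell
# Sketch: trace what triggered the Ingress deletes. Assumes kubectl is
# configured against the affected cluster; "app-prd" is the namespace
# seen in the logs above.
NS="app-prd"

if command -v kubectl >/dev/null 2>&1; then
  # List Events attached to Ingress objects in the namespace, oldest
  # first, so the DELETE burst around 12:34:48 is easy to spot.
  kubectl get events -n "$NS" \
    --field-selector involvedObject.kind=Ingress \
    --sort-by=.lastTimestamp
else
  echo "kubectl not found; run this against the affected cluster"
fi
```

API-server audit logging, if enabled, would additionally record which user or service account issued each DELETE request.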

mahpatil (Author) commented Apr 2, 2018

@aledbf thanks for getting back. Yes, there was a helm delete that triggered multiple deletes; however, it was done for a specific web application in one namespace, yet it seemed to trigger updates/changes to unrelated apps running in other namespaces.
Assuming you want logs from the ingress controller, I've attached more logs from the ingress controller pods.
logs.txt

aledbf (Member) commented Jun 15, 2018

Closing. This was fixed in #2598

aledbf closed this as completed on Jun 15, 2018