Refactor default backend handling and add better events #21

Merged
bprashanth merged 5 commits into kubernetes:master on Dec 7, 2016

Conversation

bprashanth
Contributor

We were setting up and tearing down the default HTTP backend (the one used in URL maps that don't have a user-specified default backend) in places that didn't make sense. This PR also sends better events that include health check information, and sets better timeouts on readiness probes converted to health checks.
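To make the readiness-probe-to-health-check timeout part concrete, here is a minimal, self-contained Go sketch of that kind of translation. The types, field names, and floor values below are illustrative assumptions for the sketch, not the controller's actual types or constants; the real code reads the pod's readiness probe via the Kubernetes API and writes a GCE HTTP health check.

```go
package main

import "fmt"

// readinessProbe and healthCheck are hypothetical, trimmed-down stand-ins for
// the Kubernetes probe fields the controller reads and the GCE health check it
// writes; the real code uses the Kubernetes and GCE API types.
type readinessProbe struct {
	Path           string
	TimeoutSeconds int64
	PeriodSeconds  int64
}

type healthCheck struct {
	RequestPath      string
	TimeoutSec       int64
	CheckIntervalSec int64
}

// probeToHealthCheck copies the probe's path and timing onto the health check,
// but never below some floor values. The floors here are made up for the
// sketch; the idea is just that an overly aggressive timeout, copied verbatim,
// would make the load balancer flag healthy backends as unhealthy.
func probeToHealthCheck(p readinessProbe) healthCheck {
	const minTimeoutSec, minIntervalSec = 5, 10 // illustrative, not the controller's values
	hc := healthCheck{
		RequestPath:      p.Path,
		TimeoutSec:       p.TimeoutSeconds,
		CheckIntervalSec: p.PeriodSeconds,
	}
	if hc.TimeoutSec < minTimeoutSec {
		hc.TimeoutSec = minTimeoutSec
	}
	if hc.CheckIntervalSec < minIntervalSec {
		hc.CheckIntervalSec = minIntervalSec
	}
	return hc
}

func main() {
	hc := probeToHealthCheck(readinessProbe{Path: "/healthz", TimeoutSeconds: 1, PeriodSeconds: 2})
	// The PR's "better events" would record something like this against the
	// Ingress; printing stands in for the event recorder here.
	fmt.Printf("health check: path=%s timeout=%ds interval=%ds\n",
		hc.RequestPath, hc.TimeoutSec, hc.CheckIntervalSec)
}
```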

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) Nov 24, 2016

@@ -694,7 +694,7 @@ func (l *L7) UpdateUrlMap(ingressRules utils.GCEURLMap) error {
 	} else {
 		l.um.DefaultService = l.glbcDefaultBackend.SelfLink
 	}
-	glog.V(3).Infof("Updating url map %+v", ingressRules)
+	glog.Infof("Updating url map:\n%+v", ingressRules)
Contributor

The default level is 0, right?

Contributor Author

V(2) is default
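For context on the verbosity exchange above: with glog, a plain glog.Infof always logs at INFO, while glog.V(n).Infof only logs when the process runs with -v=n or higher. A minimal runnable sketch, assuming the standard github.com/golang/glog flags (the -v=2 mentioned above is this controller's default flag setting, not glog's own default):

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers its flags (-v, -logtostderr, ...) on the default FlagSet.
	flag.Parse()

	glog.Infof("always emitted at INFO, never filtered by -v")
	glog.V(2).Infof("emitted when running with -v=2 or higher")
	glog.V(3).Infof("emitted only with -v=3 or higher, so hidden at -v=2")
	glog.Flush()
}
```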

	if err != nil {
		return nil, err
	}
	glog.Warningf("Creating l7 without a default backend")
Contributor

I noticed this before when I played with Ingress. Will this incur a lag for the default backend?

Contributor Author

We were always creating it as part of sync (https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/loadbalancers/loadbalancers.go#L180), so this was just a redundant check. But we were also deleting it as part of sync, when there are no more lbs: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/loadbalancers/loadbalancers.go#L191

That's not the right time to try to delete it, because a url map could still be referencing it (the delete would fail with "backend already in use", then we'd requeue, come back in, delete the url map, and only then would the backend delete pass).

So I moved the delete to GC(), which happens after url map deletion:
https://github.com/kubernetes/ingress/pull/21/files#diff-247e0d4a5e00d30d32c0e454986614cdR210
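To illustrate the ordering described in this comment, here is a small, self-contained Go sketch. fakeCloud, gc, and the error string are hypothetical stand-ins for the controller's GCE wrappers and error handling, not the actual code; the real GC() talks to the compute API.

```go
package main

import (
	"fmt"
	"strings"
)

// fakeCloud is a hypothetical stand-in for the controller's GCE wrappers,
// just enough to show why deletion order matters.
type fakeCloud struct {
	urlMapsGone bool
}

func (c *fakeCloud) DeleteURLMaps() error {
	c.urlMapsGone = true
	return nil
}

func (c *fakeCloud) DeleteDefaultBackend() error {
	if !c.urlMapsGone {
		// Mimics GCE refusing to delete a backend a URL map still references.
		return fmt.Errorf("resourceInUse: backend already in use")
	}
	return nil
}

// gc sketches the ordering described above: tear down the URL maps first,
// then the shared default backend. In the reverse order the backend delete
// fails with "in use" and the controller has to requeue and come back.
func gc(c *fakeCloud) error {
	if err := c.DeleteURLMaps(); err != nil {
		return fmt.Errorf("deleting url maps: %v", err)
	}
	if err := c.DeleteDefaultBackend(); err != nil {
		if strings.Contains(err.Error(), "resourceInUse") {
			return fmt.Errorf("default backend still referenced, requeue: %v", err)
		}
		return err
	}
	return nil
}

func main() {
	if err := gc(&fakeCloud{}); err != nil {
		fmt.Println("gc failed:", err)
		return
	}
	fmt.Println("gc succeeded: url maps deleted before the default backend")
}
```

The design point is simply that a shared default backend can only be deleted once nothing references it, so its cleanup belongs after URL map deletion rather than inside the per-sync teardown path.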

@freehan
Contributor

freehan commented Dec 6, 2016

LGTM

@freehan
Contributor

freehan commented Dec 7, 2016

I do not have permission to merge or apply label.

@bprashanth
Contributor Author

Oh, thanks

@bprashanth added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) Dec 7, 2016
@bprashanth merged commit cd07b0b into kubernetes:master on Dec 7, 2016
haoqing0110 referenced this pull request in stolostron/management-ingress Mar 5, 2021
Alvaro-Campesino added a commit to Alvaro-Campesino/ingress-nginx-k8s that referenced this pull request Feb 10, 2023
* deadlock detected

* Add debug and fix duped UID

* clean comments
Alvaro-Campesino added a commit to Alvaro-Campesino/ingress-nginx-k8s that referenced this pull request Feb 13, 2023