Resolved a race condition where provisioners could skip having defaults if applied before karpenter-webhook came online #465
Conversation
@@ -46,8 +46,8 @@ spec:
          containerPort: 8081
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTPS
A couple of quick questions that may just reflect my lack of understanding of liveness probes:
- Do we still need a path here?
- Does setting the probe to the same port as the webhook container mean it only reports healthy when that container is healthy?
- Do we still need the health-probe port at 8081?
Do we still need a path here?
Knative pkg sets up probe handlers here: https://github.com/knative/pkg/blob/b558677ab03404ed118ba3e179a7438fe2d8509a/network/handlers/drain.go#L83. They do not require a path and share the port with the webhook. It's actually pretty cool how it works. I just missed this the first time around, and it was a good opportunity to simplify and remove our custom webhook.
Does setting the probe to the same port as the webhook container mean it only reports healthy when that container is healthy?
Yeah more or less. See the code above.
Do we still need the health-probe port at 8081?
Good catch! No! Removed.
// Controllers and webhook
sharedmain.MainWithConfig(
	webhook.WithOptions(injection.WithNamespaceScope(signals.NewContext(), system.Namespace()), webhook.Options{
		Port:        options.Port,
		ServiceName: "karpenter-webhook",
		SecretName:  "karpenter-webhook-cert",
	}),
	"Karpenter Webhooks",
	"karpenter.webhooks",
Looks like this component name is for logging purposes. Since we name the webhooks karpenter-webhook, maybe we could use Karpenter-Webhooks? WDYT?
It corresponds to a configmap, which means the name must be DNS-compliant. The old name was causing noisy errors in the log and triggering logging fallback behavior.
LGTM!
Issue #, if available: #463
Description of changes:
The main change is that we now statically populate webhooks.yaml instead of allowing the webhook controller to inject the resources into the webhook configuration. This ensures that the webhook configuration prevents creation of provisioners even if the webhook controller fails to come online (e.g., due to auth issues).
rules:
  - apiGroups:
      - provisioning.karpenter.sh
    apiVersions:
      - v1alpha1
    resources:
      - provisioners
      - provisioners/status
    operations:
      - CREATE
      - UPDATE
      - DELETE
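For context, a statically populated webhook configuration with these rules might look roughly like the sketch below. The metadata/webhook names, namespace, and failurePolicy are illustrative assumptions, not necessarily what this PR ships; the key point is that with failurePolicy set to Fail, the API server rejects provisioner writes while the webhook is unreachable instead of admitting them without defaults:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation.webhook.provisioners.karpenter.sh   # illustrative name
webhooks:
  - name: validation.webhook.provisioners.karpenter.sh
    admissionReviewVersions: ["v1"]
    sideEffects: None
    # Fail closed: if the webhook pod is not yet online, provisioner
    # CREATE/UPDATE/DELETE requests are rejected rather than admitted.
    failurePolicy: Fail
    clientConfig:
      service:
        name: karpenter-webhook
        namespace: karpenter        # assumed namespace
    rules:
      - apiGroups: ["provisioning.karpenter.sh"]
        apiVersions: ["v1alpha1"]
        resources: ["provisioners", "provisioners/status"]
        operations: ["CREATE", "UPDATE", "DELETE"]
```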
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.