KEP 1645: update conditions in ServiceExport after KEP-1623 #4672
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: MrFreezeex.
Following today's sig-multicluster meeting, I am proposing to add a new condition to reflect whether the local service export is directly involved in the conflict. I am not that familiar with Kubernetes conditions, so it may be more suitable to encode that info in the reason of the existing "Conflict" condition, for instance. Feel free to comment if you have opinions about how this extra info should be exposed!
keps/sig-multicluster/1645-multi-cluster-services-api/README.md
  - type: Conflict
    status: "True"
    lastTransitionTime: "2020-03-30T01:33:55Z"
-   message: "Conflicting type. Using \"ClusterSetIP\" from oldest service export in \"cluster-1\". 2/5 clusters disagree."
    reason: "Conflict"
+   message: "Conflicting type. 2/5 clusters disagree. Using \"ClusterSetIP\" from oldest service export in \"cluster-1\"."
I think the reason should be "ConflictingType".
Yes indeed, although maybe we could extend the reason to do what I wanted to do for LocalConflict instead of encoding the type as well (like having LocalConflict/ExternalConflict as a reason). WDYT?
So, with some inspiration from your message on Slack, if we encode having a local conflict in the same condition, it would probably be one of these:
- type: Conflict
  status: "True"
  lastTransitionTime: "2020-03-30T01:33:55Z"
  reason: "LocalConflict"
  message: "The local service type conflicts with other constituent clusters. 2/5 clusters disagree. Using \"ClusterSetIP\" from oldest service export in \"cluster-1\"."
- type: Conflict
  status: "True"
  lastTransitionTime: "2020-03-30T01:33:55Z"
  reason: "ConflictingType"
  message: "The local service type conflicts with other constituent clusters. 2/5 clusters disagree. Using \"ClusterSetIP\" from oldest service export in \"cluster-1\"."
So if going that route we would have to decide whether we want to encode local/external conflict, or the "type" of conflict, within the reason.
These are still suggested options for implementers, not the exact messages that they absolutely need to provide, of course.
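To make the two alternatives concrete, here is a minimal, self-contained Go sketch of a controller choosing between the two reason encodings discussed above. The Condition struct and the conflictCondition helper are hypothetical illustrations, not part of the mcs-api code, and the ExternalConflict reason is an assumption extrapolated from the LocalConflict/ExternalConflict suggestion earlier in the thread:

```go
package main

import (
	"fmt"
	"time"
)

// Condition mirrors the shape of metav1.Condition for illustration only.
type Condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
	Reason             string
	Message            string
}

// conflictCondition builds the Conflict condition. encodeLocality selects
// between the two encodings debated in this thread: locality in the reason
// (LocalConflict/ExternalConflict) vs the kind of conflict (ConflictingType).
// Neither is mandated by the KEP.
func conflictCondition(encodeLocality, localInvolved bool) Condition {
	reason := "ConflictingType"
	if encodeLocality {
		if localInvolved {
			reason = "LocalConflict"
		} else {
			reason = "ExternalConflict"
		}
	}
	return Condition{
		Type:    "Conflict",
		Status:  "True",
		Reason:  reason,
		Message: "The local service type conflicts with other constituent clusters. 2/5 clusters disagree.",
	}
}

func main() {
	fmt.Println(conflictCondition(true, true).Reason)  // LocalConflict
	fmt.Println(conflictCondition(false, true).Reason) // ConflictingType
}
```

Either way, consumers that only switch on the condition type see the same "Conflict" signal; the choice only affects how much a client can learn from the machine-readable reason without parsing the message.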
@@ -471,13 +455,22 @@ status:
   - type: Ready
     status: "True"
     lastTransitionTime: "2020-03-30T01:33:51Z"
-  - type: InvalidService
+    reason: "Ready"
Nit, but I don't think this should merely duplicate the type; if there's no other short reason that adds value, then leave it blank.
I now remember why I did this: we actually cannot add a Condition with an empty reason.
Do you mean the API server rejects it?
I am not sure what rejects it, but I retested it, and by removing one reason I got the following error:
time="2024-09-26T10:12:11.051078297Z" level=error msg="Reconciler error" ServiceImport="{rebel-base-mcsapi default}" controller=serviceimport controllerGroup=multicluster.x-k8s.io controllerKind=ServiceImport error="ServiceExport.multicluster.x-k8s.io \"rebel-base-mcsapi\" is invalid: status.conditions[0].reason: Invalid value: \"\": conditions[0].reason in body should be at least 1 chars long" name=rebel-base-mcsapi namespace=default reconcileID="\"9d39229b-9e9b-4c8d-81d8-5a6cfda7dd68\"" subsys=controller-runtime
K8s rejects it because Reason is required in metav1.Condition:
// This field may not be empty
// +required
// +kubebuilder:validation:Required
// +kubebuilder:validation:MaxLength=1024
// +kubebuilder:validation:MinLength=1
// +kubebuilder:validation:Pattern=`^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$`
Reason string `json:"reason" protobuf:"bytes,5,opt,name=reason"`
In the previous ServiceExportCondition it wasn't.
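For reference, those kubebuilder markers can be checked locally. This self-contained Go sketch (not part of any Kubernetes library; validReason is a hypothetical helper) applies the same MinLength, MaxLength, and Pattern rules quoted above to candidate Reason values:

```go
package main

import (
	"fmt"
	"regexp"
)

// reasonPattern is the exact kubebuilder validation pattern quoted above
// from metav1.Condition's Reason field.
var reasonPattern = regexp.MustCompile(`^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$`)

// validReason mimics the server-side checks: at least 1 char, at most 1024,
// and matching the pattern (must start with a letter).
func validReason(r string) bool {
	return len(r) >= 1 && len(r) <= 1024 && reasonPattern.MatchString(r)
}

func main() {
	for _, r := range []string{"", "Ready", "LocalConflict", "1bad"} {
		fmt.Printf("%q -> %v\n", r, validReason(r))
	}
	// "" -> false, "Ready" -> true, "LocalConflict" -> true, "1bad" -> false
}
```

The empty string fails both the MinLength=1 check and the pattern, which is consistent with the "should be at least 1 chars long" error in the log above.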
Also swap the "2/5 clusters disagree" part of the conflict message so that it's clear that the disagree sentence refers to the service we export and not to the oldest Service that we conflict with. Signed-off-by: Arthur Outhenin-Chalandre <[email protected]>
Add extra information to indicate to the user, via the new ServiceExportLocalConflict condition, whether the local service export is directly involved in the conflict. As ServiceExportConflict should be added on every ServiceExport involved, whether or not the local service export is involved, users were not able to grasp that information without manually checking the services exported on multiple clusters. Signed-off-by: Arthur Outhenin-Chalandre <[email protected]>
@k8s-triage-robot: Closed this PR.
Update the conditions in KEP-1645 after KEP-1623 so that they match the actual CRDs here: https://github.com/kubernetes-sigs/mcs-api/blob/master/pkg/apis/v1alpha1/serviceexport.go#L47.