fix: add new status conditions for k8s v1.31 #403
Conversation
```go
case batchv1.JobComplete, batchv1.JobSuccessCriteriaMet:
	complete <- nil
	return
case batchv1.JobFailed:
```
Should we also check the JobFailureTarget condition?
From the doc:
You can use the FailureTarget or the SuccessCriteriaMet condition to evaluate whether the Job has failed or succeeded without having to wait for the controller to add a terminal condition.
good catch! I thought about it, but decided to wait for JobFailed, since that means all pods are terminated, to avoid an error like aquasecurity/trivy#5639. wdyt?
Thank you, that makes sense.
thank you!
In Kubernetes v1.31.* the diagram of transitions looks like the one below. So we can complete the job after the first success, but we should wait until all pods are terminated before treating it as failed.
the docs: Transition of "status.conditions"
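The decision above can be sketched as follows. This is a minimal, self-contained illustration, not the project's actual code: the condition types and the `jobOutcome` helper are defined locally to mirror `k8s.io/api/batch/v1` (`batchv1.JobComplete`, `batchv1.JobSuccessCriteriaMet`, `batchv1.JobFailed`, `batchv1.JobFailureTarget`). It treats SuccessCriteriaMet as early completion but deliberately ignores FailureTarget, waiting for the terminal JobFailed condition so all pods have terminated.

```go
package main

import "fmt"

// Hypothetical local mirror of batchv1's Job condition types, for illustration.
type JobConditionType string

const (
	JobComplete           JobConditionType = "Complete"
	JobSuccessCriteriaMet JobConditionType = "SuccessCriteriaMet"
	JobFailed             JobConditionType = "Failed"
	JobFailureTarget      JobConditionType = "FailureTarget"
)

type JobCondition struct {
	Type   JobConditionType
	Status string // "True", "False", or "Unknown"
}

// jobOutcome reports "complete", "failed", or "" (still running).
// SuccessCriteriaMet counts as early completion, but failure waits for
// the terminal JobFailed condition (not JobFailureTarget), so that all
// pods are already terminated when we act on the failure.
func jobOutcome(conds []JobCondition) string {
	for _, c := range conds {
		if c.Status != "True" {
			continue
		}
		switch c.Type {
		case JobComplete, JobSuccessCriteriaMet:
			return "complete"
		case JobFailed:
			return "failed"
		}
	}
	return ""
}

func main() {
	// Early success: SuccessCriteriaMet alone is enough to complete.
	fmt.Println(jobOutcome([]JobCondition{{Type: JobSuccessCriteriaMet, Status: "True"}}))
	// FailureTarget alone is NOT treated as terminal; we keep waiting.
	fmt.Println(jobOutcome([]JobCondition{{Type: JobFailureTarget, Status: "True"}}) == "")
}
```

Note that in the real controller the success path can fire as soon as the first success criterion is met, while the failure path only fires once the Job controller has added the terminal Failed condition.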