Update jobs-run-to-completion.md - fix numbering #21177
Conversation
seems to me it should be 1,2,3 and not 1,1,1 :)
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Welcome @borod108!
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Deploy preview for kubernetes-io-master-staging ready! Built with commit ece5eb0 https://deploy-preview-21177--kubernetes-io-master-staging.netlify.app
I signed it
 1. Non-parallel Jobs
    - normally, only one Pod is started, unless the Pod fails.
    - the Job is complete as soon as its Pod terminates successfully.
-1. Parallel Jobs with a *fixed completion count*:
+2. Parallel Jobs with a *fixed completion count*:
    - specify a non-zero positive value for `.spec.completions`.
    - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
    - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
-1. Parallel Jobs with a *work queue*:
+3. Parallel Jobs with a *work queue*:
It seems to me that the original numbering is valid Markdown, but the bullets aren't being parsed right.
The reason for leaving all the numbers as 1. is that it makes future editing easier as list items are removed or added.
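To illustrate the point (standard CommonMark behavior, not part of the original comment): a source list in which every item is numbered `1.`, such as

```markdown
1. First item
1. Second item
1. Third item
```

still renders with sequential numbers (1, 2, 3), so items can be inserted or removed without renumbering the rest of the list.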
My suggestion for how this part of the page should look as Markdown source:
1. Non-parallel Jobs
* normally, only one Pod is started, unless the Pod fails.
* the Job is complete as soon as its Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
* specify a non-zero positive value for `.spec.completions`.
* the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
* **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue*:
* do not specify `.spec.completions`, default to `.spec.parallelism`.
* the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
* each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
* when _any_ Pod from the Job terminates with success, no new Pods are created.
* once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
* once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.
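As an aside, the *fixed completion count* pattern discussed in the list above corresponds to a Job manifest along these lines (a minimal sketch; the name, image, counts, and command are illustrative and not taken from this PR):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-fixed-count   # illustrative name
spec:
  completions: 5     # the Job succeeds once 5 Pods terminate successfully
  parallelism: 2     # at most 2 Pods run at any one time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox   # illustrative image
        command: ["sh", "-c", "echo processing one work item"]
```

Omitting `completions` while keeping `parallelism` would instead give the *work queue* pattern described in the last list item.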
What do you think about that approach, @borod108 ?
@sftim thank you for your quick review! How will the end user see it then?
You can preview your changes locally, or push them and view the Netlify preview:
https://deploy-preview-21177--kubernetes-io-master-staging.netlify.app/docs/concepts/workloads/controllers/jobs-run-to-completion/
Do either of those help?
@borod108 did you get time to preview these changes?
This is trying to fix a regression that's related to issue #20335
Thanks for the PR @borod108! However, a PR has already been merged that fixes the issue: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ Thanks again!
@borod108: PR needs rebase.
@jimangel: Closed this PR.