
Incorrect default for startingDeadlineSeconds #13419

Closed · 1 of 2 tasks

bronger opened this issue Mar 26, 2019 · 7 comments


bronger commented Mar 26, 2019

This is a...

- [ ] Feature Request
- [x] Bug Report

Problem:

On https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/, it says “… and its startingDeadlineSeconds field is not set. The default for this field is 100 seconds.” I doubt the latter claim: it contradicts an earlier statement on the same page, and this default is not mentioned in the API reference.

Proposed Solution:

Remove the sentence “The default for this field is 100 seconds.”

Page to Update:
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
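For context, startingDeadlineSeconds is an optional field on the CronJob spec; a minimal sketch of where it sits (names and values are illustrative, using the batch/v1beta1 API current at the time of this issue):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"
  # Optional. When left unset, the field is nil (no deadline) rather than
  # defaulting to 100 seconds as the page currently claims.
  startingDeadlineSeconds: 200
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: example
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure
```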


tengqm commented Mar 27, 2019

Sad thing is... the 100 seconds value is hardcoded here: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/utils.go#L143

It is hand-picked, and sounds semantically equivalent to a default value.

Feel free to reopen.

tengqm closed this as completed Mar 27, 2019

bronger commented Mar 27, 2019

No – the 100 is the number of missed start times. That number is hardcoded, but it is not the default of startingDeadlineSeconds, which is nil as far as I can see.
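For reference, the loop in question looks roughly like this (a simplified sketch of getRecentUnmetScheduleTimes in pkg/controller/cronjob/utils.go, not the verbatim source):

```go
package cronjob

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

// Simplified: collect every schedule time missed between earliestTime and
// now. The hardcoded 100 caps how many missed start times the controller
// will enumerate before giving up; it is not a default value for
// startingDeadlineSeconds, which stays nil when the user leaves it unset.
func getRecentUnmetScheduleTimes(sched cron.Schedule, earliestTime, now time.Time) ([]time.Time, error) {
	starts := []time.Time{}
	for t := sched.Next(earliestTime); !t.After(now); t = sched.Next(t) {
		starts = append(starts, t)
		if len(starts) > 100 {
			return nil, fmt.Errorf("too many missed start times (> 100); set or decrease .spec.startingDeadlineSeconds or check clock skew")
		}
	}
	return starts, nil
}
```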


tengqm commented Mar 27, 2019

Alright. Reopening.

tengqm reopened this Mar 27, 2019
@rajeshdeshpande02
Contributor

@bronger @tengqm I will pick this up and fix it in the documentation.

@anbugit004

Can anyone have a look and let me know if you have any idea about the following:

How many pods can run on the Kubernetes master nodes in an HA cluster?

For example, there is a 10-node cluster (3 master + 7 worker nodes).

Can you please tell us the maximum number of pods that can be assigned to each master node?

@daminisatya
Contributor

/close

@k8s-ci-robot
Contributor

@daminisatya: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
