
Workload spreading across failure domains (fix pod anti-affinity performance problem) #51

Closed
7 of 21 tasks
aronchick opened this issue Jul 22, 2016 · 14 comments
Labels
sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling.

Comments

aronchick commented Jul 22, 2016

Description

As a user, I want to be able to spread my workloads across multiple clusters that span multiple failure domains, to improve my application's uptime.

(Note: This is just one use case for pod affinity/anti-affinity.)
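
For readers skimming this issue, here is a minimal sketch of what this use case looks like in practice. The snippet below is not taken from this issue or its docs PR; it is a hedged example, built with the k8s.io/api Go types, of a hard pod anti-affinity rule over the zone topology key so that replicas of a hypothetical app=my-app workload land in different failure domains. The label, the import paths, and the zone key are illustrative assumptions (failure-domain.beta.kubernetes.io/zone is the key from this era; newer clusters use topology.kubernetes.io/zone).

```go
// Sketch only: build a pod anti-affinity rule that asks the scheduler to
// spread pods labeled app=my-app across zones (failure domains).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// Hard requirement: do not schedule a pod into a zone that already
			// hosts a pod matching the label selector.
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
				{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "my-app"}, // hypothetical label
					},
					// Zone label key of the 1.4 era; newer clusters use
					// "topology.kubernetes.io/zone".
					TopologyKey: "failure-domain.beta.kubernetes.io/zone",
				},
			},
		},
	}

	// In a real workload this struct goes into spec.template.spec.affinity
	// of a Deployment or ReplicaSet.
	fmt.Printf("%+v\n", antiAffinity)
}
```

Rules like this are what the anti-affinity performance work referenced in the issue title had to make cheap enough for the scheduler to evaluate at scale.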

Progress Tracker

  • Before Alpha
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the https://github.com/kubernetes/kubernetes repository,
    and sometimes https://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers and they will
    check that the code matches the proposed feature and design, and that everything is done, and that there is adequate
    testing. They won't do detailed code review: that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into https://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.
@aronchick added this to the v1.4 milestone Jul 22, 2016
@aronchick assigned ghost Jul 22, 2016

ghost commented Jul 22, 2016

I believe this is the affinity performance-improvement work that @davidopp and @wojtek-t (?) are working on. Reassigning.


ghost commented Jul 22, 2016

@aronchick Ooh. I don't seem to have permission to assign issues on this repo. Can someone grant me permission? @erictune might know what's going on here, because I think this repo was his idea.

@aronchick assigned bprashanth, davidopp, and wojtek-t and unassigned ghost and bprashanth Jul 22, 2016
@idvoretskyi

@davidopp @wojtek-t folks, which SIG is responsible for this feature?


davidopp commented Aug 4, 2016

SIG Scheduling


davidopp commented Aug 4, 2016

(since it is primarily a new scheduling feature -- it just happens that the last bit of work that had to be done was making the performance acceptable to enable it, which @wojtek-t just finished)

@philips added the sig/scheduling label Aug 4, 2016
@alex-mohr

@davidopp says done.

@davidopp changed the title from "Workload spreading across failure domains (fix pod anti-affinity performance problem))" to "Workload spreading across failure domains (fix pod anti-affinity performance problem)" Aug 26, 2016
@davidopp

Updated the checklist. Only docs remain for alpha in 1.4.


janetkuo commented Sep 2, 2016

@davidopp @wojtek-t Please update the docs in https://github.com/kubernetes/kubernetes.github.io, and then add PR numbers and check the docs box in the issue description.

@jaredbhatti

Ping. Any update on docs?

@jaredbhatti

@davidopp @wojtek-t Another ping on docs. Any PRs you can point me to?

@davidopp

This is kubernetes/website#1148.

@davidopp closed this as completed Oct 1, 2016
@idvoretskyi

@davidopp has development of this feature been finished? If not, it would be better to reopen the issue and mark it with a non-1.4 milestone.


davidopp commented Oct 7, 2016

This is finished. It probably should never have been filed as a feature; the real feature is #60, and this is just one use case for it (and one piece of the work to bring it to GA).

@idvoretskyi

@davidopp thank you for the explanation; I was a bit confused about the feature being closed while still in "Alpha" status.

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this issue Apr 2, 2020
start a document about kube-apiserver certs