
Introduce the rescheduler #394

Closed


@cknowles (Contributor) commented Mar 8, 2017

For #118.

- Currently using raw Pod as per the [salt setup](https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/rescheduler/rescheduler.manifest), sketched below
- Logging routed via standard streams
- gcr image used; not sure if hyperkube includes the rescheduler
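For readers without the diff handy, a minimal sketch of the kind of raw static Pod manifest described above. The image tag is the one quoted later in this thread; the file name, namespace, labels, and resource values are assumptions for illustration, not taken from this PR.

# Sketch only: name, namespace, labels, and resource values are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: kube-rescheduler
  namespace: kube-system
  labels:
    k8s-app: rescheduler
spec:
  containers:
    - name: rescheduler
      image: gcr.io/google-containers/rescheduler:v0.2.2
      resources:
        requests:        # requests only, no limits -> Burstable QoS
          cpu: 10m       # illustrative values
          memory: 100Mi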
@cknowles (Contributor, Author) commented Mar 8, 2017

@mumoshu the basic part is there. I understand from #118 that we want a flag for this. Are we still adding experimental flags, or have we agreed to just mark them as experimental in the docs?

@codecov-io
Codecov Report

Merging #394 into master will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master     #394   +/-   ##
=======================================
  Coverage   38.75%   38.75%           
=======================================
  Files          29       29           
  Lines        2276     2276           
=======================================
  Hits          882      882           
  Misses       1275     1275           
  Partials      119      119


@mumoshu (Contributor) commented Mar 9, 2017

Thanks for the PR @c-knowles!
Yes, we can just mark them as experimental in docs for now.

containers:
  - name: rescheduler
    image: gcr.io/google-containers/rescheduler:v0.2.2
    # TODO: Make resource requirements depend on the size of the cluster
@mumoshu (Contributor) commented Mar 23, 2017

So, do we need to manually modify /etc/kubernetes/manifests/kube-rescheduler.yaml to update resources.requests each time the worker nodes are scaled out considerably?
Mind documenting it?
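For concreteness, a hedged sketch of the manual update being asked about, assuming the manifest path above; the values are illustrative. The kubelet watches the static manifest directory and restarts the pod when the file changes, so no further action should be needed.

# /etc/kubernetes/manifests/kube-rescheduler.yaml (on the controller node)
# Hypothetical edit after scaling out workers considerably; the kubelet
# picks up changes to static pod manifests and restarts the pod.
resources:
  requests:
    cpu: 50m         # illustrative values, not from this PR
    memory: 200Mi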

@cknowles (Contributor, Author)

@mumoshu I copied this from kubernetes/kubernetes@c304fa1; it's what the salt setup has in the main repo. I'd be happy to just remove it, as none of the others have it.

@mumoshu (Contributor)

I see; let it remain here so that we can at least notify users about it.

@mumoshu (Contributor)

Note: as this pod is "burstable" in CPU/memory (requests set, but no limits), the rescheduler will keep working as long as your cluster has enough capacity.
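For reference, the Kubernetes QoS rules behind that note, with illustrative values:

# QoS class is derived from the resources spec:
#   requests and limits set and equal for every container -> Guaranteed
#   at least one request or limit set, but not Guaranteed -> Burstable (this pod)
#   no requests or limits at all                          -> BestEffort
resources:
  requests:        # no limits block, so this pod is Burstable
    cpu: 10m
    memory: 100Mi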

@cknowles (Contributor, Author)

New commits are not showing, perhaps due to the org move to Kubernetes Incubator. Going to close this and open a new PR.

@cknowles cknowles closed this Mar 23, 2017
@cknowles (Contributor, Author)
Now in #441.

@mumoshu (Contributor) commented Mar 23, 2017

OK. Thanks for your effort 👍
