Fix for Issue #2141 #2155
Conversation
@brutus333 Looks good, where did you find the default policy file?
I found it in the kubernetes repository: https://github.com/kubernetes/kubernetes/blob/master/examples/scheduler-policy-config.json
ci check this
{
"kind" : "Policy",
"apiVersion" : "v1",
"predicates" : [
It feels like these are things that would be worthwhile to abstract into variables, but I'm not super familiar with the use case.
I am not sure these are of any use outside the OpenStack deployment scenario. In OpenStack you may have a different AZ setup for compute and storage; I am not sure any other cloud behaves the same way.
And usually you don't want to disable default scheduler restrictions, do you?
Yes, but we need a way to not disable the AZ check. I have some customers on OpenStack where Nova cross-zone attachment is disabled.
You can (and I think it is good practice) have the Nova AZ and the Cinder AZ match.
For example, two datacenters, each with a different Cinder backend.
@ArchiFleKs While the best practice is indeed to disallow cross-AZ attachments, the fact of the matter is that this is not the out-of-the-box behavior of OpenStack, and there are customers out there with a single large storage system (Ceph especially) that they don't carve out separately between AZs, hence the need to allow this kind of behavior.
OK, in this case I can add a variable for this. What about disable_volume_zone_conflict?
@cristicalin I agree, I was not saying that the feature is unneeded; we just need a way to enable/disable it, even for OpenStack.
@brutus333 Maybe volume_cross_zone_attachment?
@cristicalin: Yes, maybe it's a better name. But it depends on whether we will keep disabling the constraint only for OpenStack (as done in the current patch) or allow disabling this constraint on all clouds.
@brutus333 I think it is a bit confusing. In nova.conf the feature is called something like cross_az_attach.
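For reference, a minimal sketch of that nova.conf setting (it lives in the [cinder] section and defaults to true):

# nova.conf on the compute hosts
[cinder]
# forbid attaching a Cinder volume to an instance in a different AZ
cross_az_attach = False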
@ArchiFleKs: I agree with volume_cross_zone_attachment.
@brutus333 looks OK to me
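A minimal sketch of how the template could gate the predicate on such a variable, assuming it ends up named volume_cross_zone_attachment (the surrounding predicate names follow the upstream example; everything here is illustrative, not the merged change):

{# kube-scheduler-policy.yaml.j2: include NoVolumeZoneConflict only when
   cross-zone attachment is not allowed #}
"predicates" : [
  {"name" : "PodFitsHostPorts"},
  {"name" : "PodFitsResources"},
  {"name" : "NoDiskConflict"},
{% if not volume_cross_zone_attachment | default(false) %}
  {"name" : "NoVolumeZoneConflict"},
{% endif %}
  {"name" : "MatchNodeSelector"},
  {"name" : "HostName"}
],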
…ne_attachment and removed cloud provider condition; fix indentation
…using ignore-volume-az option[3]. [1]: kubernetes-sigs#2155 [2]: kubernetes-sigs#2346 [3]: kubernetes/kubernetes#53523
…ion (#2980) * Better fix for openstack cinder zone issue[1][2] using ignore-volume-az option[3]. [1]: #2155 [2]: #2346 [3]: kubernetes/kubernetes#53523 * Remove kube-scheduler-policy.yaml
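The commits above point at the cleaner follow-up fix: rather than loosening the scheduler policy, the OpenStack cloud provider can be told to ignore the volume AZ via the ignore-volume-az option introduced in kubernetes/kubernetes#53523. In the provider's cloud config this looks roughly like:

# cloud.conf read by the kubernetes OpenStack cloud provider
[BlockStorage]
ignore-volume-az = yes

With this set, the cloud provider ignores the Cinder availability zone when handling volumes, addressing the Nova/Cinder AZ mismatch without touching the scheduler policy.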
This is a fix for issue #2141.
The solution was to configure a slightly more permissive default scheduler policy, by removing the NoVolumeZoneConflict predicate from the default policy hosted in the kubernetes repo.
One possible improvement would be to fetch the default policy file and remove only this predicate (see the sketch below).
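A hedged sketch of that improvement as a pair of Ansible tasks (the URL is the raw form of the examples file linked earlier; the filters are standard Ansible/Jinja2, and the destination path is illustrative, not the project's actual layout):

- name: Fetch the upstream default scheduler policy
  uri:
    url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/scheduler-policy-config.json
    return_content: true
  register: default_policy

- name: Render the policy with only the NoVolumeZoneConflict predicate removed
  copy:
    dest: /etc/kubernetes/kube-scheduler-policy.json
    content: >-
      {{ (default_policy.content | from_json)
         | combine({'predicates': (default_policy.content | from_json).predicates
                                  | rejectattr('name', 'equalto', 'NoVolumeZoneConflict')
                                  | list})
         | to_nice_json }}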