Support autoscaling #39
Conversation
I believe in this case you've actually run the autoscaler on the workload cluster with a kubeconfig file to talk to the management cluster. Even if things can be secured and locked down, this could be a security issue down the road, so I think it's better to run the autoscaler on the management cluster and have it talk to the workload cluster instead. That means you'll have to remove it from the ClusterResourceSet and get rid of the kubeconfig option; for the actual deployment in the management cluster, you can mount the clustername-kubeconfig secret to get access to the cluster directly.
See comment
When we run the autoscaler in the management cluster, I wonder whether that means we need to deploy one autoscaler per Magnum cluster.
Yes, indeed, we will need to create an autoscaler per CAPI cluster on the control plane, but I think the added load is fairly negligible, so we can be OK with it.
OK, I have changed the clusterAPIMode as you mentioned.
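For reference, here is a minimal sketch of what such a per-cluster autoscaler Deployment on the management cluster could look like, written as a Python manifest dict. The cluster name, namespace, image tag, mount path, and exact cluster-autoscaler flags are illustrative assumptions rather than the implementation in this PR:

```python
# Sketch only: a per-cluster autoscaler Deployment running on the management
# cluster and reaching the workload cluster through the mounted kubeconfig
# secret. CLUSTER_NAME, NAMESPACE, the image tag, the mount path and the
# exact cluster-autoscaler flags are all illustrative assumptions.
CLUSTER_NAME = "kube-abc123"      # hypothetical CAPI cluster name
NAMESPACE = "magnum-system"       # hypothetical namespace holding the CAPI objects

autoscaler_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": f"{CLUSTER_NAME}-autoscaler",
        "namespace": NAMESPACE,
    },
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": f"{CLUSTER_NAME}-autoscaler"}},
        "template": {
            "metadata": {"labels": {"app": f"{CLUSTER_NAME}-autoscaler"}},
            "spec": {
                "containers": [{
                    "name": "cluster-autoscaler",
                    "image": "registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0",
                    "command": [
                        "/cluster-autoscaler",
                        "--cloud-provider=clusterapi",
                        # The workload cluster is reached via the mounted
                        # kubeconfig (CAPI stores it under the "value" key of
                        # the secret); the CAPI objects are read in-cluster.
                        "--kubeconfig=/etc/kubernetes/kubeconfig/value",
                        "--node-group-auto-discovery="
                        f"clusterapi:clusterName={CLUSTER_NAME},namespace={NAMESPACE}",
                    ],
                    "volumeMounts": [{
                        "name": "kubeconfig",
                        "mountPath": "/etc/kubernetes/kubeconfig",
                        "readOnly": True,
                    }],
                }],
                "volumes": [{
                    "name": "kubeconfig",
                    "secret": {"secretName": f"{CLUSTER_NAME}-kubeconfig"},
                }],
            },
        },
    },
}
```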
@okozachenko1203 good progress
Now the get_object method has an assert. Once that assertion fails, deletion fails as well as creation. By overriding the deletion method to use the ConfigMap name directly, we avoid the deletion failure and also reduce API requests.
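Roughly, the idea could look like the sketch below; the wrapper class, the magnum-system namespace, and the stack_id-based name are hypothetical stand-ins, not the actual code in this PR:

```python
import pykube


class ClusterAutoscalerConfigMap:
    """Hypothetical resource wrapper; names and fields are illustrative only."""

    def __init__(self, api: pykube.HTTPClient, cluster):
        self.api = api
        self.cluster = cluster

    def get_object(self) -> pykube.ConfigMap:
        # Creation path: building the full object needs extra data, and the
        # assertion guards that it was actually found.
        node_groups = self.cluster.nodegroups   # hypothetical lookup
        assert node_groups, "cluster has no node groups"
        return pykube.ConfigMap(
            self.api,
            {
                "apiVersion": "v1",
                "kind": "ConfigMap",
                "metadata": {
                    "name": self.cluster.stack_id,
                    "namespace": "magnum-system",
                },
                "data": {ng.name: str(ng.node_count) for ng in node_groups},
            },
        )

    def delete(self):
        # Deletion path: look the ConfigMap up by name directly instead of
        # going through get_object(), so a failed assertion there cannot
        # block deletion, and fewer API requests are made.
        obj = pykube.ConfigMap.objects(
            self.api, namespace="magnum-system"
        ).get_or_none(name=self.cluster.stack_id)
        if obj is not None:
            obj.delete()
```

The key point is that delete() never calls get_object(), so it cannot trip over the assertion and needs only a single lookup by name.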
Some Kubernetes resources must follow DNS-1035 naming, but OpenStack COE resource names are not required to, so they need to be converted.
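For illustration, a small helper along these lines could map an arbitrary COE resource name onto a DNS-1035 label (lowercase alphanumerics and '-', starting with a letter, at most 63 characters); this is only a sketch, not code from this PR:

```python
import re


def to_dns1035_label(name: str, max_length: int = 63) -> str:
    """Best-effort conversion of an arbitrary name to an RFC 1035 label.

    A DNS-1035 label must start with a lowercase letter, contain only
    lowercase alphanumerics and '-', not end with '-', and be at most
    63 characters long.
    """
    label = re.sub(r"[^a-z0-9-]", "-", name.lower())
    if not label or not label[0].isalpha():
        label = "a" + label          # ensure the label starts with a letter
    return label[:max_length].rstrip("-")


# e.g. a COE cluster named "My_Cluster 01" maps to "my-cluster-01"
print(to_dns1035_label("My_Cluster 01"))
```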
…pi into add-autoscaler
@okozachenko1203 I pushed some changes.
I think the issue is that we're being hit by kubernetes-sigs/cluster-api#7088 -- without that in place, we're unable to use the metadata the way we want to.
I think what we should do instead is take advantage of update_cluster_status (or maybe update_node_group_status) to mutate and reconcile the annotations on the MachineDeployment.
Let's leave a comment pointing to the PR.
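As a rough sketch of that direction: the annotation keys below are the standard Cluster API autoscaler min/max-size annotations, while the helper name, the name-based MachineDeployment lookup, and the node_group fields are assumptions for illustration:

```python
import pykube

AUTOSCALER_MIN = "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size"
AUTOSCALER_MAX = "cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size"


def reconcile_autoscaler_annotations(api, node_group, md_name, namespace="magnum-system"):
    """Sketch: keep the autoscaler min/max annotations in sync on a MachineDeployment.

    Intended to be driven from update_cluster_status / update_node_group_status;
    the name-based lookup, the namespace, and the node_group fields are assumptions.
    """
    MachineDeployment = pykube.object_factory(
        api, "cluster.x-k8s.io/v1beta1", "MachineDeployment"
    )
    md = MachineDeployment.objects(api, namespace=namespace).get(name=md_name)

    annotations = md.obj["metadata"].setdefault("annotations", {})
    desired = {
        AUTOSCALER_MIN: str(node_group.min_node_count),
        AUTOSCALER_MAX: str(node_group.max_node_count),
    }
    if any(annotations.get(key) != value for key, value in desired.items()):
        annotations.update(desired)
        md.update()  # only issue a PATCH when something actually changed
```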
Fix #3