Add Rancher as a cloud provider #4041
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @douglasmakey!
Please update the main README as well.
Done, thanks @mwielgus
/assign @aleksandra-malinowska @feiskyer
Could you also add an OWNERS file in the new rancher directory?
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: douglasmakey, feiskyer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Done, thanks @feiskyer
Hi @leodotcloud, sure, I will remove the file.
```
access=token-abcdef
secret=ksjdhfiusdhfkjsdfhisudhfnskjdfhskjdfhksdjfhksjdfhksdjf
cluster-id=c-abcdef
autoscaler_node_arg: "2:6:c-abcdef:np-abcde"
```
minor: new line
```
apiVersion: rbac.authorization.k8s.io/v1
```
Seems it ends with .txt. Is it used for rendering templates?
```
// GetAvailableGPUTypes return all available GPU types cloud provider supports.
func (u rancherProvider) GetAvailableGPUTypes() map[string]struct{} {
	return availableGPUTypes
}
```
Is this supported on the Rancher platform? It seems to be an empty map?
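For readers following along: a minimal, self-contained sketch of how such a stub is commonly wired up. The `availableGPUTypes` declaration below is an assumption based on the review comment (an empty map), and the names mirror the quoted snippet rather than the actual PR code:

```go
package main

import "fmt"

// Assumed declaration: an empty map signals that the provider
// advertises no GPU node types.
var availableGPUTypes = map[string]struct{}{}

type rancherProvider struct{}

// GetAvailableGPUTypes returns all available GPU types the cloud provider supports.
func (u rancherProvider) GetAvailableGPUTypes() map[string]struct{} {
	return availableGPUTypes
}

func main() {
	p := rancherProvider{}
	// With the empty map above, callers see zero supported GPU types.
	fmt.Println(len(p.GetAvailableGPUTypes())) // prints 0
}
```

If Rancher doesn't expose GPU node types, returning an empty map like this is a reasonable way to satisfy the interface while opting out of GPU support.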
```
docker push rancher/cluster-autoscaler:dev
```
Can you add some contact information for the maintainers here?
It would be helpful for maintainers to reach out to the cloud provider owners for release coordination when necessary.
@leodotcloud I can help review the non-Rancher code; please take a look.
@douglasmakey: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
How is this going? We'd love to use it with our Rancher infrastructure.
@douglasmakey please rebase |
I am having some issues setting up this code on a new Rancher-controlled cluster. I bet my token is wrong, but this stack trace is pretty nasty: https://gist.github.com/xrl/e7caf2e45048aa71b0d489267741c251
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Hey all! Just for transparency purposes: both @douglasmakey and myself, the two people running this PR, have since left Ubisoft and moved on to new adventures. Back then, I think we were lacking some support from Rancher to get this merged. As such, we might be able to contribute some extra time, but it would be better if Ubisoft/Rancher takes ownership instead. Speaking for both Douglas and myself, we'd be happy if either wants to take over the PR to have it merged. Hope that helps set expectations regarding whether this will be merged or not!
@patrickdappollonio Thank you for the note, I am in a similar situation too. Bringing this to your attention @cjellick, @deniseschannon, @jambajaar, @cloudnautique.
/remove-lifecycle stale |
Closed due to inactivity. Feel free to reopen. |
Please re-open. We really need this. |
Unfortunately, unless someone wants to take it from here, neither @douglasmakey nor I still work at the previous company where this was our goal.
For what it's worth, we (I mean my company, but doing it in an OSS fashion) are building the equivalent of this using the "external grpc" cloud provider.
It should reach 1.0 by the end of the month.
Sent from a typical smartphone. If this is illiterate, it’s the voice recognition’s fault.
@cambierr - This sounds fantastic! Could you provide more deets? And you said OSS. Am I to presume it will be open-sourced?
Yep, as soon as it's considered stable, we'll make it public.
We, at Ubisoft, will work with Rancher to re-open this PR.
A bit of a coincidence, @rodcloutier: you wrote this just a few hours before we got ready to post our RKE2 autoscaler implementation (see the PR linked above). This PR contains an implementation for RKE1; RKE2 provisioning in Rancher was not even a thing back then. Were you planning on adding support for RKE1 or RKE2?
Hello Kubernetes Community,
We are the Ubisoft Kubernetes Team: @douglasmakey, @patrickdappollonio, and @promagne. We worked during the past couple of months alongside Rancher (and now Suse) to implement a Node Autoscaler based on the Rancher software as a cloud provider, and we're sending it here for broader community use.
It has the basic support you would expect, like increasing and decreasing node groups and configuring via a ConfigMap. We're here if you have any questions!
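For illustration only, here is a hedged sketch of what such a ConfigMap might look like, reusing the key names from the config snippet quoted earlier in this thread. The metadata name and namespace are hypothetical, and the PR's actual schema may differ:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name/namespace, not taken from the PR.
  name: cluster-autoscaler-cloud-config
  namespace: kube-system
data:
  # Key names mirror the config fragment reviewed above.
  cloud-config: |
    access=token-abcdef
    secret=<your-rancher-secret>
    cluster-id=c-abcdef
```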