
kops + digital ocean #2150

Closed
krisnova opened this issue Mar 20, 2017 · 21 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@krisnova
Contributor

krisnova commented Mar 20, 2017

Personally I think this would be a big win, but I am wondering if anyone else has any interest in running kubernetes on Digital Ocean with kops?

I just want to get a feel for how many people are interested, to gauge how valuable this work would be. Cheers.

@krisnova
Contributor Author

So after reaching out to a few people on Twitter on the matter, it looks like we have a small number of interested parties here. Thanks everyone for helping connect me!

I am going to reach out to the open source team at DigitalOcean and will follow up with more info as soon as I have it.

@ghost

ghost commented Apr 5, 2017

+1

@krisnova
Contributor Author

krisnova commented Apr 5, 2017

Looking at getting this into a future version of kops, but I'm looking for volunteers to help me code it.

I can manage the DO account and give everyone access (they were nice enough to give us a budget, and I am happy to "eat" any remaining costs), but we just need to make sure we aren't throwing money away 😄

If you are interested, ping me and I will add you to the list. I have a really solid idea of everything that needs to be done; I'm just looking for Go hackers to help code it. I can do it all myself too, but that might take a little longer 😉

Cheers

@andrewsykim
Member

andrewsykim commented Apr 5, 2017

@kris-nova I've been doing some work on kops + GCE and have noticed that a lot of the kops code assumes AWS. I would love to help refactor/generalize kops so it can be more pluggable with other cloud providers.

Getting kops working with DO sounds like fun; I would be interested in helping in my free time :). Feel free to reach out on the Kubernetes Slack.

@chrislovecnm
Contributor

@andrewsykim I agree that we need to start refactoring into interfaces so we can start abstracting code paths. It is going to be tricky :P

I need to take a look at what the vSphere folks did as well.

@andrewsykim
Member

andrewsykim commented May 26, 2017

I opened an umbrella issue re: better compatibility with cloud providers other than AWS: #2646. Hopefully in the long run it'll make it easier to add new cloud providers.

k8s-github-robot pushed a commit that referenced this issue Aug 23, 2017
Automatic merge from submit-queue

Create cluster requirements for DigitalOcean

Initial changes required to create a cluster state. Running `kops update cluster --yes` does not work yet. 

Note that DO has already adopted cloud controller managers (https://github.com/digitalocean/digitalocean-cloud-controller-manager) so we set `--cloud-provider=external`. This will end up being the case for aws, gce and vsphere over the next couple of releases.

#2150

```bash
$ kops create cluster --cloud=digitalocean --name=dev.asykim.com --zones=tor1
I0821 18:47:06.302218   28623 create_cluster.go:845] Using SSH public key: /Users/AndrewSyKim/.ssh/id_rsa.pub
I0821 18:47:06.302293   28623 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet tor1
Previewing changes that will be made:

I0821 18:47:11.457696   28623 executor.go:91] Tasks: 0 done / 27 total; 27 can run
I0821 18:47:12.113133   28623 executor.go:91] Tasks: 27 done / 27 total; 0 can run
Will create resources:
  Keypair/kops
  	Subject             	o=system:masters,cn=kops
  	Type                	client

  Keypair/kube-controller-manager
  	Subject             	cn=system:kube-controller-manager
  	Type                	client

  Keypair/kube-proxy
  	Subject             	cn=system:kube-proxy
  	Type                	client

  Keypair/kube-scheduler
  	Subject             	cn=system:kube-scheduler
  	Type                	client

  Keypair/kubecfg
  	Subject             	o=system:masters,cn=kubecfg
  	Type                	client

  Keypair/kubelet
  	Subject             	o=system:nodes,cn=kubelet
  	Type                	client

  Keypair/kubelet-api
  	Subject             	cn=kubelet-api
  	Type                	client

  Keypair/master
  	Subject             	cn=kubernetes-master
  	Type                	server
  	AlternateNames      	[100.64.0.1, 127.0.0.1, api.dev.asykim.com, api.internal.dev.asykim.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]

  ManagedFile/dev.asykim.com-addons-bootstrap
  	Location            	addons/bootstrap-channel.yaml

  ManagedFile/dev.asykim.com-addons-core.addons.k8s.io
  	Location            	addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-limit-range.addons.k8s.io
  	Location            	addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/dev.asykim.com-addons-storage-aws.addons.k8s.io
  	Location            	addons/storage-aws.addons.k8s.io/v1.6.0.yaml

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system:controller_manager

  Secret/system:dns

  Secret/system:logging

  Secret/system:monitoring

  Secret/system:scheduler

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster dev.asykim.com
 * edit your node instance group: kops edit ig --name=dev.asykim.com nodes
 * edit your master instance group: kops edit ig --name=dev.asykim.com master-tor1

Finally configure your cluster with: kops update cluster dev.asykim.com --yes
```
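As the commit message notes, `--cloud-provider=external` hands cloud-specific reconciliation (node addresses, routes, load balancers) to an out-of-tree cloud controller manager instead of the kubelet's in-tree code. A hedged illustration of the flag on a kubelet invocation; the kubeconfig path here is a placeholder, and real node setup (via nodeup) passes many more flags:

```bash
# Illustrative only: kops/nodeup generate the actual kubelet flags.
kubelet --cloud-provider=external \
        --kubeconfig=/var/lib/kubelet/kubeconfig
```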
@andrewsykim
Member

andrewsykim commented Oct 11, 2017

I have not forgotten about this, just haven't had much bandwidth lately. If anyone has time to work on this please let me know and I can provide some context on how to continue this work :).

@xech3l0nx

We are developing a CLI, 'kubedo', for automated provisioning and configuration of production-ready k8s clusters. We started the project yesterday and are looking for contributors; have a look at the dev branch and let me know. We will keep the project going because we at SNAPUP LABS LLC are DO customers.
https://github.com/snapup-labs/kubedo

@wingyplus
Contributor

@andrewsykim I'm interested in the DO provider. Could you provide some information?

@andrewsykim
Member

andrewsykim commented Feb 24, 2018

@wingyplus sorry for the late reply!

So far kops + digitalocean support does:

What's left to do is:

  • add Spaces support for the VFS backend (see the sketch below)
  • modify the default user data + nodeup to work with DigitalOcean
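A minimal sketch of what the Spaces item could look like from a user's point of view, assuming the backend mirrors the existing S3 VFS path (Spaces speaks the S3 API, so an alternate endpoint plus Spaces credentials should suffice); the endpoint, variable names, and bucket are placeholders, not the final kops interface:

```bash
# Hedged sketch: DigitalOcean Spaces is S3-compatible, so an S3-style
# VFS backend mainly needs an alternate endpoint and Spaces credentials.
export S3_ENDPOINT=https://nyc3.digitaloceanspaces.com  # region-scoped endpoint
export S3_ACCESS_KEY_ID=<your-spaces-access-key>
export S3_SECRET_ACCESS_KEY=<your-spaces-secret-key>

# Point the kops state store at a (placeholder) Spaces bucket.
export KOPS_STATE_STORE=do://kops-state-store
```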

Happy to answer any more questions you might have :).

@JorgeCeja

Any updates on the current status? What is needed for it to be production-ready? I'd like to contribute; I just need more info on where the integration stands. Thanks!

@andrewsykim
Member

andrewsykim commented Apr 11, 2018

Hi @JorgeCeja! I expect that in the next release you should be able to stand up a working version of a kubernetes cluster. I don't recommend running kops + digitalocean in production any time soon since it's still in the very early stages of development, but there are other use cases it will be helpful for (mine are developing kubernetes itself and E2E tests for other related projects).

Some features that are still missing:

  • support for multiple masters (currently only supports 1)
  • support for multiple regions - this is a debatable feature since I think generally you should have 1 cluster per region anyway.
  • rolling update support
  • generating terraform state files
  • lots of testing :)

Happy to help you tackle any of those. I will post here with instructions on how to get a cluster up and running once the next release is out.

@andrewsykim
Member

For anyone interested, on kops 1.9 you should be able to build a working Kubernetes cluster on DigitalOcean with kops. More details here: https://github.com/kubernetes/kops/blob/master/docs/tutorial/digitalocean.md#getting-started-with-kops-on-digitalocean
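A condensed sketch of the flow the tutorial describes; treat the feature-flag name, token, and cluster values as placeholders and defer to the linked doc for the authoritative steps:

```bash
# DigitalOcean support is feature-gated at this stage (assumed flag name).
export KOPS_FEATURE_FLAGS="AlphaAllowDO"
export DIGITALOCEAN_ACCESS_TOKEN=<your-do-api-token>

# State store lives in a Spaces bucket (see the Spaces sketch above).
export KOPS_STATE_STORE=do://kops-state-store

# Create, then apply, the cluster configuration (names are placeholders).
kops create cluster --cloud=digitalocean --name=dev.example.com \
  --zones=nyc1 --ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster dev.example.com --yes
```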

@JorgeCeja

Awesome work! This made my day! I will initially help by testing it. I am OK with Go, but if you have any resources or guidance on how to approach one of these problems in general, that would be very helpful. From there I can go ahead and give one of them a shot (I am looking into tackling support for multiple masters). Thanks!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 13, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 12, 2018
@andrewsykim
Member

/remove-lifecycle rotten

(will close once a beta is out)

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 15, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 13, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 13, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
