
Add topology with private masters and public nodes #1241

Closed
bcorijn opened this issue Dec 22, 2016 · 7 comments
bcorijn commented Dec 22, 2016

Hi,

With the introduction of the private topology, I think a big step was made. However, while the original issue and PR contained a mixed model where only the masters were private, that option was sadly discarded in the end.
I believe that if users do not want their masters/API to be publicly accessible, a setup where the masters are put in a private subnet with explicit bastion/VPN access is preferable to having them in a public subnet and locking down access through security groups. It also follows what is generally regarded as AWS best practice for subnetting much more closely.

This could be achieved either by introducing a third topology ('privateMasters', for example) or by allowing the topology of nodes and masters to be set separately, although the latter would also let users make the masters public and the nodes private (for which I personally cannot think of a use case).
From my basic understanding of how everything gets set up, the biggest issue seems to be that the single route table would have to be split into a public and a private table?
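For reference, one way the mixed model could plausibly be expressed is as per-subnet types in the cluster spec. The sketch below is purely illustrative (the subnet names, CIDRs, and the idea of mixing `Public` and `Private` subnets in a single spec are assumptions about a possible design, not current kops behavior):

```yaml
# Hypothetical sketch of a mixed topology in a kops cluster spec:
# workers land in a Public subnet, masters in a Private one,
# with a Utility subnet for NAT gateway / bastion resources.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  subnets:
  - name: us-west-1a            # public subnet for worker nodes
    type: Public
    zone: us-west-1a
    cidr: 172.20.32.0/19
  - name: masters-us-west-1a    # private subnet for masters only
    type: Private
    zone: us-west-1a
    cidr: 172.20.64.0/19
  - name: utility-us-west-1a    # utility subnet for NAT/bastion
    type: Utility
    zone: us-west-1a
    cidr: 172.20.0.0/22
```

In such a layout, each subnet type would get its own route table: the public and utility subnets route through an internet gateway, while the private masters subnet routes through a NAT gateway.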

@chrislovecnm

@kris-nova networking guru... all you

@krisnova

We have designed and looked at this before. Basically, all we need to do here is introduce a concept of mixed topologies and support them. So this is mostly a testing issue; hopefully we can find any bugs in the process and knock them out.

Replication steps

@brunoWoo for this ticket, can you please define your use case and replication steps for this feature? What would you actually want to type into the CLI to get kops to do this?

Success

@brunoWoo what would be a way of verifying that this is working up to par for this use case? Just a working k8s API with masters in a private subnet?

Testing

Any volunteers to run the replication steps that we gather and dump their logs? That should help us define any problems, and get a WIP PR open.

Helping

We love help! If you think you are up to the challenge of helping us code this, please ping one of us and we can get you started. I am on Slack quite often, so reach out whenever and we can go from there.


bcorijn commented Dec 27, 2016

Replication steps

The most logical option would be to add a third --topology value to the CLI, say --topology=privatemasters, which would be handled identically to --topology=private.
My personal use case is that I am fine with (and even prefer) running my public-facing applications on worker nodes in a public subnet, but I would strongly prefer having my API secured in a private subnet that is accessed through a bastion host or a VPN (which could be provisioned through kops, or just manually before/afterwards). This way it can only be reached by users who have been granted access to that bastion, without having to worry about keeping security group rules up to date and secure.
E.g.: kops --zones=us-west-1a,us-west-1b --topology=privatemasters
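For comparison, the full invocations might look roughly like the sketch below. The `privatemasters` value is the proposal in this issue and does not exist; the first command uses existing flags (`--topology`, `--networking`, `--bastion`) as a baseline:

```shell
# Existing behavior: everything (masters and nodes) in private subnets,
# reached via a kops-provisioned bastion.
kops create cluster \
  --name=example.k8s.local \
  --zones=us-west-1a,us-west-1b \
  --topology=private \
  --networking=weave \
  --bastion

# Proposed mixed topology (hypothetical flag value, not implemented):
# masters private, worker nodes public.
kops create cluster \
  --name=example.k8s.local \
  --zones=us-west-1a,us-west-1b \
  --topology=privatemasters
```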

Success

@kris-nova: a working cluster with private masters indeed. Playing around with the private topology, I see a public ELB gets added for the masters anyway, so for me personally it would be best combined with #1097, to keep all access private.
A correctly working bastion host could also be one of the success factors, but I haven't been able to get one working on a private topology, so I can't judge whether any changes would be needed for this use case. Personally I would probably provision a bastion manually, as I prefer to bring my own AMI (passing a bastion AMI into kops would be great, but that is a different discussion).

@justinsb added this to the 1.5.0 milestone Dec 28, 2016
@justinsb

> passing a bastion AMI into KOPS would be great, but that is a different discussion

This should "just work" now that bastion is an instance group - just change the image. Do open an issue if there's any problem! (And, BTW, anything special about that AMI you're bringing? Should we choose our bastion AMI differently?)
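Concretely, changing the image would mean editing the bastion instance group spec, roughly as sketched below (the instance group name and the AMI ID are illustrative assumptions; the bastion group name and exact spec fields may differ in your cluster):

```yaml
# Hypothetical result of "kops edit ig bastions":
# point the bastion instance group at your own AMI.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: bastions
spec:
  role: Bastion
  image: ami-0123456789abcdef0   # illustrative placeholder for a custom bastion/VPN AMI
  machineType: t2.micro
  minSize: 1
  maxSize: 1
  subnets:
  - utility-us-west-1a
```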

I started on this a little bit (by editing the cluster post-creation, changing nodes to public). There are a few challenges:

  • We currently block mixed configurations. I don't think there is any reason to do so anymore.
  • The nodes will default to the private subnet, so the IG needs to be edited to be the public subnet. This is a little awkward, because now you have a huge private subnet with a few masters in it and a tiny public subnet with your nodes in it. We can default differently, but not sure what to do here.
  • It still creates an ELB.

But ... it seems to work. Put up a WIP PR: #1503
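The instance-group edit described in the second bullet above might look something like the sketch below (the subnet name is an illustrative assumption following kops' `utility-<zone>` convention for public/utility subnets):

```yaml
# Hypothetical result of "kops edit ig nodes":
# move the nodes instance group into the public (utility) subnet
# while the masters stay in the private subnet.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  role: Node
  subnets:
  - utility-us-west-1a   # public subnet; masters remain private
```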


bcorijn commented Jan 19, 2017

I wanted to use my bastion as VPN endpoint as well, to reach the private masters this way. Being able to pass in my own VPN AMI would make that easier.

Intuitively I would say the subnet sizes should be the other way around: a small private subnet for the masters and a big public subnet for the nodes and pods.

@chrislovecnm

@bcorijn that is actually a separate issue that should be fixed by some work I am doing with phases.

@chrislovecnm

Going to close this issue; please reopen if needed.
