Support multiple endpoints for API (private + internet-facing) #2849

Open
AverageMarcus opened this issue Oct 14, 2021 · 16 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.
priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@AverageMarcus
Member

/kind feature

Describe the solution you'd like

The current implementation of CAPA when using non-managed clusters only allows for the creation of either a private ELB or an internet-facing ELB for the Kubernetes API.

In contrast to this, when creating managed clusters it's possible to create both private and internet-facing endpoints (as this is a feature of EKS).

We'd like the ability to create both types of endpoints while still using non-managed clusters. Ref: giantswarm/roadmap#492

It seems unlikely that CAPA would introduce anything wildly different while the Load Balancer Provider proposal is still in progress, but it might be possible to cover this simple use case (needing both private and internet-facing) by adding a new value to ClassicELBScheme, something like `both`, to indicate that both types should be created.
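For illustration, a minimal sketch of what an AWSCluster spec might look like with such a value. The `both` scheme value is hypothetical (it's the new value being proposed here); the surrounding fields match the existing AWSCluster API:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: example
spec:
  region: eu-west-1
  controlPlaneLoadBalancer:
    # Hypothetical new ClassicELBScheme value proposed in this issue; today
    # the valid values are "internet-facing" (the default) and "internal".
    scheme: both
```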

Anything else you would like to add:

There are some related issues in upstream CAPI:

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 14, 2021
@AverageMarcus
Member Author

I've created a draft PR to outline an approach that could be implemented without the need to wait for the load balancer provider proposal.

#2852

@dlipovetsky
Contributor

In today's CAPA meeting, @randomvariable mentioned that for this to be usable, CAPI needs to support multiple cluster endpoints (kubernetes-sigs/cluster-api#5295). Does that sound right to you, @AverageMarcus?

@AverageMarcus
Member Author

Ideally, yes, but I'm not sure it's strictly required. All cluster resources (e.g. worker nodes) would make use of the internal API endpoint, and we could reference that as the ControlPlaneEndpoint.

Things get a little messier when it comes to the kubeconfig secret generated for the workload cluster. I'm not sure if there are any previous examples of a kubeconfig containing multiple entries, but we could generate a secret containing two different kubeconfig contexts, one for each endpoint.
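To illustrate, a sketch of what such a generated kubeconfig might look like, with one context per endpoint. All names and ELB DNS entries below are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-public
  cluster:
    # hypothetical internet-facing ELB DNS name
    server: https://example-apiserver-1234.eu-west-1.elb.amazonaws.com:6443
    certificate-authority-data: <base64-encoded CA>
- name: example-private
  cluster:
    # hypothetical internal ELB DNS name
    server: https://internal-example-apiserver-5678.eu-west-1.elb.amazonaws.com:6443
    certificate-authority-data: <base64-encoded CA>
users:
- name: example-admin
  user:
    client-certificate-data: <base64-encoded cert>
    client-key-data: <base64-encoded key>
contexts:
- name: example-public
  context:
    cluster: example-public
    user: example-admin
- name: example-private
  context:
    cluster: example-private
    user: example-admin
current-context: example-public
```

Clients could then switch endpoints with `kubectl config use-context example-private` without needing a second secret.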

@randomvariable
Member

randomvariable commented Oct 20, 2021

I don't want to add complexity to CAPA before we've sorted out the problem of consuming these endpoints from a Cluster API perspective. It's not clear from the proposed implementation which endpoint a management cluster should use, depending on where it's located.

@richardcase
Member

Blocked by kubernetes-sigs/cluster-api#5295

/triage accepted
/priority important-longterm
/milestone backlog

cc @lubronzhan

@lubronzhan

I assume CAPI still needs a corresponding change? CAPA could expose both endpoints in the cluster, but CAPI would be responsible for generating a kubeconfig for both endpoints.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 6, 2022
@AverageMarcus
Member Author

AverageMarcus commented Mar 8, 2022

Can we get a

/lifecycle frozen

added to this to match the CAPI issue blocking it (kubernetes-sigs/cluster-api#5295)?

Edit: Didn't realise I had the ability to set the lifecycle 😁

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 8, 2022
@richardcase
Member

/remove-lifecycle frozen

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 12, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2022
@AverageMarcus
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2022
@dlipovetsky
Contributor

From triage 12/2022:

  • The use case is to make kubelets use the private endpoint to avoid bandwidth egress charges, while allowing end users to use the public endpoint.
  • Even without core CAPI support for multiple endpoints, CAPA could create the infra for the private endpoint, and users could modify their kubeconfig to use it (see the sketch below).
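As a sketch of that workaround, a user could edit the cluster entry in their generated kubeconfig to point at the internal load balancer. The DNS names below are hypothetical placeholders, and this assumes the API server certificate includes the internal load balancer's DNS name as a SAN:

```yaml
clusters:
- name: example
  cluster:
    # before: https://example-apiserver-1234.eu-west-1.elb.amazonaws.com:6443
    # after: the hypothetical internal ELB created by CAPA
    server: https://internal-example-apiserver-5678.eu-west-1.elb.amazonaws.com:6443
    certificate-authority-data: <base64-encoded CA>
```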

@dlipovetsky
Contributor

/triage accepted
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Dec 12, 2022
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 18, 2024