
🌱 Make load balancer first-party package in CAPD #8246

Merged: 1 commit merged into kubernetes-sigs:main on Mar 9, 2023

Conversation

killianmuldoon (Contributor):

Moves the loadbalancer package from third_party to first-party in the Docker provider. This lets us update and possibly make changes to the load balancer without needing to sync with the kind implementation.

This version still uses the kindest/haproxy image so it's still pretty tightly coupled with Kind.

Note this should fix the issue described in #7344, but those changes are also available in the Kind loadbalancer library now.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Mar 7, 2023
@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Mar 7, 2023
@@ -14,5 +14,6 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */

-// Package loadbalancer contains external loadbalancer related constants and configuration
+// Package loadbalancer contains external loadbalancer related constants and configuration.
+// This package was originally forked from the kind load balancer at https://github.com/kubernetes-sigs/kind/tree/v0.7.0/pkg/cluster/internal/loadbalancer
killianmuldoon (Contributor, Author):

Not sure if this is the correct level of attribution, open to different opinions here for sure.

Member:

Probably it's fine given that we copy from kubernetes-sigs to kubernetes-sigs.

If you want to make sure, I would ask Ben.

Member:

Kubernetes has done this sort of thing before, including in KIND!

IMHO we have a nice shortcut here in that these projects are both under the same license and "owner" (the project / contributors) so it's fine to just copy. I recommend attribution comments like this both for historical context and to acknowledge that another project originally wrote them.

So yes, this is fine by me. For prior art, see also the kubeadm/kinder tool 🙂

Member:

Sounds good, thx for chiming in!

@killianmuldoon killianmuldoon force-pushed the pr-docker-lb branch 2 times, most recently from 653e092 to c54be0b on March 7, 2023 at 18:33
sbueringer (Member) commented Mar 8, 2023:

Q: How similar is this package now to the upstream kind package? (related: how was it before this PR) Is the maintenance now entirely on us?

killianmuldoon (Contributor, Author):

> Q: How similar is this package now to the upstream kind package? (related: how was it before this PR) Is the maintenance now entirely on us?

It's pretty similar on purpose - I fixed up some linter findings, but that's it. Before, it was pretty out of date. Because this was a third_party package, maintenance was always on us, but we never really did anything as long as it continued to work. This change is mostly about making that maintenance explicit.

sbueringer (Member):

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 8, 2023
k8s-ci-robot (Contributor):

LGTM label has been added.

Git tree hash: b2ee749ec0bf7b854fa72bade2d580ccc09f8186

@@ -16,14 +16,16 @@ limitations under the License.

 package loadbalancer

-// Image defines the loadbalancer image name
-const Image = "haproxy"
+const (
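The diff is truncated at the opening of the new const block. As a hedged sketch (not the merged code), the block might continue along these lines, with the "kindest" repository taken from the PR description and the tag as a placeholder:

const (
	// Image defines the loadbalancer image name.
	Image = "haproxy"
	// DefaultImageRepository defines the loadbalancer image repository.
	// "kindest" matches the kindest/haproxy image this PR still uses.
	DefaultImageRepository = "kindest"
	// DefaultImageTag defines the loadbalancer image tag.
	// Placeholder value; the actual merged tag is not shown in this diff.
	DefaultImageTag = "<kindest-haproxy-tag>"
)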
Member:

We could probably also change to use the "official" haproxy image, so

Image                  = "haproxy-alpine"
DefaultImageRepository = "haproxytech"
DefaultImageTag        = "2.8"

killianmuldoon (Contributor, Author):

The issue (same as the issue in the kind repo) is that the official image doesn't have a config file attached, which means it won't start. We could definitely work around this by setting up the image in the code, but AFAIU CAPI needs to make some changes either way.
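For illustration, "setting up the image in the code" could mean baking a minimal bootstrap config into the package, along the lines kind describes for its own image. A sketch, assuming a Go constant holding the config; the name and contents are illustrative, not from this PR:

package loadbalancer

// DefaultConfig is a hypothetical minimal haproxy configuration whose only
// job is to let haproxy start and listen on the API server port; the real
// backend entries would be generated and hot reloaded at runtime.
const DefaultConfig = `global
  log /dev/log local0
defaults
  mode tcp
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms
frontend control-plane
  bind *:6443
  default_backend kube-apiservers
backend kube-apiservers
  # servers are appended at runtime as control plane machines appear
`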

richardcase (Member):

We used the official image for the sample provider from the KubeCon NA tutorial: https://github.com/capi-samples/cluster-api-provider-docker/tree/main/pkg/loadbalancer

I don't remember us having to change anything in the loadbalancer package, but I can check.

sbueringer (Member), Mar 8, 2023:

Let's please consider that the more we move away from what kind is doing, the more maintenance effort we (probably) have. After all, CAPD clusters are still very closely modeled after kind clusters, and there are dependencies.

killianmuldoon (Contributor, Author), Mar 8, 2023:

Thanks for the pointer @richardcase - this is working. Looks like it's the 'global' haproxy image which doesn't have a config loaded. The haproxytech image has a default config which lets it run initially.

> Let's please consider that the more we move away from what kind is doing, the more maintenance effort we (probably) have. After all, CAPD clusters are still very closely modeled after kind clusters, and there are dependencies.

I'm fine with pushing the image change to a follow-up - but I don't think there's additional maintenance effort here compared to using a third_party folder. The change is that this part of CAPD wasn't really maintained at all previously.

sbueringer (Member):

Up to you. I didn't look into it in detail, but if you're sure that there are no dependencies on the kind haproxy image, I'm fine with switching to the "upstream" one. I just know that we have a lot of dependencies on the kindest/node image, and we even make sure that we start the node image exactly the same way as kind does.

If we don't have anything like that for the lb image, I agree, it shouldn't matter.

sbueringer (Member), Mar 9, 2023:

There's this comment in the kind repo:

> We cannot merely use the upstream haproxy image as haproxy will exit without a minimal config, so we introduce one that will listen on the intended port and hot reload it at runtime with the actual desired config.

https://github.com/kubernetes-sigs/kind/tree/main/images/haproxy

I assume hot reloading works with the haproxytech image?

killianmuldoon (Contributor, Author):

Exactly - this issue exists with the _/haproxy image, but not the haproxytech/haproxy image.

I'm 100 percent fine with sticking with the haproxy image from kind for now, btw - I don't see any motivation to change this. That said, making this directory first-party will allow changes of that kind in the future if someone is interested.

For now let's not change it.
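For context on the hot reload flow discussed above, a sketch of the write-then-signal pattern; the container interface and config path here are assumptions for illustration, not CAPD's actual code:

package loadbalancer

import "context"

// runtimeContainer is a hypothetical abstraction over the few container
// operations a reload needs.
type runtimeContainer interface {
	WriteFile(ctx context.Context, path, content string) error
	Kill(ctx context.Context, signal string) error
}

// reloadConfig writes a freshly rendered haproxy.cfg into the running load
// balancer container, then signals haproxy; in master-worker mode haproxy
// reloads its configuration on SIGHUP without dropping the listener.
func reloadConfig(ctx context.Context, ctr runtimeContainer, cfg string) error {
	if err := ctr.WriteFile(ctx, "/usr/local/etc/haproxy/haproxy.cfg", cfg); err != nil {
		return err
	}
	return ctr.Kill(ctx, "SIGHUP")
}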

Member:

Sounds good!

killianmuldoon (Contributor, Author):

/hold

Waiting for input on #8246 (comment)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 8, 2023
sbueringer (Member):

/lgtm

sbueringer (Member):

/hold cancel
given the answer above

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 9, 2023
fabriziopandini (Member) left a comment:

/lgtm
/approve

k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 9, 2023
killianmuldoon (Contributor, Author):

/retest

@k8s-ci-robot k8s-ci-robot merged commit 9232ca7 into kubernetes-sigs:main Mar 9, 2023