
Possibility to set dns-cluster-ip for Karpenter provider #928

Closed
SoulKyu opened this issue Dec 6, 2021 · 13 comments · Fixed by #1013
Labels
burning (Time sensitive issues) · feature (New feature or request) · launch-templates (Questions related to AWS launch templates) · networking (Issues related to Karpenter and cluster networking)

Comments

@SoulKyu

SoulKyu commented Dec 6, 2021

It would be very nice to add a way to override the dns-cluster-ip on the LaunchTemplate when you use a custom service CIDR.

I'm currently using 10.10.0.0 as my service CIDR, and I can't get DNS working because my instances are launched with 172.20.0.10 as the DNS IP. So every pod launched on an instance provisioned by Karpenter points at the wrong DNS IP.
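
For reference, the wrong value is visible from inside any pod on a Karpenter-launched node, since the nameserver in /etc/resolv.conf comes from the node kubelet's clusterDNS setting (pod name below is a placeholder):

```sh
# Check which DNS server a pod actually received; the nameserver entry is
# populated from the node kubelet's clusterDNS value.
kubectl exec my-pod -- cat /etc/resolv.conf
# nameserver 172.20.0.10   <- the EKS default, not the kube-dns service IP
#                             inside the 10.10.0.0/16 service CIDR
```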

Maybe you already have something for this that I haven't found in the documentation.

Thanks.
BR Guillaume.

@SoulKyu SoulKyu added the feature New feature or request label Dec 6, 2021
@akestner akestner added launch-templates Questions related to AWS launch templates networking Issues related to Karpenter and cluster networking labels Dec 6, 2021
@ellistarn
Contributor

ellistarn commented Dec 7, 2021

This is an interesting use case. We should include this in our upcoming launch template extensions support.

@tuananh
Contributor

tuananh commented Dec 7, 2021

@ellistarn May I ask what the plan is for launch template extension support? Is this documented somewhere?

I'm still reading the codebase, but from the look of it, I would have to add another flag, similar to ClusterEndpoint, and modify getUserData() to render dns-cluster-ip there.

However, this is kind of ad hoc and not generic enough.

@ellistarn
Contributor

ellistarn commented Dec 7, 2021

The CLI flags are only for use cases that make sense for all provisioners. We likely want to pass this through the provisioner CRD. I'm thinking something along the lines of:

```yaml
spec:
  provider:
    ami: /aws/bottlerocket/...
    userDataExtensions: ...
```

We need to think about this, though. We might instead want to model it like:

```yaml
spec:
  kubeletFlags:
    key1: val1
    key2: val2
```

We're tracking a bunch of launch-template override requests in our issues and will likely tackle them all at once with a design, etc. I'd love to hear any ideas or feedback you have in this space.

@mikesir87
Contributor

We actually just ran into this exact issue, needing support for dns-cluster-ip. To work around it for now, I can obviously create my own launch template and use that.

I like the idea of being able to provide some flags without needing to write an entire user-data script.
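
For anyone needing the same workaround, it boils down to a custom launch template whose user data passes the address straight to the EKS-optimized AMI's bootstrap script. A minimal sketch (cluster name and IP are placeholders):

```sh
#!/bin/bash
# Minimal user data for a custom launch template on the Amazon Linux 2
# EKS-optimized AMI. --dns-cluster-ip tells bootstrap.sh to write this
# address into the kubelet config instead of deriving a default
# (172.20.0.10 or 10.100.0.10) from the VPC CIDR.
/etc/eks/bootstrap.sh my-cluster --dns-cluster-ip 10.10.0.10
```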

@suket22 suket22 added the burning Time sensitive issues label Dec 9, 2021
@mikesir87
Contributor

> The CLI flags are only for use cases that make sense for all provisioners. We likely want to pass this through the provisioner CRD.

One thought here... in the case of dns-cluster-ip, that would be the same value for all Provisioners, at least in my clusters. Maybe it does make sense for some of these to be higher-level config 🤔

@ellistarn
Contributor

I want to be careful about the proliferation of CLI flags, and to be explicit that we do not currently plan to support backwards compatibility for CLI flags.

I can see implementing this as a short term workaround until we find a better long term solution. As long as the flag has plenty of caveats (and you agree to upgrade when we find a better solution), I think this is a decent path forward.

@mikesir87
Contributor

Would you be ok with, for right now, simply allowing a CLI flag that's something like --additional-bootstrap-flags? That way, we can use it for dns-cluster-ip, but might meet other temporary needs. Thoughts?

@SoulKyu
Author

SoulKyu commented Dec 13, 2021

I agree with @mikesir87. Right now I have to edit the launch template created by Karpenter to manually add the flags to make it work properly, which is a bad thing for automation, especially for cloud infrastructure...
A temporary flag could do the job for a first commit, I guess! :D

@ellistarn
Contributor

> Would you be ok with, for right now, simply allowing a CLI flag that's something like --additional-bootstrap-flags? That way, we can use it for dns-cluster-ip, but might meet other temporary needs. Thoughts?

I'm supportive of an experimental feature --additional-kubelet-args. We may eventually bake this into the API.

@mikesir87
Contributor

> I'm supportive of an experimental feature --additional-kubelet-args.

While I do agree that would be useful, the problem is that the dns-cluster-ip flag is passed to the bootstrap.sh script directly, not through the underlying --kubelet-extra-args flag. While it can be set using kubelet args, the docs mention that that approach is deprecated and encourage admins to set it by modifying the kubelet config file (which is how the bootstrap.sh script does it).
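
Concretely, --dns-cluster-ip makes bootstrap.sh rewrite the clusterDNS field of the kubelet config file. A rough sketch of the effect on the AL2 AMI (not the script's exact code):

```sh
# Patch clusterDNS in the kubelet config file rather than passing the
# deprecated --cluster-dns kubelet flag.
KUBELET_CONFIG=/etc/kubernetes/kubelet/kubelet-config.json
echo "$(jq '.clusterDNS=["10.10.0.10"]' $KUBELET_CONFIG)" > $KUBELET_CONFIG
```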

@ellistarn
Contributor

Can we use karpenter's bootstrap to edit this file?

@mikesir87, this may be a great opportunity to implement this using a configmap instead of as a CLI arg. That might allow you to more easily pass a YAML patch to the kubelet config file.
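
As a rough sketch of that direction (nothing here exists yet; the ConfigMap name and key are hypothetical), a global ConfigMap could carry a patch that Karpenter merges into the generated user data:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings   # hypothetical name
  namespace: karpenter
data:
  # Hypothetical key: a patch applied to each node's kubelet config file.
  kubelet-config-patch: |
    clusterDNS:
      - 10.10.0.10
```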

@mikesir87
Contributor

> Can we use karpenter's bootstrap to edit this file?

Nah... it would go in as part of the user data that Karpenter is creating, simply as an additional flag to the bootstrap.sh script. That script will then modify the file on the launched machine.

> this may be a great opportunity to implement this using a configmap instead of as a CLI arg

I've been thinking that, as we start to do more "global" config, it might make sense to explore a ConfigMap-based approach, since the number of potential CLI flags is going to grow as more cloud providers are added. I'm definitely up for the idea.

@mikesir87
Contributor

Now that my other PR is wrapping up, I'd like to start taking a stab at this one. I'm thinking of something like this:

```yaml
spec:
  kubeletFlags:
    dnsClusterIp: 10.0.1.100
```

With that, I'll update the Provisioner spec to support that API and then introduce the ability to add the appropriate flag when creating the user data. Obviously, this would be specific to the AMI type, but it allows each provider to figure out how to translate the specified flags into the proper config needed for the instance (e.g., AL2 instances will use bootstrap.sh, while Bottlerocket will need a TOML file).
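
For illustration, the same dnsClusterIp value would surface differently per AMI family; these are sketches of the generated user data, not final output:

```sh
# AL2: the value becomes a flag on the bootstrap invocation in user data.
/etc/eks/bootstrap.sh my-cluster --dns-cluster-ip 10.0.1.100
```

```toml
# Bottlerocket: the value becomes a key in the TOML user data.
[settings.kubernetes]
cluster-dns-ip = "10.0.1.100"
```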

Sound reasonable @ellistarn?

ellistarn added a commit that referenced this issue Dec 17, 2021
Add ability to specify ClusterDNSIP (#1013)

* Add ability to specify ClusterDNSIP

Signed-off-by: Michael Irwin <[email protected]>

Resolves #928

* Add unit test to validate clusterDNSIP setting

Signed-off-by: Michael Irwin <[email protected]>

* Update pkg/apis/provisioning/v1alpha5/constraints.go

Co-authored-by: Ellis Tarn <[email protected]>

* Rename kubelet args to align with kubelet types

Signed-off-by: Michael Irwin <[email protected]>

* CRD docs update

Signed-off-by: Michael Irwin <[email protected]>

* Refactored userData creation to reduce code complexity

Signed-off-by: Michael Irwin <[email protected]>

* Rename kubeletArgs to kubeletConfiguration

Signed-off-by: Michael Irwin <[email protected]>

* Fix mismatch between struct name and JSON key

Signed-off-by: Michael Irwin <[email protected]>

Co-authored-by: Ellis Tarn <[email protected]>