v1beta1 Laundry List #1327
Tell us about your request
This issue contains a list of breaking API changes for v1beta1.

Comments
Fix casing on capacity type to conform w/ k8s value casing: …
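As a hedged illustration of what k8s-style value casing means in practice (the label key and values below follow Karpenter's current convention and are assumptions, not the final v1beta1 spelling):

```yaml
# Sketch: a pod requesting Spot capacity using lowercase,
# Kubernetes-style values (e.g. "spot" / "on-demand") rather than
# EC2-style casing such as "SPOT" / "ON_DEMAND".
apiVersion: v1
kind: Pod
metadata:
  name: capacity-type-example
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot   # assumed key; assumed values: spot | on-demand
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
```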
I'd like to explore separating the …
I'd like to rename …
I'd like to explore creating a decoupled …
I'd like to explore adding …
Check out the related discussion here: #783
Another thought. Remove provisioner …
Thanks for the transparency ... I'll definitely be monitoring this issue for changes that need to be applied to the cdk-karpenter: https://www.npmjs.com/package/cdk-karpenter 👍
Remove kubeletConfiguration.clusterDNSIP in favor of discovery #1241 (comment)
We set these options via LTs currently (good defaults could mitigate some of them); I'm happy to open separate issues for any of these which warrant it.
I've listed a couple of items above related to the ones already listed, but I think there might need to be some additional thought given to generic K8s settings vs provider settings. For example, I'd suggest that …
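To make the generic-vs-provider split concrete, here is a minimal sketch against the current v1alpha5 API, assuming kubelet settings stay on the Provisioner and AWS-specific discovery stays on the node template (the field placement is illustrative, not a committed design):

```yaml
# Generic Kubernetes settings live on the Provisioner...
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  kubeletConfiguration:
    clusterDNS: ["10.0.0.10"]   # generic K8s setting
  providerRef:
    name: default               # cloud-specific settings live behind this reference
---
# ...while AWS-specific settings live on the provider resource.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: my-cluster   # AWS-specific setting
```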
I'd like to add a detailed property for …
The current property of provider is useless to reference.
@stevehipwell is #1327 (comment) still useful to you? We were curious about what the specific use case is. It's pretty straightforward to implement, but we weren't sure it was still necessary.
Related: #967
@ellistarn yes, this is still very important to our use case.
@ellistarn is the provisioner failure logic documented anywhere? And is the specific failure logic for AWS also documented?
Just wanted to chime in on this with a use case I have, hope that is ok. We are setting up a multi-architecture cluster that supports both amd64 and arm64. Because of certain requirements we have on our end, we have to use custom launch templates and cannot allow Karpenter to generate them for us. Since we want to use multiple architectures, we then need to have multiple launch templates since the AMI will be different for each. That means we need multiple provisioners. If we have a pending pod that supports both architectures, we would like for Karpenter to prefer the arm64 nodes. What we would ultimately want would be for Karpenter to select the cheapest option when multiple provisioners match with a pending pod or pods. However, just having a simple priority mechanism would be enough to get us most of the way there.
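A sketch of how a simple priority mechanism could look, assuming a `weight` field on the Provisioner (the field name and semantics are assumptions for illustration, and the launch template names are placeholders):

```yaml
# Two provisioners with custom launch templates, one per architecture.
# A higher weight on the arm64 provisioner would make Karpenter prefer
# it whenever a pending pod fits both.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: arm64
spec:
  weight: 50                      # assumed: preferred when both match
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["arm64"]
  provider:
    launchTemplate: my-arm64-lt   # custom LT with the arm64 AMI
---
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: amd64
spec:
  weight: 10
  requirements:
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  provider:
    launchTemplate: my-amd64-lt   # custom LT with the amd64 AMI
```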
@Cagafuego I think there is a discussion somewhere with requests to move capabilities out of custom LTs; you might want to add your requirements there?
Thanks @stevehipwell. Yes, there is a PR with a doc at #1691 that talks about what we are looking at exposing, to hopefully eliminate the need to use custom LTs. Feel free to add questions/comments there. Ping @suket22
@tzneal I've looked over custom-user-data-and-amis.md and have some feedback. …
I hate to relitigate this, but the syntactic sugar of …
@ellistarn I agree with your view on this; I'd also add that the ID should be preferred for AWS resources with an ID. I think the launch template lookup is currently on name only?
We prefer user-controlled identifiers (names, tags) rather than system-controlled identifiers (IDs), since it removes a serialized creation requirement where each resource needs to know the output of its dependencies (i.e. "wire it up"). This enables static config files and minimizes logic for things like cfn/terraform. The security group/subnet ID problem emerged from a limitation on shared VPCs where tags could not be read, which broke the selector semantic.
@ellistarn I'm not sure of any benefits to using names, but can see lots of limitations. By providing an ID I'm guaranteed that if something significant changes I'll get an error and not unexpected behaviour. Names are effectively sliding identifiers in AWS, so just as you wouldn't use the …

In addition to the above, infrastructure is likely to be layered, so adding a tag to a subnet for Karpenter isn't going to be possible or even desirable for a lot of people. I personally tried adding Karpenter to an EKS cluster built with a complex Terraform module, and not being able to use IDs for everything breaks our architecture practices.
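For reference, the two selector styles being debated, sketched against the v1alpha5 AWS node template (the tag key and the `aws-ids` convention are taken from current usage; treat the exact keys as assumptions):

```yaml
# Tag-based selection: user-controlled identifiers.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: tag-based
spec:
  subnetSelector:
    karpenter.sh/discovery: my-cluster
---
# ID-based selection: system-controlled identifiers that fail loudly
# if the underlying resource changes.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: id-based
spec:
  subnetSelector:
    aws-ids: "subnet-0123456789abcdef0,subnet-0fedcba9876543210"
```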
You could do this with short names, without having to rename the CRD.
@jonathan-innis, WDYT about the idea of moving from instanceProfile to iamRole for user configuration? Instance profiles are kind of awkward, since they don't appear in the AWS console and are specific to EC2 instance launches. You could imagine that we automatically create an instance profile per AWSNodeTemplate and attach the corresponding IAMRole to it. Users would specify the IAMRole at the global configmap or Provisioner CRD level. The permission scope is relatively benign, since creating instance profiles isn't a scary permission, and we already have the PassRole permission required to attach an IAMRole to an InstanceProfile.

Related: #3051
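A minimal sketch of the proposal on the node template, assuming a hypothetical `iamRole` field (Karpenter would create and manage an instance profile per AWSNodeTemplate behind the scenes):

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  # Hypothetical field: the user names an IAM role; Karpenter creates
  # an instance profile for this template and attaches the role to it.
  iamRole: KarpenterNodeRole-my-cluster
```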
Yeah, I think this makes good sense as a usability improvement, considering how many users I have seen with issues around creating and tracking the instance profile.
We can tag the instance profile too. I think that's a helpful thing to do. I might want to have different roles for different provisioners though. For example, nodes made through the monkey-of-chaos provisioner are allowed to invoke … Does this change limit my options there? I know I could use a DaemonSet, but IRSA gives all the Pods in a DaemonSet the same role - and no node-level identity.
(answering my own query - I think I can have 2 provisioners that use two different AWSNodeTemplates, and all is good)
The role should be attached to the node template. We use one role and do not attach any more permissions than needed, due to how painful configuring the aws-auth configmap is.
@ellistarn yeah, instanceProfile is an odd one, not being properly reflected across API/console 🙄. Using IAM roles could work nicely and offer better visibility 👍. But at the same time, not super bothered by profiles either 🤷
Consider defaulting to Bottlerocket if not specified for v1beta1?
If it's feasible, leave the defaulting separate from the API - either default at admission time with a mutating admission controller, or have a config option for the controller, something like that. I'm asking because I'd like the API to be especially unopinionated about cloud providers, whilst still giving cluster operators the option.
I think the defaulting mechanism would live inside the AWS-specific cloudprovider defaulting logic so that it wouldn't be a default across cloudproviders.
What do you mean by this? Do you mean don't contain it in the CRD definition?
Consider pulling out tag-based AMI requirements. The assumption is that there are very few users doing this, if any at all, and those who want to achieve something similar should be able to do so by creating different provisioners.
Yep, exactly. APIs are not the best home for opinions.
I think in this case you would probably want defaulting to expose it, so that it's clearer what you are using. IMO, it seems more ambiguous if the default isn't propagated through to the api-server by the time the CRD is applied, i.e. having an empty string for AMIFamily and, in code, defaulting to BR.
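In other words, instead of an empty `amiFamily` that is defaulted in code, the admission-time default would be materialized on the stored object, e.g. (a sketch, assuming Bottlerocket is the chosen default):

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  amiFamily: Bottlerocket   # default made explicit by admission-time defaulting
  subnetSelector:
    karpenter.sh/discovery: my-cluster
```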
Deprecate the following labels in favor of: …
Consider renaming …
“Purchase” isn't always right either. For example, you might be choosing between capitalized, on-prem equipment vs. bursted capacity in the cloud.
Change …
Should we deprecate …
Consider removing the …
Worth considering this in the context of Cluster Autoscaler annotations.
Scheduling, preemption and eviction are things that happen to Pods; disruption happens to workloads. The thing you disrupt (or not) is a set of Pods. So, I'd call the label …
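Sketched as a pod-level opt-out under that naming (the exact key is an assumption; the point is that the marker lives on the set of Pods being disrupted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    karpenter.sh/do-not-disrupt: "true"   # assumed key, following the "disruption" naming
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
```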
FYI: This laundry list is frozen at this point. We've captured the items that are being tracked in v1beta1 here: https://github.com/aws/karpenter/blob/main/designs/v1beta1-api.md. We'll plan to close this issue when …
v0.32.1 is the latest beta version! We've officially launched beta, so I'm going to close this issue and make a v1 laundry list now!
If you have further breaking changes that you want to see made in Karpenter, these can be added to the new kubernetes-sigs/karpenter#758 issue.