v1 Laundry List #758
Comments
Drop support for |
I quite like the convenience of having finalizers on |
I'm open-minded about it, and would love to hear more feedback. Minimally, we need to stop deleting the node and allow the kubelet to deregister as part of our standard termination workflow, as it's currently causing daemons to leak. |
Align the |
Drop support for do-not-consolidate and do-not-evict annotations from v1alpha5. |
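To make the migration concrete, here is a rough sketch of the change as it would appear on a pod spec; the v1alpha5 key shown is one of the annotations being dropped, and karpenter.sh/do-not-disrupt is the consolidated replacement referenced in later comments. Pod names and values are illustrative.

```yaml
# v1alpha5 (being dropped): separate opt-out annotations such as do-not-evict
apiVersion: v1
kind: Pod
metadata:
  name: critical-batch-job          # illustrative
  annotations:
    karpenter.sh/do-not-evict: "true"
---
# Replacement (sketch): a single disruption opt-out annotation
apiVersion: v1
kind: Pod
metadata:
  name: critical-batch-job
  annotations:
    karpenter.sh/do-not-disrupt: "true"
```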
Can we move the Karpenter docs from the AWS provider GitHub repo to the kubernetes-sigs Karpenter repo? |
@garvinp-stripe there is an ongoing discussion on this in #832 |
We should consider requiring |
Consider changing the |
We should check that the v1 API will work with |
We could retain support but inject a |
NodeClaim's Some cloud providers might not have an equivalent field; for example, there might be a JSON document that defines bootstrap for a new machine but no single field within that document that uniquely specifies the initial OS. My ask for the v1 API is to provide a place to put arbitrary values that are relevant to the node class for this NodeClaim. For AWS, it could be a structure containing the ARN of the AMI and nothing else. For Azure, it could be the Maybe one day for AWS the status of a NodeClaim includes the launch template ARN as well as the image ARN, which could be helpful - and, with this model, doesn't require a change to the v1 API of NodeClaim. |
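As a sketch only, assuming a hypothetical free-form field name (providerStatus here) and illustrative AWS values, the proposal might look something like this on a NodeClaim:

```yaml
apiVersion: karpenter.sh/v1
kind: NodeClaim
metadata:
  name: example-nodeclaim
status:
  # Hypothetical provider-owned block; field names below are illustrative, not a settled API.
  providerStatus:
    imageARN: arn:aws:ec2:us-west-2::image/ami-0123456789abcdef0   # "the ARN of the AMI and nothing else"
    # A provider could later add detail here without changing the NodeClaim v1 API, e.g.:
    # launchTemplateARN: arn:aws:ec2:us-west-2:111122223333:launch-template/lt-0abc1234567890def
```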
I'd like to remove references to |
One thing I'm curious about with this is hearing from the Azure folks what they think about the current implementation of the |
I'd like to see the AWS-specific labels for resource-based selection move to a cloud-provider-agnostic API group and label. Currently in AWS, these are |
Proposal sounds fine. |
Just FYI On the AKS Providers selectors |
@sftim Should it be in the top-level K8s namespace or would this get scoped into |
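To make the label proposal above concrete, a sketch of a NodePool requirement; karpenter.k8s.aws/instance-cpu is the existing AWS-specific key, while the agnostic key shown is hypothetical and its namespace is exactly the open question here:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: example
spec:
  template:
    spec:
      requirements:
        # Today, AWS provider-specific:
        # - key: karpenter.k8s.aws/instance-cpu
        #   operator: Gt
        #   values: ["7"]
        # Proposed, cloud-provider-agnostic (key name and namespace are hypothetical):
        - key: karpenter.sh/instance-cpu
          operator: Gt
          values: ["7"]
```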
We should settle on what we want our stable JSON printer columns to be for NodePool and NodeClaims. |
Either is fine with me. Whatever we pick, it should be obvious that CPU is a quantity for the node overall, and that values such as "AMD64" or "Graviton 42" or "Ryzen Pro" would be very wrong.
|
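As a starting point, a sketch of what stable printer columns for NodeClaim could look like; this is an excerpt of a CRD versions entry, not a settled column set. CPU is read from status.capacity, so it renders as a node-level quantity such as "16" rather than an architecture string:

```yaml
# Excerpt from a CustomResourceDefinition versions[] entry (illustrative only)
additionalPrinterColumns:
  - name: Node
    type: string
    jsonPath: .status.nodeName
  - name: CPU
    type: string                    # resource quantity for the whole node, e.g. "16"
    jsonPath: .status.capacity.cpu
  - name: Memory
    type: string
    jsonPath: .status.capacity.memory
  - name: Ready
    type: string
    jsonPath: .status.conditions[?(@.type=="Ready")].status
```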
I'm not sure that we have enough examples of this point from different cloud providers of trying to set a bunch of arbitrary fields. I could see the utility for it, but I'd rather let cloud providers add this data to labels and annotations until we get a critical mass of "things" where it's really easy to conceptualize what should go in that bucket. The other thing here is that it would be nice if we just didn't have an arbitrary bucket of things to begin with and we could just type everything. I get this probably isn't realistic since there are enough differences between CloudProviders, but it's probably sufficient to take these cases one-by-one so we can evaluate whether they belong in the common schema or not. |
I'd like to recommend that #651 be addressed before v1 is released. It can waste a lot of time and cause confusion for cluster admins and their users when they're tracking down why Karpenter can evict a pod that has the "do-not-disrupt" annotation. |
Agreed. This is definitely on the list of things that we want to knock-out by v1. |
I'd like to propose that Karpenter start tainting nodes with |
I'd also like to propose that Karpenter re-consider its entire monitoring story before going v1. We should think about our metrics, logging, eventing, status conditions, and other status fields that we currently surface back to users: #1051 |
I'd like to consider shifting the |
The kubelet is cloud-provider-agnostic; maybe EKS should allow users to configure feature gates and other k8s configuration options instead? 🙂 This is the primary reason that CAPI has both |
I opened a small PR that brings a few quality-of-life improvements to So I'm proposing to look at the output from the different CRDs and adjust accordingly. My changes make it possible to understand at a glance the utilization of the different NodePools in your cluster. |
We should ensure that the README for https://artifacthub.io/packages/helm/karpenter/karpenter very, very clearly tells people it's out of date. We could consider publishing a new release of that legacy chart that is almost the same code, but you get a |
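One possible shape for that (a sketch, not a committed plan): ship one final release of the legacy chart whose only real change is a loud deprecation gate. The values key below is hypothetical; fail is a standard Helm template function.

```yaml
# templates/deprecation.yaml in the legacy chart (hypothetical)
{{- if not .Values.acknowledgeDeprecated }}
{{- fail "This chart is deprecated and no longer maintained. See https://karpenter.sh for current installation instructions, or set acknowledgeDeprecated=true to install anyway." }}
{{- end }}
```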
Not sure where the issue lives, but we should clearly define the responsibility of each CRD Karpenter creates. I think it's fine to move it, but I want to make sure each move isn't just throwing things from one bucket to another. |
I wish this was the case. But from the Cloud Providers that we've heard from at this point, it strikes me that each CloudProvider is rather opinionated about what can and can't be set.
100% agree. I think we are really talking about high-level node scheduling and selection in the NodePool vs. node-level configuration on the NodeClass side. I agree that an addition to the docs on this could be super valuable, maybe a preamble in https://karpenter.sh/docs/concepts/ that talks about the purpose of each of these APIs. |
kubeletConfiguration.failOnSwap and kubeletConfiguration.memorySwap in Provisioner spec #663
We could support it but emit a |
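For reference, the fields requested in #663 would sit under the Provisioner's kubeletConfiguration block. A sketch with illustrative values, using the naming from the issue title (the upstream kubelet field is failSwapOn); these fields were not part of the supported surface at the time:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  kubeletConfiguration:
    failOnSwap: false              # naming per #663; upstream kubelet calls this failSwapOn
    memorySwap:
      swapBehavior: LimitedSwap    # illustrative value
```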
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Is there any appetite to change all references to |
Oh that's interesting. Are you thinking about kwok specifically? In a lot of ways this is just a "fake" cloudprovider. I'm not sure how much value we would get here by changing the naming |
Closing this one out since we closed out the v1 items that we were taking in with #1222 |
/close |
@jonathan-innis: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
Description
This issue contains a list of breaking API changes that we want to make for v1.
Linked Cloud Provider v1 Laundry Lists