
v1 Laundry List #758

Closed
jonathan-innis opened this issue Oct 31, 2023 · 41 comments
Labels
kind/feature (Categorizes issue or PR as related to a new feature), roadmap (We're thinking about next steps), v1 (Issues requiring resolution by the v1 milestone)

Comments

@jonathan-innis
Member

jonathan-innis commented Oct 31, 2023

Description

This issue contains a list of breaking API changes that we want to make for v1.

Linked Cloud Provider v1 Laundry Lists

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@jonathan-innis added the kind/feature, roadmap, and v1 labels on Oct 31, 2023
@ellistarn
Contributor

Drop support for kubectl delete node, now that we have kubectl delete nodeclaim instead. While supporting node deletion is convenient for Karpenter users, it teaches them to take an action that's dangerous and disruptive in non-Karpenter environments.

@mbevc1

mbevc1 commented Nov 1, 2023

I quite like the convenience of having finalizers handle kubectl delete node; maybe I'm just being lazy :) But it also offers a safety net, I reckon 🤔

@ellistarn
Contributor

ellistarn commented Nov 1, 2023

I'm open-minded about it and would love to hear more feedback. Minimally, we need to stop deleting the node and allow the kubelet to deregister as part of our standard termination workflow, since the current behavior causes daemons to leak.

@njtran transferred this issue from aws/karpenter-provider-aws on Nov 2, 2023
@jonathan-innis
Member Author

Align the karpenter_interruption_actions_performed metric values with the karpenter_disruption_actions_performed values. The values currently differ, and since these components have similar names, the values should be consistent.

@njtran
Contributor

njtran commented Nov 11, 2023

Drop support for the do-not-consolidate and do-not-evict annotations from v1alpha5.

@garvinp-stripe
Contributor

I think having the finalizer on the NodeClaim makes sense; that way, as operators, we can force the node to terminate outside of the finalizer when necessary. #796 / #743

@garvinp-stripe
Contributor

Can we move the Karpenter docs from the AWS provider GitHub repo to kubernetes-sigs/karpenter?

@Bryce-Soghigian
Member

@garvinp-stripe there is an ongoing discussion on this: #832

@jonathan-innis
Member Author

jonathan-innis commented Dec 11, 2023

We should consider requiring apiVersion and kind in the nodeClassRef so that we can dynamically look up the NodeClass from the NodePool in the neutral code. Some use cases for this are called out here: #337

@jonathan-innis
Member Author

Consider changing the apiVersion field in the nodeClassRef to group, matching what the Gateway API does. Versioning the reference by apiVersion isn't useful here, since the reference is only ever reasoned about in terms of a single version anyway. We really only need the group.
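A minimal sketch of what a Gateway API-style reference could look like on the NodePool template; the field names and the use of the AWS EC2NodeClass are illustrative, not a final v1 shape:

```yaml
# Fragment of spec.template.spec on a NodePool (illustrative only).
nodeClassRef:
  group: karpenter.k8s.aws   # would replace apiVersion: karpenter.k8s.aws/v1beta1
  kind: EC2NodeClass
  name: default
```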

@sftim

sftim commented Dec 19, 2023

We should check that the v1 API will work with InPlacePodVerticalScaling if/when that becomes stable: #829

@sftim

sftim commented Dec 19, 2023

Drop support for kubectl delete node, now that we have kubectl delete nodeclaim instead. While supporting node deletion is convenient for Karpenter users, it teaches them to take an action that's dangerous and disruptive in non-Karpenter environments.

We could retain support but inject a Warning: when we see it. I think we can do that using just a ValidatingAdmissionPolicy, too.
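A rough sketch of that idea, assuming a cluster where ValidatingAdmissionPolicy is available at admissionregistration.k8s.io/v1 (older clusters would use the beta API); the policy name and warning text are made up for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: warn-on-node-delete
spec:
  failurePolicy: Ignore
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["DELETE"]
        resources: ["nodes"]
  validations:
    # Always "fails", so the Warn binding below surfaces the message on every node delete.
    - expression: "false"
      message: "Prefer 'kubectl delete nodeclaim' for Karpenter-managed nodes; deleting the Node directly is disruptive on clusters without Karpenter's finalizer."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: warn-on-node-delete
spec:
  policyName: warn-on-node-delete
  validationActions: ["Warn"]  # emit a Warning: header instead of denying the request
```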

@sftim

sftim commented Dec 19, 2023

NodeClaim's status field has imageID hard-coded, which I think refers to AMIs (and maybe also AKIs, the legacy thing before AMIs). Do we want to provide something more generic?

Some cloud providers might not have an equivalent field; for example, there might be a JSON document that defines bootstrap for a new machine but no single field within that document that uniquely specifies the initial OS.

My ask for the v1 API is to provide a place to put arbitrary values that are relevant to the node class for this NodeClaim. For AWS, it could be a structure containing the ARN of the AMI and nothing else. For Azure, it could be the imageReference.

Maybe one day the status of an AWS NodeClaim includes the launch template ARN as well as the image ARN, which could be helpful; with this model, that wouldn't require a change to the v1 API of NodeClaim.
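For context, an abridged sketch of what a NodeClaim status looks like on AWS today (the values are made up); imageID is the field under discussion:

```yaml
# Abridged NodeClaim status as reported by the AWS provider (illustrative values).
status:
  providerID: aws:///us-west-2a/i-0123456789abcdef0
  imageID: ami-0123456789abcdef0   # AMI-specific today; other providers may have no single equivalent
  capacity:
    cpu: "4"
    memory: 16Gi
```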

@jonathan-innis
Member Author

I'd like to remove references to Knative for v1, including the tasklist that's included here (#332). That includes leveraging the controller-runtime logger (#922) rather than our own Knative-based custom-built logger, as well as relying on our own status conditions library rather than the Knative-based one.

@jonathan-innis
Member Author

We absolutely need to define #493 ahead of releasing the v1 APIs. Given that, I think we need to do #909 so that we can default these values pre-v1 and then require them post-v1.

@jonathan-innis
Member Author

jonathan-innis commented Jan 17, 2024

For Azure, it could be the imageReference.

One thing I'm curious about here is hearing from the Azure folks what they think of the current implementation of the imageID field on the NodeClaim. It was added on the assumption that every node has an imageID; I'd also like to know whether there are other fields on the NodeClaim that y'all would like to become first-class citizens.

cc: @Bryce-Soghigian @tallaxes @jackfrancis

@njtran
Contributor

njtran commented Jan 18, 2024

I'd like to see the AWS-specific labels for resource-based selection move to a cloud-provider-agnostic API group and label.

Currently in AWS, these are karpenter.k8s.aws/instance-cpu and karpenter.k8s.aws/instance-memory. Ideally, since this is a common concept, we could use karpenter.sh/instance-cpu and karpenter.sh/instance-memory.
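A sketch of how that could read in a NodePool, assuming the proposed neutral labels were adopted (karpenter.sh/instance-cpu and karpenter.sh/instance-memory do not exist today, and the apiVersion, values, and nodeClassRef are illustrative):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        # Proposed neutral labels; today these are karpenter.k8s.aws/instance-cpu
        # and karpenter.k8s.aws/instance-memory on the AWS provider.
        - key: karpenter.sh/instance-cpu
          operator: Gt
          values: ["3"]
        - key: karpenter.sh/instance-memory
          operator: Gt
          values: ["8191"]
      nodeClassRef:
        name: default
```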

@sftim

sftim commented Jan 18, 2024

I'd like to see the AWS-specific labels for resource-based selection move to a cloud-provider-agnostic API group and label.

Currently in AWS, these are karpenter.k8s.aws/instance-cpu and karpenter.k8s.aws/instance-memory. Ideally, since this is a common concept, we could use karpenter.sh/instance-cpu and karpenter.sh/instance-memory.

The proposal sounds fine.
Aside: if we wanted kubernetes.io/node-cpu and kubernetes.io/node-memory, we can probably have 'em.

@Bryce-Soghigian
Member

I'd like to see the AWS-specific labels for resource-based selection move to a cloud-provider-agnostic API group and label.

We can probably also share the karpenter.sh GPU labels, instance family, etc.

Just FYI, here are the AKS provider's selectors with well-known labels:
https://learn.microsoft.com/en-gb/azure/aks/node-autoprovision?tabs=azure-cli#sku-selectors-with-well-known-labels

@jonathan-innis
Member Author

Aside: if we wanted kubernetes.io/node-cpu and kubernetes.io/node-memory, we can probably have 'em.

@sftim Should it be in the top-level K8s namespace or would this get scoped into node.kubernetes.io?

@jonathan-innis
Member Author

We should settle on what we want our stable JSON printer columns to be for NodePools and NodeClaims.
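For reference, these are declared via additionalPrinterColumns on the CRD; a sketch of what a stable set might look like for NodePool (the column names and JSON paths here are illustrative, not a proposal):

```yaml
# Fragment of the nodepools.karpenter.sh CRD's spec.versions[] entry.
additionalPrinterColumns:
  - name: NodeClass
    type: string
    jsonPath: .spec.template.spec.nodeClassRef.name
  - name: Weight
    type: integer
    jsonPath: .spec.weight
    priority: 1   # shown only with `kubectl get -o wide`
  - name: Age
    type: date
    jsonPath: .metadata.creationTimestamp
```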

@sftim

sftim commented Jan 22, 2024

Should it be in the top-level K8s namespace or would this get scoped into node.kubernetes.io?

Either is fine with me. Whatever we pick, it should be obvious that CPU is a quantity for the node overall, and that values such as "AMD64" or "Graviton 42" or "Ryzen Pro" would be very wrong.

node.kubernetes.io/resource-cpu or node.kubernetes.io/total-cpu could help signal that.

@jonathan-innis
Member Author

My ask for the v1 API is to provide a place to put arbitrary values that are relevant to the node class for this NodeClaim

I'm not sure we have enough examples from different cloud providers of wanting to set a bunch of arbitrary fields. I can see the utility, but I'd rather let cloud providers put this data in labels and annotations until we reach a critical mass of "things" where it's easy to conceptualize what belongs in that bucket.

The other thing here is that it would be nice if we didn't have an arbitrary bucket of things to begin with and could just type everything. I get that this probably isn't realistic since there are enough differences between CloudProviders, but it's probably sufficient to take these cases one by one so we can evaluate whether they belong in the common schema or not.

@wmgroot
Contributor

wmgroot commented Feb 26, 2024

I'd like to recommend that #651 be addressed before v1 is released.

This can waste a lot of cluster admins' and users' time tracking down why Karpenter can still evict a pod that has the "do-not-disrupt" annotation.

@jonathan-innis
Member Author

I'd like to recommend that #651 be addressed before v1 is released

Agreed. This is definitely on the list of things that we want to knock out by v1.

@jonathan-innis
Member Author

I'd like to propose that nodes start up tainted with karpenter.sh/unregistered (applied through their startup script), and that Karpenter signal that the node has been seen and operated on by removing the taint: #1049
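A sketch of the bootstrap side of that proposal; the NoExecute effect and the use of kubelet configuration (rather than a provider-specific mechanism) are assumptions here, and #1049 carries the actual design:

```yaml
# Kubelet configuration supplied by the node's startup script, so the node
# joins already tainted; Karpenter would remove the taint after registration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerWithTaints:
  - key: karpenter.sh/unregistered
    effect: NoExecute
```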

@jonathan-innis
Member Author

I'd also like to propose that Karpenter reconsider its entire monitoring story before going v1. We should think about our metrics, logging, eventing, status conditions, and the other status fields that we currently surface back to users: #1051

@jonathan-innis
Member Author

I'd like to consider shifting the kubelet configuration into the CloudProvider-specific configuration for v1. The more we think about it, the more it seems like this is a property of how the node is brought up and configured, and less about the selection behavior of the node. This also allows CloudProviders to pick and choose what they want to allow users to specify directly through their API. Right now, there isn't any deep coupling with the kubelet in the neutral code; AWS only uses these values through the GetInstanceTypes() call to affect the instance type allocatable.
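A sketch of the direction being described, with the kubelet block living under the provider-specific NodeClass rather than the neutral NodePool; the exact placement and field set would be up to each CloudProvider, and this fragment is purely illustrative:

```yaml
# Fragment of an AWS EC2NodeClass spec (hypothetical v1 placement for kubelet settings).
spec:
  kubelet:
    maxPods: 110
    systemReserved:
      cpu: 100m
      memory: 100Mi
```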

@wmgroot
Contributor

wmgroot commented Feb 29, 2024

This also allows CloudProviders to pick and choose what they want to allow users to directly specify through their API.

Kubelet is cloud-provider-agnostic; maybe EKS should allow users to configure feature gates and other k8s configuration options instead? 🙂

This is the primary reason that CAPI has both XMachineTemplate and KubeadmConfigTemplate CRDs.

@ROunofF

ROunofF commented Mar 1, 2024

I opened a small PR that brings some quality-of-life improvements to kubectl get nodepools.karpenter.sh: #1055. I believe the current change isn't API-breaking, but if we want to take this further, it may require an API change (and I believe this is the laundry list for breaking changes?).

So I'm proposing we look at the output for the different CRDs and adjust accordingly. My change makes it possible to understand at a glance the utilization of the different NodePools in your cluster.

@sftim

sftim commented Mar 11, 2024

We should ensure that the README for https://artifacthub.io/packages/helm/karpenter/karpenter very, very clearly tells people it's out of date.

We could consider publishing a new release of that legacy chart that is almost the same code, but you get a Warning: every time you create or update a Provisioner.

@garvinp-stripe
Contributor

I'd like to consider shifting the kubelet into the CloudProvider-specific configuration for v1.

Not sure where the issue lives, but we should clearly define the responsibility of each CRD Karpenter creates. I think it's fine to move it, but I want to make sure each move isn't just throwing things from one bucket to another.

@jonathan-innis
Member Author

jonathan-innis commented Mar 11, 2024

Kubelet is cloud-provider-agnostic; maybe EKS should allow users to configure feature gates and other k8s configuration options instead

I wish this were the case, but from the cloud providers we've heard from at this point, each CloudProvider is rather opinionated about what can and can't be set.

Not sure where the issue lives, but we should clearly define the responsibility of each CRD Karpenter creates

100% agree. I think we're really talking about high-level node scheduling and selection in the NodePool vs. node-level configuration on the NodeClass side. An addition to the docs on this could be super valuable. Maybe a preamble in https://karpenter.sh/docs/concepts/ that talks about the purpose of each of these APIs.

@sftim

sftim commented Mar 18, 2024

Drop support for kubectl delete node, now that we have kubectl delete nodeclaim instead. While supporting node deletion is convenient for Karpenter users, it teaches them to take an action that's dangerous and disruptive in non-Karpenter environments.

#758 (comment)

We could keep supporting it but emit a Warning: (via a separate ValidatingAdmissionPolicy).

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 16, 2024
@GnatorX

GnatorX commented Jun 25, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jun 25, 2024
@GnatorX

GnatorX commented Jun 25, 2024

Is there any appetite to change all references to cloudprovider to just plain provider (https://github.com/kubernetes-sigs/karpenter/tree/main/pkg/cloudprovider)? Not all providers are cloud providers.

@jonathan-innis
Member Author

Not all providers are cloud providers

Oh, that's interesting. Are you thinking about kwok specifically? In a lot of ways that's just a "fake" cloudprovider. I'm not sure how much value we'd get from changing the naming.

@jonathan-innis
Member Author

Closing this one out, since we've closed out the v1 items that we took on with #1222.

@jonathan-innis
Member Author

/close

@k8s-ci-robot
Contributor

@jonathan-innis: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
