Error from server: failed to prune fields: failed add back owned items: failed to convert pruned object at version karpenter.sh/v1: #6824
My understanding from the reproduction steps is that I should be able to reproduce this by applying the provided NodePool on
I noticed that the error only occurs if we use `kubectl apply --server-side`.
Using client-side apply mitigated the issue for us. It's not perfect for our GitOps solution, though.
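For reference, the two apply modes mentioned above differ only in one flag. This is a minimal sketch; the filename `nodepool.yaml` is an assumption, not taken from the report:

```shell
# Server-side apply: the API server merges the manifest and tracks field
# ownership. With the Karpenter v1 conversion webhook in place, this is the
# mode that triggers the "failed to prune fields" error.
kubectl apply --server-side -f nodepool.yaml

# Client-side apply: kubectl computes the patch locally using the
# last-applied-configuration annotation. This worked around the error for us.
kubectl apply -f nodepool.yaml
```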
Hi!

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  annotations:
    compatibility.karpenter.sh/v1beta1-kubelet-conversion: '{"clusterDNS":["x.x.x.x"]}'
    compatibility.karpenter.sh/v1beta1-nodeclass-reference: '{"kind":"EC2NodeClass","name":"bottlerocket","apiVersion":"karpenter.k8s.aws/v1beta1"}'
  labels:
    kustomize.toolkit.fluxcd.io/name: karpenter-node-pool
    kustomize.toolkit.fluxcd.io/namespace: karpenter
  name: default-ondemand-amd64
spec:
  disruption:
    budgets:
    - nodes: 10%
    consolidateAfter: 0s
    consolidationPolicy: WhenEmptyOrUnderutilized
  limits:
    cpu: "100"
  template:
    spec:
      expireAfter: 720h
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: bottlerocket
      requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
        - on-demand
      - key: kubernetes.io/arch
        operator: In
        values:
        - amd64
      - key: karpenter.k8s.aws/instance-category
        operator: In
        values:
        - c
      - key: karpenter.k8s.aws/instance-family
        operator: In
        values:
        - c5a
        - c6a
      - key: karpenter.k8s.aws/instance-cpu
        operator: In
        values:
        - "4"
        - "8"
        - "16"
      startupTaints:
      - effect: NoExecute
        key: node.cilium.io/agent-not-ready
```

This results in the following error during apply:
And the following traceback on the karpenter controller:
Related: #6867
What should be done?
Closing this issue as a duplicate of #6867. Please follow the progress of this issue there.
Description
Observed Behavior:
We've migrated from Karpenter 0.37.1 to 1.0.0. Now when I apply a NodePool, the Karpenter pod logs the following error:
Kubectl logs the following error:
Here is the NodePool I want to apply:
Expected Behavior:
NodePool gets applied without an error.
Reproduction Steps (Please include YAML):
Apply the YAML from above with 0.37.1, then reapply it with 1.0.0.
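The reproduction steps above can be sketched as a command sequence. The chart location and the filename `nodepool.yaml` are assumptions for illustration, not taken from the report:

```shell
# Install Karpenter 0.37.1 and apply the NodePool manifest.
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter --version 0.37.1
kubectl apply -f nodepool.yaml

# Upgrade to 1.0.0, then reapply the same manifest with server-side apply;
# the "failed to prune fields" error appears at this step.
helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter --version 1.0.0
kubectl apply --server-side -f nodepool.yaml
```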
Versions:
- Kubernetes Version (`kubectl version`):
- Chart Version: 1.0.0