Helm not installing? #28
Ok, I don't think this is exactly on your construct, but I'm seeing a wild problem after installing the latest version. I've set it up to deploy to a cluster. I can review the EKS provider logs and see it running the command:

Running command: ['kubectl', 'apply', '--kubeconfig', '/tmp/kubeconfig', '-f', '/tmp/manifest.yaml', '--prune', '-l', 'aws.cdk.eks/prune-c85936281e947e3c6cf66002393da6e780f6ed634e']
b'provisioner.karpenter.sh/default configured\n'

But when I check the cluster, I see no pods, no deployments, no replicasets. Helm doesn't show anything deployed to the karpenter namespace.
Any ideas why this could be happening?

Comments
Is your EKS cluster running k8s 1.22 by any chance? I had to revert the EKS v1.22 PR in CDK because the kubectl Lambda layer is on 1.20, which is not compatible with 1.22, and updating the layer to 1.22 would break backwards compatibility with the <1.20 clusters out there. aws/aws-cdk#20000
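For context, the kubectl layer an `eks.Cluster` uses can be overridden via the `kubectlLayer` prop rather than relying on the bundled 1.20 layer. A minimal sketch; the layer ARN below is a placeholder, not a real layer:

```ts
import { Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const stack: Stack; // provided by the surrounding CDK app

// The bundled layer ships kubectl 1.20; `kubectlLayer` lets you supply your own.
const kubectlLayer = lambda.LayerVersion.fromLayerVersionArn(
  stack,
  'KubectlLayer',
  'arn:aws:lambda:us-east-1:123456789012:layer:kubectl:1', // placeholder ARN
);

new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_21,
  kubectlLayer,
});
```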
Nope, 1.21.
Would you mind sharing your CDK code snippet for the cluster and Karpenter configuration?

And which version of CDK are you using?

2.22.0
for the cluster: …
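The snippet itself wasn't captured in this thread; below is a representative sketch only, with the `Karpenter` props (`cluster`, `vpc`) assumed from the cdk-karpenter README rather than taken from the reporter's actual code:

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import { Karpenter } from 'cdk-karpenter';

export class KarpenterStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { natGateways: 1 });

    const cluster = new eks.Cluster(this, 'Cluster', {
      vpc,
      version: eks.KubernetesVersion.V1_21, // the version reported above
      defaultCapacity: 2,
    });

    // The construct installs Karpenter via Helm and wires up IAM;
    // the prop names here are assumed from the cdk-karpenter docs.
    new Karpenter(this, 'Karpenter', { cluster, vpc });
  }
}
```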
@mbonig Can you check the contents of this file in your cdk app (…)?
1.20.0 and 3.8.1.
K, I think I might be seeing the issue... looking at the manifest that's getting deployed, it contains the prune label (the same aws.cdk.eks/prune-c85936281e947e3c6cf66002393da6e780f6ed634e label that appears in the kubectl command above), and that same label is passed as the --prune selector in the command being run against the server. I'm not familiar with the prune flag, but reading the docs it sounds like maybe it's pruning the resource it's deploying?
Hmm, actually I'm seeing something different now. I tried taking the manifest I see in my logs and applying it manually, and I'm getting an invalid-format error. Isn't this manifest generated from helm?
I'm getting a different error: …

I wonder if Karpenter itself is set on … I'll have to troubleshoot this further (and possibly revert the instance profile name changes) to see what's going on.
This is generated by this: … I'm not sure when the prune flag gets added.
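For reference, the CDK EKS cluster enables pruning by default: it stamps every applied object with an `aws.cdk.eks/prune-<hash>` label and passes that label as the `--prune` selector, so anything carrying the label but absent from the current manifest gets deleted. A minimal sketch of opting out via the cluster's `prune` prop:

```ts
import { Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';

declare const stack: Stack;

// `prune: true` (the default) is what injects the prune label and the
// `--prune -l <label>` flags seen in the logs above; `prune: false`
// disables that behavior, which can help rule pruning out while debugging.
const cluster = new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_21,
  prune: false,
});
```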
When I compare that manifest to what is templated by helm (running locally), I get a much different deployment. I think I have an idea of what could have happened: I had the original Karpenter deployed, and it took care of all the CRDs. Then I deployed 'v2', which only tries to add the provisioner, since all the other CRDs already existed. Now it's trying to deploy while all the CRDs and other controllers and deployments are gone, so the 'Provisioner' resource on its own isn't enough to succeed. Maybe??
First of all... thanks for helping me troubleshoot the issue. I think we have two different issues here. I think I'll have to revert #27 because …
Also... as a rule of thumb... when I update my clusters, I tend to comment out the cdk-karpenter and provisioner sections, deploy, then un-comment them and apply. From time to time, manifest propagation to the EKS cluster through CDK can be finicky. I wonder if introducing cdk8s to this construct to handle cluster-related operations would be a better fit.
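A sketch of what that could look like, using the cluster's `addCdk8sChart` integration; the chart contents are left as a stub here:

```ts
import { Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as cdk8s from 'cdk8s';

declare const stack: Stack;
declare const cluster: eks.Cluster;

// Define cluster resources as a cdk8s chart, then hand the synthesized
// manifest to the CDK cluster instead of raw addManifest() calls.
const chart = new cdk8s.Chart(new cdk8s.App(), 'KarpenterResources');
// ...add cdk8s ApiObjects (e.g. the Provisioner instance) to `chart` here...
cluster.addCdk8sChart('karpenter-resources', chart);
```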
A few other questions (some of them unrelated): …
If there was an API connection issue, I don't think it'd make it as far in the process as it does. As for the CapacityType: … it's done that way to avoid a name collision on the ECS enum (see the sketch below).
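A purely hypothetical illustration of that kind of rename; the construct's actual enum name and values may differ (Karpenter's documented capacity-type values are 'spot' and 'on-demand'):

```ts
// Hypothetical: a capacity-type enum named so it does not collide with the
// CapacityType enum already exported by the AWS container libraries.
export enum KarpenterCapacityType {
  SPOT = 'spot',
  ON_DEMAND = 'on-demand',
}
```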
K, I'll try this. I was hoping for an easier way forward, because this code will be deployed out to a lot of environments and I don't want reduced capacity along the way. I assume that if I delete Karpenter, it won't necessarily delete the underlying nodes and remove capacity.
Ok, well now I have a whole new problem; should I open a new issue for it? From my Karpenter pod: …

Looking in IAM, I do see the role with the right instanceProfileArn: …
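For what it's worth, instance-profile wiring in CDK generally looks like the sketch below; this is a generic illustration, not the construct's actual code:

```ts
import { Stack } from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';

declare const stack: Stack;

// Role assumed by the nodes Karpenter launches, plus the instance profile
// that Karpenter references by name/ARN in its controller configuration.
const nodeRole = new iam.Role(stack, 'NodeRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
});

new iam.CfnInstanceProfile(stack, 'NodeInstanceProfile', {
  roles: [nodeRole.roleName],
  instanceProfileName: nodeRole.roleName, // name Karpenter is configured with
});
```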
@mbonig Just pinged you on Slack.
Closing this for now. Reopen if needed. |