Karpenter: panic #299
Comments
please try using
After upgrading to v1.11.0 I see a new error: So after that I re-installed Karpenter by setting This recreated everything correctly and my Karpenter pods are now ready. Thank you very much
Upon creating a new node, I am getting more errors, this time in reference to the default instance profile, which seems undefined by the module.
I see this value was set prior to v1.10.0 but is not set in this version. terraform plan:
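For context, Karpenter releases before v0.32 (the v1alpha5 API) take the node instance profile from the chart value settings.aws.defaultInstanceProfile, which appears to be the value the module no longer sets. A minimal sketch of the pre-v0.32 Helm values; every name below is a placeholder for illustration, not something produced by this module:

```yaml
# Helm values for the Karpenter chart < v0.32 (v1alpha5 API).
# All names are placeholders for illustration only.
settings:
  aws:
    clusterName: my-cluster
    clusterEndpoint: https://EXAMPLE1234.gr7.eu-west-1.eks.amazonaws.com
    defaultInstanceProfile: KarpenterNodeInstanceProfile-my-cluster
    interruptionQueueName: Karpenter-my-cluster
```

In v0.32+ (v1beta1) the defaultInstanceProfile setting is removed entirely; the instance profile is instead derived from the role (or instanceProfile) field on the EC2NodeClass, as in the sketch after the migration comment below.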
Have you seen the Karpenter upgrade guide https://karpenter.sh/preview/upgrading/upgrade-guide/, and specifically the section for 0.32.0?
Sadly, no. I didn't see a breaking change in the release notes, so I was not aware I had to. I foolishly assumed it would work out of the box. Now, after reviewing the upgrade guide, I can confirm none of the breaking changes should be affecting the
As per the migration docs you should be able to run both API versions side-by-side to allow for the migration from Provisioner -> NodePool and AWSNodeTemplate -> EC2NodeClass:
Running v1alpha5 alongside v1beta1: Having different Kind names for v1alpha5 and v1beta1 allows them to coexist for the same Karpenter controller for v0.32.x. This gives you time to transition to the new v1beta1 APIs while existing Provisioners and other objects stay in place. Keep in mind that there is no guarantee that the two versions will be able to coexist in future Karpenter versions.
So if that's the case then
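To make the coexistence concrete, a minimal v1beta1 NodePool/EC2NodeClass pair that could run alongside existing v1alpha5 Provisioners on v0.32.x might look like the sketch below; the cluster name, IAM role, and discovery tags are placeholders, not values taken from this module:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        name: default                            # points at the EC2NodeClass below
  limits:
    cpu: 1000
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: KarpenterNodeRole-my-cluster             # placeholder node IAM role
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster       # placeholder discovery tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster       # placeholder discovery tag
```

The nodeClassRef on the NodePool replaces the providerRef on a Provisioner, and the role on the EC2NodeClass replaces the old defaultInstanceProfile setting.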
Description
v1.10.0 introduces a new bug.
Karpenter pods error with:
panic: validating settings, missing field(s): aws.clusterName, aws.clusterName is required
Reverting the module version to 1.9.0 corrects the error.
Versions
Module version [Required]:
v1.10.0
Terraform version:
Terraform v1.4.6
Provider version(s):
Reproduction Code [Required]
Steps to reproduce the behavior:
Upgrade the module version string, and run
terraform plan
Observe the output and note that the required values have been placed incorrectly in the YAML hierarchy:
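The plan output isn't shown here, but the panic message points at the settings layout: Karpenter releases before v0.32 read the cluster name from settings.aws.clusterName, while v0.32+ (v1beta1) reads settings.clusterName. A sketch of the two layouts, with the cluster name as a placeholder:

```yaml
# Karpenter chart < v0.32 (v1alpha5 API) -- cluster name is a placeholder
settings:
  aws:
    clusterName: my-cluster
---
# Karpenter chart >= v0.32 (v1beta1 API)
settings:
  clusterName: my-cluster
```

If the rendered values use one layout while the deployed controller expects the other, the controller fails at startup with the missing aws.clusterName error shown above.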
Expected behaviour
Values should be in the correct place so Karpenter won't immediately panic, and pods should be in a running state after a fresh install.
Actual behaviour
Karpenter pods fail to start with the error:
panic: validating settings, missing field(s): aws.clusterName, aws.clusterName is required
Additional context
I confirmed this problem on both existing and freshly installed clusters.