
mixed_instances_policy is not working #1807

Closed
shaibs3 opened this issue Jan 24, 2022 · 6 comments · Fixed by #1808

Comments

@shaibs3 commented Jan 24, 2022:

Description

I am trying to create a self-managed node group with the latest version of the EKS module, 18.2.2, using mixed_instances_policy:

mixed_instances_policy = {
  instances_distribution = {
    on_demand_base_capacity                  = "1"
    on_demand_percentage_above_base_capacity = "100"
    on_demand_allocation_strategy            = "prioritized"
  }
  override = [
    { instance_type = "m5.2xlarge" },
    { instance_type = "c5.2xlarge" },
  ]
}

along with use_mixed_instances_policy = true. Unfortunately, the plan does not show that the policy was created.

Looking at the code in node_groups.tf, in module "self_managed_node_group" the variable mixed_instances_policy is never passed down to the sub-module, which is why it cannot work. Adding the line

mixed_instances_policy = try(each.value.mixed_instances_policy, var.self_managed_node_group_defaults.mixed_instances_policy, null)

to node_groups.tf should fix the issue.
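For context, a minimal sketch of where that line would sit in the module "self_managed_node_group" block in node_groups.tf (the for_each expression and the surrounding arguments here are illustrative, not the module's exact code):

module "self_managed_node_group" {
  source = "./modules/self-managed-node-group"

  for_each = var.self_managed_node_groups  # illustrative; the real module iterates over the node group map

  # ... the other node group inputs are forwarded with the same try() pattern ...
  use_mixed_instances_policy = try(each.value.use_mixed_instances_policy, var.self_managed_node_group_defaults.use_mixed_instances_policy, false)

  # proposed addition: forward the policy like the other inputs
  mixed_instances_policy = try(each.value.mixed_instances_policy, var.self_managed_node_group_defaults.mixed_instances_policy, null)
}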

Versions

  • Terraform: 1.1.4
  • Provider(s):
  • Module: eks 18.2.2


@daroga0002 (Contributor) commented:

Please paste the Terraform module configuration you are using.

@nickvanwegen commented:

I seem to have the same issue.

I am using the following Terraform config:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "18.2.1"
  cluster_version = local.cluster_version
  cluster_name    = local.name
  vpc_id          = var.vpc_id
  subnet_ids      = var.subnets

  iam_role_name = "test"

  enable_irsa = true

  self_managed_node_group_defaults = local.self_managed_node_group_defaults
  self_managed_node_groups         = local.self_managed_node_group
}
The locals below are a copy of the example given in the GitHub repo:

  self_managed_node_group = {
    one = {
      name = "spot-1"

      public_ip    = true
      max_size     = 5
      desired_size = 2

      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          on_demand_base_capacity                  = 0
          on_demand_percentage_above_base_capacity = 10
          spot_allocation_strategy                 = "capacity-optimized"
        }

        override = [
          {
            instance_type     = "m5.large"
            weighted_capacity = "1"
          },
          {
            instance_type     = "m6i.large"
            weighted_capacity = "2"
          },
        ]
      }

      pre_bootstrap_user_data = <<-EOT
      echo "foo"
      export FOO=bar
      EOT

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      post_bootstrap_user_data = <<-EOT
      cd /tmp
      sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
      sudo systemctl enable amazon-ssm-agent
      sudo systemctl start amazon-ssm-agent
      EOT
    }
  }

The output seems to show that the input is fine, matching the examples:


self_managed_node_groups = {
  "one" = {
    "bootstrap_extra_args" = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
    "desired_size" = 2
    "max_size" = 5
    "mixed_instances_policy" = {
      "instances_distribution" = {
        "on_demand_base_capacity" = 0
        "on_demand_percentage_above_base_capacity" = 10
        "spot_allocation_strategy" = "capacity-optimized"
      }
      "override" = [
        {
          "instance_type" = "m5.large"
          "weighted_capacity" = "1"
        },
        {
          "instance_type" = "m6i.large"
          "weighted_capacity" = "2"
        },
      ]
    }
    "name" = "spot-1"
    "post_bootstrap_user_data" = <<-EOT
    cd /tmp
    sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
    sudo systemctl enable amazon-ssm-agent
    sudo systemctl start amazon-ssm-agent
    
    EOT
    "pre_bootstrap_user_data" = <<-EOT
    echo "foo"
    export FOO=bar
    
    EOT
    "public_ip" = true
    "use_mixed_instances_policy" = true
  }
}
self_managed_node_groups_defaults = {
  "ami_id" = "ami-02b3f04ab50ffd9f1"
  "block_device_mappings" = {
    "xvda" = {
      "device_name" = "/dev/xvda"
      "ebs" = {
        "encrypted" = true
        "iops" = "3000"
        "volume_size" = 25
        "volume_type" = "gp3"
      }
    }
  }
  "bootstrap_extra_args" = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
  "cluster_name" = "poc-kajuuc0e"
  "create_launch_template" = true
  "desired_size" = "1"
  "force_delete" = true
  "iam_role_additional_policies" = [
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
  ]
  "launch_template_name" = "poc-kajuuc0e"
  "launch_template_use_name_prefix" = true
  "max_size" = "5"
  "min_size" = "1"
  "name" = "poc"
  "pre_bootstrap_user_data" = <<-EOT
  export CONTAINER_RUNTIME="containerd"
  export USE_MAX_PODS=false
  
  EOT
  "propagate_tags" = [
    {
      "key" = "k8s.io/cluster-autoscaler/enabled"
      "propagate_at_launch" = "true"
      "value" = "true"
    },
    {
      "key" = "k8s.io/cluster-autoscaler/poc-kajuuc0e"
      "propagate_at_launch" = "true"
      "value" = "poc-kajuuc0e"
    },
  ]
  "protect_from_scale_in" = true
  "update_default_version" = true
  "use_mixed_instances_policy" = true
  "use_name_prefix" = true
}


However, when deployed to AWS I see the following ASG. As you can see, there is 0% spot, and spot_allocation_strategy is not set.
(screenshot: ASG purchase options showing 0% spot and no spot allocation strategy)
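For comparison, with the config above the plan should render a mixed-instances policy on the autoscaling group roughly like this (a sketch against the aws_autoscaling_group resource schema; the resource name, subnet variable, and launch template reference are illustrative):

resource "aws_autoscaling_group" "this" {
  name                = "spot-1"
  min_size            = 1
  max_size            = 5
  desired_capacity    = 2
  vpc_zone_identifier = var.subnets  # illustrative

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 0
      on_demand_percentage_above_base_capacity = 10
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.this.id  # illustrative reference
        version            = "$Latest"
      }

      override {
        instance_type     = "m5.large"
        weighted_capacity = "1"
      }

      override {
        instance_type     = "m6i.large"
        weighted_capacity = "2"
      }
    }
  }
}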

It could be that I am missing something, but it sure seems like @shaibs3 has a point.

@bryantbiggs (Member) commented:

Yes, @shaibs3 is correct - I've opened a PR with the fix and am just going to double-check the rest of the variables between the root module and the sub-module. Thanks for reporting, @shaibs3!

@bryantbiggs (Member) commented:

OK, looks good with the fix in:
(screenshot: result after the fix)

@antonbabenko (Member) commented:

This issue has been resolved in version 18.2.3 🎉
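To pick up the fix, pin the module to at least that release, e.g.:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = ">= 18.2.3"  # contains the fix from #1808
  # ...
}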

@github-actions (bot) commented:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

The github-actions bot locked this issue as resolved and limited the conversation to collaborators on Nov 15, 2022.