
Helm chart: expanderPriorities in values are object but string expected in template #3776

Closed
eddycharly opened this issue Dec 19, 2020 · 20 comments
Labels
area/cluster-autoscaler kind/bug Categorizes issue or PR as related to a bug.

Comments

@eddycharly
Member

In values.yaml, it looks like expanderPriorities should be passed as an object:

expanderPriorities: {}

But the template expects it to be a string:

{{ .Values.expanderPriorities | indent 4 }}

@eddycharly eddycharly added the kind/bug Categorizes issue or PR as related to a bug. label Dec 19, 2020
@eddycharly eddycharly changed the title Helm chart : expanderPriorities in values are object but string expected Helm chart : expanderPriorities in values are object but string expected in template Dec 19, 2020
@brennerm

For anyone encountering this: a simple workaround is to use a YAML literal:

expanderPriorities: |-
  10:
    - asg10
  0:
    - asg0

@eddycharly
Member Author

@brennerm why not change the template to:

 {{- toYaml .Values.expanderPriorities | nindent 4 }}

instead?

@brennerm

brennerm commented Feb 1, 2021

@eddycharly Because the chart expects the value to be a string. Try passing the YAML object and it'll fail.

@eddycharly
Member Author

@brennerm yeah sure, I meant changing the template (not the values).

That is, replacing

{{ .Values.expanderPriorities | indent 4 }}

with:

 {{- toYaml .Values.expanderPriorities | nindent 4 }}

And then use an object in the values instead of a string literal:

expanderPriorities:
  10:
    - asg10
  0:
    - asg0

I feel like an object would be more natural than a string literal in this case.

@brennerm

brennerm commented Feb 1, 2021

Ahh, yeah that would fix it but would introduce a breaking change. 👍

@eddycharly
Member Author

Unfortunately yes.
Not sure if it is possible to check whether the value is a string in the template and then adapt to support both a string and a map.
That would allow changing the template while preserving backward compatibility.
Will check that tomorrow.
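For illustration, such a check could use Sprig's kindIs function to branch on the value's type; this is only a minimal sketch, not the chart's actual implementation:

{{- /* Render the existing string format unchanged; otherwise serialize the map to YAML. */}}
{{- if kindIs "string" .Values.expanderPriorities }}
{{ .Values.expanderPriorities | indent 4 }}
{{- else }}
{{ toYaml .Values.expanderPriorities | indent 4 }}
{{- end }}

This would keep the current string format working while also accepting a map in the values.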

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2021
@stevehipwell
Contributor

Either the docs or the template needs updating so this is consistent.

@stevehipwell
Contributor

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 9, 2021
@jbilliau-rcd

Many thanks @brennerm, I've spent 2 hours trying to figure out the syntax for this since it's not documented. Your suggestion of the YAML literal fixed it for me.

@bwhaley
Contributor

bwhaley commented Sep 9, 2021

Just noticed this issue after opening the PR! Any chance of cutting a backward-compatible release to fix this? My use case is to use the chart with the Terraform Helm provider. The code looks like this:

variable "expander_priorities" {
  description = "If scaling_strategy is set to 'priority', you can use this variable to define cluster-autoscaler-priority-expander priorities. See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md"
  type        = map(list(string))
  default     = {}
  # Example:
  #
  # expander_priorities = {
  #   10 = [".*"]
  #   50 = ["foo.*"]
  # }
}

variable "priority_config_map_annotations" {
  description = "If scaling_strategy is set to 'priority', you can use this to specify annotations to add to the cluster-autoscaler-priority-expander ConfigMap."
  type        = map(string)
  default     = {}
}

resource "helm_release" "k8s_autoscaler" {
  repository = "https://kubernetes.github.io/autoscaler"
  name       = local.release_name
  chart      = local.chart_name
  version    = var.cluster_autoscaler_chart_version
  namespace  = local.chart_namespace

  values = [yamlencode(local.chart_values)]

  depends_on = [
    aws_eks_fargate_profile.cluster_autoscaler,
    null_resource.dependency_getter,
  ]
}

locals {
  chart_values = merge(
    {
      ... <snip> ...
      expanderPriorities           = var.expander_priorities
      priorityConfigMapAnnotations = var.priority_config_map_annotations

      extraArgs = merge(
        {
          expander                    = var.scaling_strategy
          balance-similar-node-groups = "true"
        },
        var.container_extra_args,
      )
    },
      ... <snip> ...  
  )
}

It's a nice clean interface with types for expander_priorities and priority_config_map_annotations. To make it work with this workaround, I think I'll have to make expander_priorities a string and pass in the workaround as a heredoc, explain it in the docs, etc. For example:

variable "expander_priorities" {
  description = "If scaling_strategy is set to 'priority', you can use this variable to define cluster-autoscaler-priority-expander priorities. See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md"
  type        = string
  default     = "
  # Example:
  #
  # expander_priorities = <<EOF
  #   10:
  #     -  ".*"
  #   50: 
  #     - "foo.*"
  # EOF
}

I think this will work, but I'm not sure yet, and anyway it's not ideal as it will be easy to pass in the wrong value.

WDYT about toYaml?

@stevehipwell
Contributor

I do this in Terraform at the moment and just capture it as a string.

locals {
  expander_config = <<-EOT
    100:
      - .*-spot-.*
    10:
      - .*
  EOT
}

I stand by my comment above that either the docs or the implementation needs changing; it wouldn't be too complicated to change the implementation to support both formats (see example).
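For reference, a sketch of what supporting both formats could look like in the ConfigMap template, using the same kindIs check sketched earlier. This is illustrative only: it assumes the priority expander's documented ConfigMap name and priorities key, not the chart's exact manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
data:
  priorities: |-
{{- if kindIs "string" .Values.expanderPriorities }}
{{ .Values.expanderPriorities | indent 4 }}
{{- else }}
{{ toYaml .Values.expanderPriorities | indent 4 }}
{{- end }}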

@bwhaley
Contributor

bwhaley commented Sep 10, 2021

Good point, @stevehipwell! Updated my PR with your idea.

I hope this will be merged. I burned a couple of hours on this issue.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@bwhaley
Contributor

bwhaley commented Dec 14, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@gjtempleton
Member

I believe this can be closed given your PR has been merged and the new version of the chart released. Any objection, @bwhaley?

@bwhaley
Contributor

bwhaley commented Dec 24, 2021

Yup, agreed!

@gjtempleton
Member

/close

@k8s-ci-robot
Contributor

@gjtempleton: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
