
client: emit optional telemetry from prerun and prestart hooks. #24556

Merged: 3 commits merged into main on Dec 12, 2024

Conversation

@jrasell jrasell (Member) commented on Nov 27, 2024

Description

The Nomad client can now optionally emit telemetry data from the prerun and prestart hooks. This allows operators to monitor and alert on hook failures and the time taken for hooks to complete.

The new datapoints are:

  • nomad.client.alloc_hook.prerun.success (counter)

  • nomad.client.alloc_hook.prerun.failed (counter)

  • nomad.client.alloc_hook.prerun.elapsed (sample)

  • nomad.client.task_hook.prestart.success (counter)

  • nomad.client.task_hook.prestart.failed (counter)

  • nomad.client.task_hook.prestart.elapsed (sample)

The hook execution time is useful to Nomad engineering: it will help us optimize code where possible and understand how job specifications impact hook performance.

Currently only the PreRun and PreStart hooks have telemetry enabled, which limits the number of new metrics being produced.
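
For reference, here is a minimal sketch (not this PR's implementation) of how a hook stats handler could emit these datapoints with the go-metrics library that Nomad's telemetry builds on; the hookStats type, its baseLabels field, and the Emit signature are illustrative assumptions, and the nomad. prefix is added by the global metrics configuration rather than by the key itself.

// Illustrative sketch only; names below are assumptions, not the PR's code.
package telemetry

import (
	"time"

	"github.com/armon/go-metrics"
)

type hookStats struct {
	baseLabels []metrics.Label // for example, a label naming the hook
}

// Emit records a success or failure counter and an elapsed-time sample for a
// single prerun hook execution that started at the given time.
func (h *hookStats) Emit(start time.Time, hookErr error) {
	if hookErr != nil {
		metrics.IncrCounterWithLabels(
			[]string{"client", "alloc_hook", "prerun", "failed"}, 1, h.baseLabels)
	} else {
		metrics.IncrCounterWithLabels(
			[]string{"client", "alloc_hook", "prerun", "success"}, 1, h.baseLabels)
	}
	metrics.MeasureSinceWithLabels(
		[]string{"client", "alloc_hook", "prerun", "elapsed"}, start, h.baseLabels)
}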

Testing & Reproduction steps

I tested on a Debian 12 arm64 machine using a single agent in server/client mode with the setup below.

  1. Run the Nomad agent.
  2. Run the promana monitoring job.
  3. Register the sleep batch parameterised job.
  4. Run a loop to continually dispatch the job and generate metrics: while true; do nomad job dispatch sleep; sleep 2; done
  5. Navigate to Prometheus and Grafana to look at the metrics generated.
promana monitoring job spec:
job "promana" {
  group "promana" {
    network {
      mode = "bridge"
      port "prometheus" {
        to = 9090
      }
      port "grafana" {
        to = 3000
      }
    }

    service {
      name     = "prometheus-server"
      port     = "prometheus"
      provider = "nomad"
    }
    service {
      name     = "grafana-server"
      port     = "grafana"
      provider = "nomad"
    }

    task "prometheus" {
      driver = "docker"
      config {
        image = "prom/prometheus:v3.0.0"
        ports = ["prometheus"]
        args  = [
          "--config.file=${NOMAD_TASK_DIR}/config/prometheus.yml",
          "--storage.tsdb.path=/prometheus",
          "--web.listen-address=0.0.0.0:9090",
          "--web.console.libraries=/usr/share/prometheus/console_libraries",
          "--web.console.templates=/usr/share/prometheus/consoles",
        ]

        volumes = [
          "local/config:/etc/prometheus/config",
        ]
      }

      template {
        data = <<EOH
---
global:
  scrape_interval:     1s
  evaluation_interval: 1s

scrape_configs:
  - job_name: "nomad"
    metrics_path: "/v1/metrics"
    scheme: "http"
    params:
      format:
        - "prometheus"
    static_configs:
      - targets:
        - {{ env "attr.unique.network.ip-address" }}:4646
EOH

        change_mode   = "signal"
        change_signal = "SIGHUP"
        destination   = "local/config/prometheus.yml"
      }

      resources {
        cpu    = 500
        memory = 512
      }
    }

    task "grafana" {
      driver = "docker"

      config {
        image   = "grafana/grafana:11.3.1"
        volumes = [
          "local/datasources:/etc/grafana/provisioning/datasources",
        ]
      }

      template {
        data = <<EOH
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://0.0.0.0:9090
  isDefault: true
  version: 1
  editable: false
EOH

        destination = "local/datasources/datasources.yaml"
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}
sleep job spec:
job "sleep" {
  type = "batch"
  parameterized {}
  group "sleep" {
    network {
      mode = "bridge"
    }
    task "sleep" {
      driver = "docker"
      config {
        image = "busybox"
        args  = [
          "sleep",
          "5s",
        ]
      }
      resources {
        cpu    = 50
        memory = 78 
      }
    }
  }
}

Links

Jira: https://hashicorp.atlassian.net/browse/NET-11237
RFC: https://go.hashi.co/rfc/nmd-206

Contributor Checklist

  • Changelog Entry If this PR changes user-facing behavior, please generate and add a
    changelog entry using the make cl command.
  • Testing Please add tests to cover any new functionality or to demonstrate bug fixes and
    ensure regressions will be caught.
  • Documentation If the change impacts user-facing functionality such as the CLI, API, UI,
    and job configuration, please update the Nomad website documentation to reflect this. Refer to
    the website README for docs guidelines. Please also consider whether the
    change requires notes within the upgrade guide.

Reviewer Checklist

  • Backport Labels Please add the correct backport labels as described by the internal
    backporting document.
  • Commit Type Ensure the correct merge method is selected which should be "squash and merge"
    in the majority of situations. The main exceptions are long-lived feature branches or merges where
    history should be preserved.
  • Enterprise PRs If this is an enterprise only PR, please add any required changelog entry
    within the public repository.

Comment on lines +192 to +199
// If the operator has disabled hook metrics, then don't call the time
// function to save 30ns per hook.
var hookExecutionStart time.Time

if !ar.clientConfig.DisableAllocationHookMetrics {
	hookExecutionStart = time.Now()
}

@shoenig shoenig (Member) commented Dec 3, 2024

Just a thought, but what if the HookStatsHandler interface also required a Start() implementation that set the start time, which would be kept as internal state of the handler (and would be a no-op for the noop handler).

Might keep some of that implementation detail out of this alloc runner hook code.
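
A rough sketch of that shape, purely to illustrate the suggestion (the Start and Emit signatures, the concrete type names, and the package are assumptions rather than code from this branch):

// Illustrative sketch only; not code from this branch.
package client

import "time"

type HookStatsHandler interface {
	// Start records the hook start time as internal state of the handler.
	Start()
	// Emit records the outcome counters and the elapsed time since Start.
	Emit(err error)
}

type hookStatsHandler struct {
	start time.Time
}

func (h *hookStatsHandler) Start() { h.start = time.Now() }

func (h *hookStatsHandler) Emit(err error) {
	// Emit success/failed counters and an elapsed sample measured from h.start.
}

// The noop handler satisfies the interface without doing any work.
type noopHookStatsHandler struct{}

func (noopHookStatsHandler) Start()         {}
func (noopHookStatsHandler) Emit(err error) {}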

@jrasell jrasell (Member Author) replied:

I did think of that while writing the code.

I decided against it as all the hooks that emit telemetry use a single HookStatsHandler within the task or alloc runner. It would need to assume each hook is called without any concurrency, and blindly assume the stored start time belongs to the hook that is calling Emit. This felt wrong to me.

I also considered having a HookStatsHandler implementation per hook call, but did not go down this route in order to avoid the overhead of setting this up per hook and the memory overhead of the additional label setup.

Let me know your thoughts; I am happy to change the approach if we feel the advantages are worth it and our assumptions are correct.

@shoenig shoenig (Member) left a comment

LGTM!

@jrasell jrasell merged commit 7d48aa2 into main Dec 12, 2024
28 checks passed
@jrasell jrasell deleted the f-NET-11237 branch December 12, 2024 14:43
Labels
backport/1.9.x backport to 1.9.x release line