[Task Manager] adds basic observability into Task Manager's runtime operations #77868

# Task Manager Monitoring

> **Review comment:** finally getting around to looking at this, sorry - I'm going to do an initial "comment" review of this doc, before looking at the code ...

Task Manager has an internal monitoring mechanism which keeps track of a variety of metrics that are exposed via a `health` api endpoint and Kibana Server Log debug messaging.

## Exposed Metrics

There are three different sections to the stats returned by the `health` api:
- `configuration`: Summarizes Task Manager's current configuration.
- `workload`: Summarizes the workload in the current deployment.
- `runtime`: Tracks Task Manager's performance.

### Configuring the Stats

There are four new configurations:

- `xpack.task_manager.monitored_stats_required_freshness` - The _required freshness_ of critical "Hot" stats, which means that if key stats (last polling cycle time, for example) haven't been refreshed within the specified duration, the `_health` endpoint and service will report an `Error` status. By default this is inferred from the configured `poll_interval` and is set to `poll_interval` plus a `1s` buffer.
  > **Review thread:**
  > - A one second buffer seems too small to me - I'd be a little afraid of a Kibana under load still working "fine", but exceeding that, moving to an Error. I wonder if we should have two values, perhaps computed from one config value - a warning and an error. Where warning was using the config value, error used 2x the config value. Presumably the value can't be less than …
  > - Yeah, that's fair - this is just an initial stab.
  > - Yeah it does get interesting. Seems it could follow what the query timeout would be. Some queries against Elasticsearch can take a while to run and should be perfectly normal (part of the "make it slow" project).
- `xpack.task_manager.monitored_aggregated_stats_refresh_rate` - Dictates how often we refresh the "Cold" metrics. These metrics require an aggregation against Elasticsearch and add load to the system, hence we want to limit how often we execute these. We also infer the _required freshness_ of these "Cold" metrics from this configuration, which means that if these stats have not been updated within the required duration then the `_health` endpoint and service will report an `Error` status. This covers the entire `workload` section of the stats. By default this is configured to `60s`, and as a result the _required freshness_ defaults to `61s` (refresh plus a `1s` buffer).
  > **Review thread:**
  > - So this is the rate at which we do the queries, and there is also a required freshness for this? Isn't that freshness then just the latency to make the request and get the response? Also seems like we'd have some kind of timeout on the call, and retry, and log a warning when we had to retry.
  > - That's designed to catch the case where the workload hasn't been updated because something in the mechanism broke.
  > - Think of this as two separate components - one that keeps the monitored stats up to date and one that looks at these monitored stats and assesses the health based on them.
- `xpack.task_manager.monitored_stats_running_average_window` - Dictates the size of the window used to calculate the running average of various "Hot" stats, such as the time it takes to run a task, the _drift_ that tasks experience, etc. These stats are collected throughout the lifecycle of tasks and this window will dictate how large the queue we keep in memory would be, and how many values we need to calculate the average against. We do not calculate the average on *every* new value, but rather only when the time comes to summarize the stats before logging them or returning them to the API endpoint.
- `xpack.task_manager.monitored_task_execution_thresholds` - Configures the threshold of failed task executions at which point the `warn` or `error` health status will be set, either at a default level or a custom level for specific task types. This will allow you to mark the health as `error` when any task type fails 90% of the time, but set it to `error` at 50% of the time for task types that you consider critical. This value can be set to any number between 0 and 100, and a threshold is hit when the value *exceeds* this number. This means that you can avoid setting the status to `error` by setting the threshold at 100, or hit `error` the moment any task fails by setting the threshold to 0 (as it will exceed 0 once a single failure occurs).

For example, a threshold configuration along these lines could be used (a sketch only - the key names and the custom task type shown here are illustrative):
```
xpack.task_manager.monitored_task_execution_thresholds:
  # default thresholds, applied to every task type
  default:
    error_threshold: 70
    warn_threshold: 50
  # per-task-type overrides for task types you consider critical
  custom:
    "alerting:.index-threshold":
      error_threshold: 50
      warn_threshold: 0
```

## Consuming Health Stats

Task Manager exposes a `/api/task_manager/_health` api which returns the _latest_ stats.
Calling this API is designed to be fast and doesn't actually perform any checks; rather, it returns the result of the latest stats in the system, and is designed in such a way that you could call it from an external service on a regular basis without worrying that you'll be adding substantial load to the system.

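For instance, an external watcher could poll the endpoint and flag a degraded status. The TypeScript sketch below assumes Node.js 18+ (for the global `fetch`), a Kibana instance at `http://localhost:5601` with no authentication, and the response shape shown in the example further down - all illustrative assumptions, not part of this PR:

```
// A sketch of an external watcher for the Task Manager health endpoint.
// Hypothetical values: the Kibana URL and the absence of credentials.
interface TaskManagerHealth {
  timestamp: string;
  last_update: string;
  status: string; // 'OK' | 'Warning' | 'Error' as described below
  stats: Record<string, { timestamp: string; status: string }>;
}

async function checkTaskManagerHealth(kibanaUrl = 'http://localhost:5601') {
  const res = await fetch(`${kibanaUrl}/api/task_manager/_health`);
  if (!res.ok) {
    throw new Error(`health endpoint returned HTTP ${res.status}`);
  }
  const health = (await res.json()) as TaskManagerHealth;
  if (health.status !== 'OK') {
    // Report which section degraded; a real monitor might page someone here.
    for (const [section, { status }] of Object.entries(health.stats)) {
      if (status !== 'OK') {
        console.warn(`task manager section "${section}" is ${status}`);
      }
    }
  }
  return health;
}
```
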
Additionally, the metrics are logged out into Task Manager's `DEBUG` logger at a regular cadence (dictated by the Polling Interval).
If you wish to enable DEBUG logging in your Kibana instance, you will need to add the following to your `kibana.yml`:
```
logging:
  loggers:
    - context: plugins.taskManager
      appenders: [console]
      level: debug
```

Please bear in mind that these stats are logged as often as your `poll_interval` configuration, which means this could add substantial noise to your logs.
We would recommend only enabling this level of logging temporarily.

### Understanding the Exposed Stats

As mentioned above, the `health` api exposes three sections: `configuration`, `workload` and `runtime`.
Each section has a `timestamp` and a `status` which indicate when the last update to this section took place and whether the health of this section was evaluated as `OK`, `Warning` or `Error`.

The root has its own `status` which indicates the state of the system overall, as inferred from the `status` of the sections.
An `Error` status in any section will cause the whole system to display as `Error`.
A `Warning` status in any section will cause the whole system to display as `Warning`.
An `OK` status will only be displayed when all sections are marked as `OK`.
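
As a minimal TypeScript sketch of that roll-up rule (illustrative only, not the actual implementation in this PR):

```
type HealthStatus = 'OK' | 'Warning' | 'Error';

// Error in any section wins, then Warning, otherwise OK.
function rollUpStatus(sectionStatuses: HealthStatus[]): HealthStatus {
  if (sectionStatuses.includes('Error')) return 'Error';
  if (sectionStatuses.includes('Warning')) return 'Warning';
  return 'OK';
}

// rollUpStatus(['OK', 'Warning', 'OK']) === 'Warning'
// rollUpStatus(['OK', 'OK', 'OK'])      === 'OK'
```
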
The root `timestamp` is the time at which the summary was exposed (either to the DEBUG logger or the http api) and the `last_update` is the last time any one of the sections was updated.

#### The Configuration Section

The `configuration` section summarizes Task Manager's current configuration, including dynamic configurations which change over time, such as `poll_interval` and `max_workers` which adjust in reaction to changing load on the system.

These are "Hot" stats which are updated whenever a change happens in the configuration. | ||
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Perhaps overkill, but would it be useful to list the registered task types here, and their config? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Hmm, that's an interesting thought. I've been thinking we could have a verbose mode (in the http endpoint) that includes more data and that way we avoid putting too much info into the server log, and we'd only log the stuff that changes. In verbose mode we'd return things like this as well. |
||
|
||
#### The Workload Section

The `workload` section summarizes the workload in the current deployment, listing the tasks in the system, their types and what their current status is.

It includes three sub-sections:
- The number of tasks scheduled in the system, broken down by type and status.
- The number of idle `overdue` tasks, whose `runAt` has expired.
- Execution density in the next minute or so (configurable), which shows how many tasks are scheduled to execute in the scope of each polling interval. This can give us an idea of how much load there is on the current Kibana deployment.

> **Review thread (on the `overdue` count):**
> - the # of failed tasks would be good here as well :-)
> - It's already there :) that's the …
> - Should something be done about …
> - oh, you mean overdue as their retryAt has been reached? 🤔

These are "Cold" stat which are updated at a regular cadence, configured by the `monitored_aggregated_stats_refresh_rate` config. | ||
|
||
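
For reference, the rough shape of this section - inferred from the example output further down, not a formal schema from this PR - could be described with the following TypeScript sketch:

```
// Rough shape of the `workload` section, inferred from the example output below.
interface WorkloadStats {
  count: number; // total number of tasks in the system
  task_types: Record<string, { count: number; status: Record<string, number> }>;
  schedule: Array<[string, number]>; // e.g. ["60s", 1] - frequency of recurring schedules
  overdue: number; // idle tasks whose runAt has expired
  estimated_schedule_density: number[]; // tasks due per upcoming polling interval
}
```
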
#### The Runtime Section

The `runtime` section tracks Task Manager's performance as it runs, making note of task execution time, _drift_ etc.
These include:
- The time it takes a task to run (mean and median, using a configurable running average window, `50` by default)
- The average _drift_ that tasks experience (mean and median, using the same configurable running average window as above). Drift tells us how long after a task's scheduled time it typically executes.
- The polling rate (the timestamp of the last time a polling cycle completed) and the result [`No tasks | Filled task pool | Unexpectedly ran out of workers`] frequency over the past 50 polling cycles (using the same window size as the one used for running averages)
- The `Success | Retry | Failure ratio` by task type. This is different from the workload stats, which tell you what's in the queue but can't keep track of retries and of non-recurring tasks, as they're wiped off the index when completed.

These are "Hot" stats which are updated reactively as Tasks are executed and interacted with. | ||
|
||
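
To illustrate how a bounded running-average window of this kind can work (a TypeScript sketch for explanation only - the actual implementation in this PR may differ): values are pushed into a fixed-size queue as tasks run, and the mean/median are only computed when the stats are summarized for the log or the API.

```
// A sketch of a bounded running-average window (illustrative, not the PR's code).
class RunningWindow {
  private values: number[] = [];
  constructor(private readonly windowSize = 50) {} // monitored_stats_running_average_window

  push(value: number) {
    this.values.push(value);
    if (this.values.length > this.windowSize) {
      this.values.shift(); // drop the oldest value once the window is full
    }
  }

  // Only called when the stats are summarized (logged or returned by the API),
  // not on every new value.
  summarize() {
    const sorted = [...this.values].sort((a, b) => a - b);
    const mean = sorted.reduce((sum, v) => sum + v, 0) / (sorted.length || 1);
    const median = sorted[Math.floor(sorted.length / 2)] ?? 0;
    return { mean, median };
  }
}
```
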
### Example Stats

> **Review thread:**
> - So I'm curious (again, without looking at the code yet!) how much of the …
> - Yeah, I actually intended on doing that and forgot :)
> - This isn't about … Long term, we also want to shift the Health Status based on these things, but for now, in order to reach GA, this is all about getting some observability into that black box.

For example, if you _curl_ the `/api/task_manager/_health` endpoint, you might get these stats:
```
{
  /* the time these stats were returned by the api */
  "timestamp": "2020-10-05T18:26:11.346Z",
  /* the overall status of the system */
  "status": "OK",
  /* last time any stat was updated in this output */
  "last_update": "2020-10-05T17:57:55.411Z",
  "stats": {
    "configuration": { /* current configuration of TM */
      "timestamp": "2020-10-05T17:56:06.507Z",
      "status": "OK",
      "value": {
        "max_workers": 10,
        "poll_interval": 3000,
        "request_capacity": 1000,
        "max_poll_inactivity_cycles": 10,
        "monitored_aggregated_stats_refresh_rate": 60000,
        "monitored_stats_running_average_window": 50
      }
    },
    "workload": { /* The workload of this deployment */
      "timestamp": "2020-10-05T17:57:06.534Z",
      "status": "OK",
      "value": {
        "count": 6, /* count of tasks in the system */
        "task_types": { /* what tasks are there and what status are they in */
          "actions_telemetry": {
            "count": 1,
            "status": {
              "idle": 1
            }
          },
          "alerting_telemetry": {
            "count": 1,
            "status": {
              "idle": 1
            }
          },
          "apm-telemetry-task": {
            "count": 1,
            "status": {
              "idle": 1
            }
          },
          "endpoint:user-artifact-packager": {
            "count": 1,
            "status": {
              "idle": 1
            }
          },
          "lens_telemetry": {
            "count": 1,
            "status": {
              "idle": 1
            }
          },
          "session_cleanup": {
            "count": 1,
            "status": {
              "idle": 1
            }
          }
        },

        /* Frequency of recurring tasks schedules */
        "schedule": [
          ["60s", 1], /* 1 task, every 60s */
          ["3600s", 3], /* 3 tasks every hour */
          ["720m", 1]
        ],
        /* There are no overdue tasks in this system at the moment */
        "overdue": 0,
        /* This is the schedule density, it shows a histogram of all the polling intervals in the next minute (or, if
           pollInterval is configured unusually high it will show a min of 2 refresh intervals into the future, and a max of 50 buckets).
           Here we see that on the 3rd polling interval from *now* (which is ~9 seconds from now, as pollInterval is `3s`) there is one task due to run.
           We also see that there are 5 due two intervals later, which is fine as we have a `max_workers` of `10`
        */
        "estimated_schedule_density": [0, 0, 1, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
      }
    },
    "runtime": {
      "timestamp": "2020-10-05T17:57:55.411Z",
      "status": "OK",
      "value": {
        "polling": {
          /* When was the last polling cycle? */
          "last_successful_poll": "2020-10-05T17:57:55.411Z",
          /* What is the frequency of polling cycle results?
             Here we see 94% of "NoTasksClaimed" and 6% "PoolFilled" */
          "result_frequency_percent_as_number": {
            "NoTasksClaimed": 94,
            "RanOutOfCapacity": 0, /* This is a legacy result, we might want to rename - it tells us when a polling cycle resulted in claiming more tasks than we had workers for, but the name doesn't make much sense outside of the context of the code */
            "PoolFilled": 6
          }
        },
        /* on average, the tasks in this deployment run 1.7s after their scheduled time */
        "drift": {
          "mean": 1720,
          "median": 2276
        },
        "execution": {
          "duration": {
            /* on average, the `endpoint:user-artifact-packager` tasks take 15ms to run */
            "endpoint:user-artifact-packager": {
              "mean": 15,
              "median": 14.5
            },
            "session_cleanup": {
              "mean": 28,
              "median": 28
            },
            "lens_telemetry": {
              "mean": 100,
              "median": 100
            },
            "actions_telemetry": {
              "mean": 135,
              "median": 135
            },
            "alerting_telemetry": {
              "mean": 197,
              "median": 197
            },
            "apm-telemetry-task": {
              "mean": 1347,
              "median": 1347
            }
          },
          "result_frequency_percent_as_number": {
            /* and 100% of `endpoint:user-artifact-packager` runs have completed successfully (within the running average window, so the past 50 runs by default, configurable by `monitored_stats_running_average_window`) */
            "endpoint:user-artifact-packager": {
              "status": "OK",
              "Success": 100,
              "RetryScheduled": 0,
              "Failed": 0
            },
            "session_cleanup": {
              /* `error` status as 90% of results are `Failed` */
              "status": "error",
              "Success": 5,
              "RetryScheduled": 5,
              "Failed": 90
            },
            "lens_telemetry": {
              "status": "OK",
              "Success": 100,
              "RetryScheduled": 0,
              "Failed": 0
            },
            "actions_telemetry": {
              "status": "OK",
              "Success": 100,
              "RetryScheduled": 0,
              "Failed": 0
            },
            "alerting_telemetry": {
              "status": "OK",
              "Success": 100,
              "RetryScheduled": 0,
              "Failed": 0
            },
            "apm-telemetry-task": {
              "status": "OK",
              "Success": 100,
              "RetryScheduled": 0,
              "Failed": 0
            }
          }
        }
      }
    }
  }
}
```
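
To make the `estimated_schedule_density` histogram above easier to read, here is a small illustrative TypeScript helper (an assumption-laden sketch, not part of Task Manager's API) that follows the interpretation given in the comment above - bucket `i` holds tasks due roughly on the `(i + 1)`-th polling interval from now:

```
// A sketch for reading the schedule density histogram (illustrative only).
function describeScheduleDensity(density: number[], pollIntervalMs: number): string[] {
  return density
    .map((taskCount, bucket) => ({
      taskCount,
      inSeconds: ((bucket + 1) * pollIntervalMs) / 1000,
    }))
    .filter(({ taskCount }) => taskCount > 0)
    .map(({ taskCount, inSeconds }) => `${taskCount} task(s) due ~${inSeconds}s from now`);
}

// With the example above (poll_interval of 3000ms):
//   describeScheduleDensity([0, 0, 1, 0, 5, 0 /* ... */], 3000)
//   => ['1 task(s) due ~9s from now', '5 task(s) due ~15s from now']
```
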

> **Review thread (on the type changes):**
> - are these changes needed? I'm wondering if we will need to pre-req the apm plugin, if not now, in some future where the compilation depends on plugin deps.
> - We are investigating moving these types into the JS client (see #77720). No timeframe on that, but worth considering. I'm always in favor of explicitly declaring dependencies, though in this case we would create a circular one since APM has Task Manager as an optional dependency. As for the type changes here I'd like to defer to @dgieselaar on that one before approving.
> - Yeah I wasn't crazy about this, but checked with @dgieselaar and he gave a thumbsup. He said he'd review on the PR 👍