adjust line numbers in docs/components/receive.md to match updated code
Signed-off-by: Remi Vichery <[email protected]>
rvichery committed Dec 12, 2024
1 parent 4444cc2 commit 057fc42
Showing 1 changed file with 54 additions and 62 deletions.
116 changes: 54 additions & 62 deletions docs/components/receive.md
@@ -34,13 +34,13 @@ In order to enable this mode, you can use the `receive.replication-protocol=capnproto`

```json
[
-    {
-        "endpoints": [
-            {"address": "node-1:10901", "capnproto_address": "node-1:19391"},
-            {"address": "node-2:10901", "capnproto_address": "node-2:19391"},
-            {"address": "node-3:10901", "capnproto_address": "node-3:19391"}
-        ]
-    }
+  {
+    "endpoints": [
+      { "address": "node-1:10901", "capnproto_address": "node-1:19391" },
+      { "address": "node-2:10901", "capnproto_address": "node-2:19391" },
+      { "address": "node-3:10901", "capnproto_address": "node-3:19391" }
+    ]
+  }
]
```

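For illustration only, here is a minimal Go sketch (not part of Thanos) that parses a hashring file of the shape shown above; the struct fields simply mirror the JSON keys, including the extra `capnproto_address`:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Endpoint mirrors one entry of the hashring file shown above.
type Endpoint struct {
	Address          string `json:"address"`
	CapNProtoAddress string `json:"capnproto_address"`
}

// Hashring mirrors one hashring definition in the file.
type Hashring struct {
	Endpoints []Endpoint `json:"endpoints"`
}

func main() {
	raw := `[
  {
    "endpoints": [
      { "address": "node-1:10901", "capnproto_address": "node-1:19391" },
      { "address": "node-2:10901", "capnproto_address": "node-2:19391" },
      { "address": "node-3:10901", "capnproto_address": "node-3:19391" }
    ]
  }
]`

	var hashrings []Hashring
	if err := json.Unmarshal([]byte(raw), &hashrings); err != nil {
		log.Fatalf("invalid hashring file: %v", err)
	}
	for _, hr := range hashrings {
		for _, ep := range hr.Endpoints {
			fmt.Printf("gRPC: %s, Cap'n Proto: %s\n", ep.Address, ep.CapNProtoAddress)
		}
	}
}
```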
@@ -82,7 +82,7 @@ The example of `remote_write` Prometheus configuration:

```yaml
remote_write:
-- url: http://<thanos-receive-container-ip>:10908/api/v1/receive
+  - url: http://<thanos-receive-container-ip>:10908/api/v1/receive
```
where `<thanos-receive-container-ip>` is an IP address reachable by the Prometheus server.
Expand Down Expand Up @@ -121,13 +121,9 @@ The example content of `hashring.json`:

```json
[
-    {
-        "endpoints": [
-            "127.0.0.1:10907",
-            "127.0.0.1:11907",
-            "127.0.0.1:12907"
-        ]
-    }
+  {
+    "endpoints": ["127.0.0.1:10907", "127.0.0.1:11907", "127.0.0.1:12907"]
+  }
]
```

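As a purely conceptual sketch of how such a list of endpoints is used, the following Go snippet hashes a series key onto one of the configured endpoints. This is a simplified hashmod-style illustration, not Thanos's actual hashing implementation (which supports more than one algorithm):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickEndpoint is a simplified, hashmod-style illustration: hash a key that
// identifies the series (here tenant plus labels) and map it onto one of the
// configured endpoints. Thanos's real implementation differs in detail.
func pickEndpoint(endpoints []string, tenant, seriesLabels string) string {
	h := fnv.New64a()
	h.Write([]byte(tenant + seriesLabels))
	return endpoints[h.Sum64()%uint64(len(endpoints))]
}

func main() {
	endpoints := []string{"127.0.0.1:10907", "127.0.0.1:11907", "127.0.0.1:12907"}
	fmt.Println(pickEndpoint(endpoints, "default-tenant", `{__name__="up",job="node"}`))
}
```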
@@ -137,30 +133,22 @@ It is possible to only match certain `tenant`s inside of a hashring file. For example:

```json
[
-    {
-        "tenants": ["foobar"],
-        "endpoints": [
-            "127.0.0.1:1234",
-            "127.0.0.1:12345",
-            "127.0.0.1:1235"
-        ]
-    }
+  {
+    "tenants": ["foobar"],
+    "endpoints": ["127.0.0.1:1234", "127.0.0.1:12345", "127.0.0.1:1235"]
+  }
]
```

The specified endpoints will be used if the tenant is set to `foobar`. It is possible to use glob matching by setting the parameter `tenant_matcher_type` to `glob`: in this case, the strings inside the array are taken as glob patterns and matched against the `tenant` inside of a remote-write request. For instance:

```json
[
-    {
-        "tenants": ["foo*"],
-        "tenant_matcher_type": "glob",
-        "endpoints": [
-            "127.0.0.1:1234",
-            "127.0.0.1:12345",
-            "127.0.0.1:1235"
-        ]
-    }
+  {
+    "tenants": ["foo*"],
+    "tenant_matcher_type": "glob",
+    "endpoints": ["127.0.0.1:1234", "127.0.0.1:12345", "127.0.0.1:1235"]
+  }
]
```

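To make the glob semantics concrete, here is a small Go illustration using the standard library's `path.Match`. Thanos's matcher may differ in edge cases, but the idea is the same: the `tenant` of an incoming request is matched against each configured pattern:

```go
package main

import (
	"fmt"
	"path"
)

// matchesTenant reports whether a tenant name matches any of the configured
// glob patterns, e.g. "foo*" matches "foobar" but not "barfoo".
func matchesTenant(patterns []string, tenant string) bool {
	for _, p := range patterns {
		if ok, err := path.Match(p, tenant); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"foo*"}
	fmt.Println(matchesTenant(patterns, "foobar")) // true
	fmt.Println(matchesTenant(patterns, "barfoo")) // false
}
```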
@@ -172,34 +160,34 @@ In order to ensure even spread for replication over nodes in different availability zones

```json
[
-    {
-        "endpoints": [
-            {
-                "address": "127.0.0.1:10907",
-                "az": "A"
-            },
-            {
-                "address": "127.0.0.1:11907",
-                "az": "B"
-            },
-            {
-                "address": "127.0.0.1:12907",
-                "az": "C"
-            },
-            {
-                "address": "127.0.0.1:13907",
-                "az": "A"
-            },
-            {
-                "address": "127.0.0.1:14907",
-                "az": "B"
-            },
-            {
-                "address": "127.0.0.1:15907",
-                "az": "C"
-            }
-        ]
-    }
+  {
+    "endpoints": [
+      {
+        "address": "127.0.0.1:10907",
+        "az": "A"
+      },
+      {
+        "address": "127.0.0.1:11907",
+        "az": "B"
+      },
+      {
+        "address": "127.0.0.1:12907",
+        "az": "C"
+      },
+      {
+        "address": "127.0.0.1:13907",
+        "az": "A"
+      },
+      {
+        "address": "127.0.0.1:14907",
+        "az": "B"
+      },
+      {
+        "address": "127.0.0.1:15907",
+        "az": "C"
+      }
+    ]
+  }
]
```

@@ -282,7 +270,7 @@ These limits are applied per request and can be configured within the `request` key
- `series_limit`: the maximum number of series in a single remote write request.
- `samples_limit`: the maximum number of samples in a single remote write request (summed from all series).

-Any request above these limits will cause a 413 HTTP response (*Entity Too Large*) and should not be retried without modifications.
+Any request above these limits will cause a 413 HTTP response (_Entity Too Large_) and should not be retried without modifications.

Currently a 413 HTTP response will cause data loss at the client, as no client (Prometheus included) breaks 413 responses down into smaller requests. The recommendation is to monitor these errors in the client and contact the owners of your Receive instance for more information on its configured limits.

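For clarity on how these two limits apply to a single request, the following Go sketch counts series and samples the way the limits are described above (samples summed across all series). It uses simplified local types rather than the real remote write protobuf definitions, and treating `0` as "disabled" is an assumption made for the sketch:

```go
package main

import "fmt"

// sample and series are simplified stand-ins for the types carried in a
// remote write request body.
type sample struct {
	TimestampMs int64
	Value       float64
}

type series struct {
	Labels  map[string]string
	Samples []sample
}

// withinLimits checks one request against the per-request limits described
// above: seriesLimit caps the number of series, samplesLimit caps the total
// number of samples summed over all series. In this sketch, 0 means "disabled".
func withinLimits(request []series, seriesLimit, samplesLimit int) bool {
	if seriesLimit > 0 && len(request) > seriesLimit {
		return false
	}
	total := 0
	for _, s := range request {
		total += len(s.Samples)
	}
	return samplesLimit <= 0 || total <= samplesLimit
}

func main() {
	req := []series{
		{Labels: map[string]string{"__name__": "up"}, Samples: []sample{{1, 1}, {2, 1}}},
		{Labels: map[string]string{"__name__": "http_requests_total"}, Samples: []sample{{1, 42}}},
	}
	fmt.Println(withinLimits(req, 10, 100)) // true: 2 series, 3 samples
	fmt.Println(withinLimits(req, 1, 100))  // false: series_limit exceeded
}
```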
@@ -296,6 +284,7 @@ By default, all these limits are disabled.
### Remote write request gates

The available request gates in Thanos Receive can be configured within the `global` key:
+
- `max_concurrency`: the maximum number of remote write requests that will be worked on concurrently. Any request that would exceed this limit is still accepted, but waits until the gate allows it to be processed, as illustrated in the sketch below.

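As a rough illustration of the behaviour described above (the request is accepted, then waits for a free slot), here is a conceptual Go sketch using a buffered channel as a semaphore. It is not Thanos's actual gate implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// gate limits how many requests are worked on concurrently. Requests beyond
// the limit are not rejected; they simply block until a slot frees up.
type gate chan struct{}

func (g gate) enter() { g <- struct{}{} }
func (g gate) exit()  { <-g }

func main() {
	const maxConcurrency = 2
	g := make(gate, maxConcurrency)

	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			g.enter() // blocks once maxConcurrency requests are in flight
			defer g.exit()
			fmt.Printf("processing request %d\n", id)
			time.Sleep(100 * time.Millisecond) // simulate work
		}(i)
	}
	wg.Wait()
}
```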
## Active Series Limiting (experimental)
@@ -307,14 +296,17 @@ Every Receive Router/RouterIngestor node queries meta-monitoring for active series
To use the feature, one should specify the following limiting config options:

Under `global`:
+
- `meta_monitoring_url`: Specifies a Prometheus Query API compatible meta-monitoring endpoint.
- `meta_monitoring_limit_query`: Optional PromQL query to execute against meta-monitoring. If not specified, it defaults to `sum(prometheus_tsdb_head_series) by (tenant)`.
- `meta_monitoring_http_client`: Optional YAML field specifying HTTP client config for meta-monitoring.

Under `default` and per `tenant`:
+
- `head_series_limit`: Specifies the total number of active (head) series for any tenant, across all replicas (including data replication), allowed by Thanos Receive. Set to 0 for unlimited.

NOTE:
+
- It is possible that Receive ingests more active series than the specified limit, as it relies on meta-monitoring, which may not always have up-to-date data on a tenant's current number of active series.
- Thanos Receive performs best-effort limiting. If meta-monitoring is down or unreachable, Thanos Receive will not impose limits and will only log errors about meta-monitoring being unreachable, similar to when one receiver cannot be scraped.
- Support for different limit configuration for different tenants is planned for the future.
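To make the meta-monitoring flow more tangible, here is a hedged Go sketch that runs a PromQL query such as the default `sum(prometheus_tsdb_head_series) by (tenant)` against a Prometheus Query API compatible endpoint and compares the per-tenant result to a limit. The URL, tenant label, and limit value are illustrative, and this is not Thanos's implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
)

// vectorResult models the relevant part of a Prometheus /api/v1/query
// instant-vector response.
type vectorResult struct {
	Data struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
			Value  [2]interface{}    `json:"value"` // [timestamp, "value"]
		} `json:"result"`
	} `json:"data"`
}

// headSeriesPerTenant queries a Prometheus Query API compatible endpoint and
// returns the active (head) series count per tenant.
func headSeriesPerTenant(baseURL, query string) (map[string]float64, error) {
	resp, err := http.Get(baseURL + "/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var vr vectorResult
	if err := json.NewDecoder(resp.Body).Decode(&vr); err != nil {
		return nil, err
	}
	out := map[string]float64{}
	for _, r := range vr.Data.Result {
		v, _ := strconv.ParseFloat(r.Value[1].(string), 64)
		out[r.Metric["tenant"]] = v
	}
	return out, nil
}

func main() {
	// Illustrative values only.
	const metaMonitoringURL = "http://localhost:9090"
	const limitQuery = `sum(prometheus_tsdb_head_series) by (tenant)`
	const headSeriesLimit = 1_000_000

	series, err := headSeriesPerTenant(metaMonitoringURL, limitQuery)
	if err != nil {
		// Best-effort: if meta-monitoring is unreachable, log and do not limit.
		fmt.Println("meta-monitoring unreachable, not limiting:", err)
		return
	}
	for tenant, n := range series {
		if n > headSeriesLimit {
			fmt.Printf("tenant %q above head series limit (%v > %v)\n", tenant, n, headSeriesLimit)
		}
	}
}
```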
@@ -331,7 +323,7 @@ Please see the metric `thanos_receive_forward_delay_seconds` to see if you need

The following formula is used for calculating quorum:

-```go mdox-exec="sed -n '1012,1022p' pkg/receive/handler.go"
+```go mdox-exec="sed -n '1015,1025p' pkg/receive/handler.go"
// writeQuorum returns minimum number of replicas that has to confirm write success before claiming replication success.
func (h *Handler) writeQuorum() int {
// NOTE(GiedriusS): this is here because otherwise RF=2 doesn't make sense as all writes
