diff --git a/site/content/how_to/any_remote_storage.md b/site/content/how_to/any_remote_storage.md new file mode 100644 index 0000000000..74305d2164 --- /dev/null +++ b/site/content/how_to/any_remote_storage.md @@ -0,0 +1,222 @@ +--- +title: "M3 Aggregation for any Prometheus remote write storage" +--- + +### Prometheus Remote Write + +As mentioned in our integrations guide, M3 Coordinator and M3 Aggregator can be configured to write to any +[Prometheus Remote Write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) receiver. + +### Sidecar M3 Coordinator setup + +In this setup we show how to run M3 Coordinator with an in-process M3 Aggregator as a sidecar that receives metrics and sends them to a Prometheus instance via the remote write protocol. + +{{% notice tip %}} +Using any other backend, such as Thanos or Cortex, in place of Prometheus is just a matter of endpoint configuration. +{{% /notice %}} + +We are going to set up: +- 1 Prometheus instance with `remote-write-receiver` enabled. + - It will be used as the storage and query engine. +- 1 Prometheus instance scraping M3 Coordinator and Prometheus TSDB. +- 1 M3 Coordinator with an in-process M3 Aggregator that aggregates and downsamples metrics. +- Finally, we are going to define some aggregation and downsampling rules as an example. + +For simplicity, let's put all config files in one directory and export an environment variable: +```shell +export CONFIG_DIR="" +``` + +First, let's run a Prometheus instance with `remote-write-receiver` enabled: + +`prometheus.yml` +{{< codeinclude file="docs/includes/integrations/prometheus/prometheus.yml" language="yaml" >}} + +Now run: + +```shell +docker pull prom/prometheus:latest +docker run -p 9090:9090 --name prometheus \ + -v "$CONFIG_DIR/prometheus.yml:/etc/prometheus/prometheus.yml" prom/prometheus:latest \ + --config.file=/etc/prometheus/prometheus.yml \ + --storage.tsdb.path=/prometheus \ + --web.console.libraries=/usr/share/prometheus/console_libraries \ + --web.console.templates=/usr/share/prometheus/consoles \ + --enable-feature=remote-write-receiver +``` + +Next we configure and run M3 Coordinator: + +`m3_coord_simple.yml` +{{< codeinclude file="docs/includes/integrations/prometheus/m3_coord_simple.yml" language="yaml" >}} + +Run: + +```shell +docker pull quay.io/m3db/m3coordinator:latest +docker run -p 7201:7201 -p 3030:3030 --name m3coordinator \ + -v "$CONFIG_DIR/m3_coord_simple.yml:/etc/m3coordinator/m3coordinator.yml" \ + quay.io/m3db/m3coordinator:latest +``` + +Next, we configure and run another Prometheus instance that scrapes M3 Coordinator and the Prometheus TSDB: + +`prometheus-scraper.yml` +{{< codeinclude file="docs/includes/integrations/prometheus/prometheus-scraper.yml" language="yaml" >}} + +Now run: + +```shell +docker run --name prometheus-scraper \ + -v "$CONFIG_DIR/prometheus-scraper.yml:/etc/prometheus/prometheus.yml" prom/prometheus:latest +``` + +To explore the metrics we can use Grafana: + +`datasource.yml` +{{< codeinclude file="docs/includes/integrations/prometheus/datasource.yml" language="yaml" >}} + +Now run: + +```shell +docker pull grafana/grafana:latest +docker run -p 3000:3000 --name grafana \ + -v "$CONFIG_DIR/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml" grafana/grafana:latest +``` + +You should be able to access Grafana at `http://localhost:3000` and explore some metrics. + +### Using rollup and mapping rules + +So far our setup is just forwarding metrics to a single unaggregated Prometheus instance as a passthrough, which on its own is not very useful.
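+ +Before adding any rules, it is worth checking that metrics are flowing end to end. One quick way, sketched here assuming you kept the port mappings above, is to query the storage Prometheus directly for one of M3 Coordinator's own metrics: + +```shell +# Ask the remote-write-receiver Prometheus for a metric emitted by M3 Coordinator itself. +# An empty result means the scrape -> M3 Coordinator -> remote write path is not delivering data yet. +curl -s 'http://localhost:9090/api/v1/query?query=coordinator_ingest_latency_count' +```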
+ +Let's make use of the in-process M3 Aggregator in our M3 Coordinator and add some rollup and mapping rules. + +Let's change the M3 Coordinator's configuration file by adding a new endpoint configuration for aggregated metrics: + +```yaml +prometheusRemoteBackend: + endpoints: + ... + - name: aggregated + address: "http://host.docker.internal:9090/api/v1/write" + storagePolicy: + retention: 1440h + resolution: 5m + downsample: + all: false +``` + +**Note:** For the simplicity of this example we are adding the endpoint to the same Prometheus instance. In production deployments this will likely be a different Prometheus instance with different characteristics for aggregated metrics. + +Next we will add some mapping rules. We will write the metric `coordinator_ingest_latency_count` at a different resolution (`5m`) and drop it from the unaggregated endpoint. + +```yaml +downsample: + rules: + mappingRules: + - name: "drop metric from unaggregated endpoint" + filter: "__name__:coordinator_ingest_latency_count" + drop: True + - name: "metric @ different resolution" + filter: "__name__:coordinator_ingest_latency_count" + aggregations: ["Last"] + storagePolicies: + - retention: 1440h + resolution: 5m +``` + +Final M3 Coordinator configuration: + +`m3_coord_rules.yml` +{{< codeinclude file="docs/includes/integrations/prometheus/m3_coord_rules.yml" language="yaml" >}} + +Now remove the old container and start M3 Coordinator again to pick up the new configuration: +```shell +docker rm -f m3coordinator +docker run -p 7201:7201 -p 3030:3030 --name m3coordinator \ + -v "$CONFIG_DIR/m3_coord_rules.yml:/etc/m3coordinator/m3coordinator.yml" \ + quay.io/m3db/m3coordinator:latest +``` + +Navigate to the Grafana explore tab at `http://localhost:3000/explore` and enter `coordinator_ingest_latency_count`. +After a while you should see that the metric is emitted at `5m` intervals. + +### Running in production + +The following sections describe deployment scenarios that can be used to run M3 Coordinator and a remote M3 Aggregator in production. + +#### M3 Coordinator sidecar + +In this setup each metrics scraper has an M3 Coordinator as a sidecar with an in-process M3 Aggregator. + +Each scraper instance operates independently and is unaware of the other M3 Coordinators. + +This is a straightforward setup; however, it has limitations: +- Coupling M3 Coordinator to a scraper means we can only run as many M3 Coordinators as we have scrapers +- M3 Coordinator is likely to require more resources than an individual scraper + +#### Fleet of M3 Coordinators and M3 Aggregators + +With this setup we are able to scale M3 Coordinators and M3 Aggregators independently. + +This requires running four components separately: +- An M3 Coordinator instance in admin mode to administer the cluster +- A fleet of stateless M3 Coordinators +- A fleet of stateful, in-memory M3 Aggregators +- An [etcd](https://etcd.io/) cluster + +**Set up an external etcd cluster** + +Follow the official [etcd docs](https://github.com/etcd-io/etcd/tree/master/Documentation). + +**Run an M3 Coordinator in Admin mode** + +Refer to [Running M3 Coordinator in Admin mode](/docs/how_to/m3coordinator_admin). + +**Configure Remote Write Endpoints** + +Add configuration to the M3 Coordinators that will accept metrics from scrapers.
+ +Configuration should be similar to: +```yaml +backend: prom-remote + +prometheusRemoteBackend: + endpoints: + # This points to a Prometheus started with `--storage.tsdb.retention.time=720h` + - name: unaggregated + address: "http://prometheus-raw:9090/api/v1/write" + # This points to a Prometheus started with `--storage.tsdb.retention.time=1440h` + - name: aggregated + address: "http://prometheus-agg:9090/api/v1/write" + storagePolicy: + # Should match the retention of the Prometheus instance. The Coordinator uses it to route metrics correctly. + retention: 1440h + # Resolution instructs M3 Aggregator to downsample incoming metrics to the given resolution. + # By tuning the resolution we can control how much storage Prometheus needs, at the cost of query granularity. + resolution: 5m + # When omitted, defaults to: + #downsample: + # all: true + # Another example: a Prometheus instance configured for a very long retention but with 1h resolution. + # Because downsample.all is false, metrics are only downsampled to this endpoint according to mapping and rollup rules. + - name: historical + address: "http://prometheus-hist:9090/api/v1/write" + storagePolicy: + retention: 8760h + resolution: 1h + downsample: + all: false +``` + +**Configure Remote M3 Aggregator** + +Refer to [Aggregate Metrics with M3 Aggregator](/docs/how_to/m3aggregator) for details on how to set up M3 Coordinator with a remote M3 Aggregator. + +For administrative operations when configuring the topology, use the M3 Coordinator admin address from the previous step. + +**Configure scrapers to send metrics to M3** + +At this point you should have a running fleet of M3 Coordinators and M3 Aggregators. + +You should configure your load balancer to route to the M3 Coordinators in a round-robin fashion. diff --git a/site/content/how_to/m3coordinator_admin.md b/site/content/how_to/m3coordinator_admin.md new file mode 100644 index 0000000000..a287fa2bcd --- /dev/null +++ b/site/content/how_to/m3coordinator_admin.md @@ -0,0 +1,44 @@ +--- +title: "Running M3 Coordinator in Admin mode" +--- + +Sometimes it is useful to run M3 Coordinator in "admin mode". Usually it is enough to have a single dedicated instance that is used to perform various administration tasks: +- M3DB placement management +- M3 Aggregator placement management +- Namespace operations + +To run M3 Coordinator in admin mode, simply start it with the `noop-etcd` backend: + +```yaml +backend: noop-etcd +``` + +For production clusters we usually run an external etcd cluster. Configure it under the `clusterManagement` configuration key.
+ +Final configuration might look as follows: + +```yaml +listenAddress: 0.0.0.0:7201 + +metrics: + scope: + prefix: "coordinator-admin" + prometheus: + handlerPath: /metrics + listenAddress: 0.0.0.0:3030 + sanitization: prometheus + samplingRate: 1.0 + +backend: noop-etcd + +clusterManagement: + etcd: + env: default_env + zone: embedded + service: m3db + cacheDir: /var/lib/m3kv + etcdClusters: + - zone: embedded + endpoints: + - etcd01:2379 +``` \ No newline at end of file diff --git a/site/content/includes/integrations/prometheus/datasource.yml b/site/content/includes/integrations/prometheus/datasource.yml new file mode 100644 index 0000000000..0de5e6fe20 --- /dev/null +++ b/site/content/includes/integrations/prometheus/datasource.yml @@ -0,0 +1,5 @@ +datasources: + - name: Prometheus + type: prometheus + access: proxy + url: http://host.docker.internal:9090 diff --git a/site/content/includes/integrations/prometheus/m3_coord_rules.yml b/site/content/includes/integrations/prometheus/m3_coord_rules.yml new file mode 100644 index 0000000000..be8cad6784 --- /dev/null +++ b/site/content/includes/integrations/prometheus/m3_coord_rules.yml @@ -0,0 +1,39 @@ +listenAddress: 0.0.0.0:7201 + +metrics: + scope: + prefix: "coordinator" + prometheus: + handlerPath: /metrics + listenAddress: 0.0.0.0:3030 + sanitization: prometheus + samplingRate: 1.0 + +backend: prom-remote + +prometheusRemoteBackend: + endpoints: + - name: unaggregated + address: "http://host.docker.internal:9090/api/v1/write" + - name: aggregated + address: "http://host.docker.internal:9090/api/v1/write" + storagePolicy: + retention: 1440h + resolution: 5m + downsample: + all: false + +downsample: + matcher: + requireNamespaceWatchOnInit: false + rules: + mappingRules: + - name: "drop metric from unaggregated endpoint" + filter: "__name__:coordinator_ingest_latency_count" + drop: True + - name: "metric @ different resolution" + filter: "__name__:coordinator_ingest_latency_count" + aggregations: ["Last"] + storagePolicies: + - retention: 1440h + resolution: 5m diff --git a/site/content/includes/integrations/prometheus/m3_coord_simple.yml b/site/content/includes/integrations/prometheus/m3_coord_simple.yml new file mode 100644 index 0000000000..2acefd59d5 --- /dev/null +++ b/site/content/includes/integrations/prometheus/m3_coord_simple.yml @@ -0,0 +1,17 @@ +listenAddress: 0.0.0.0:7201 + +metrics: + scope: + prefix: "coordinator" + prometheus: + handlerPath: /metrics + listenAddress: 0.0.0.0:3030 + sanitization: prometheus + samplingRate: 1.0 + +backend: prom-remote + +prometheusRemoteBackend: + endpoints: + - name: unaggregated + address: "http://host.docker.internal:9090/api/v1/write" diff --git a/site/content/includes/integrations/prometheus/prometheus-scraper.yml b/site/content/includes/integrations/prometheus/prometheus-scraper.yml new file mode 100644 index 0000000000..6d4a79f4c6 --- /dev/null +++ b/site/content/includes/integrations/prometheus/prometheus-scraper.yml @@ -0,0 +1,17 @@ +global: + scrape_interval: 15s + external_labels: + monitor: 'self-scraping-monitor' + +scrape_configs: + - job_name: 'prometheus' + static_configs: + - targets: ['host.docker.internal:9090'] + - job_name: 'm3 coordinator' + static_configs: + - targets: ['host.docker.internal:3030'] + +# 7201 is a port exposed by M3 Coordinator. 
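+# Port 3030 (scraped above) serves the Coordinator's own Prometheus metrics, while the +# remote_write section below forwards every scraped series to the Coordinator on port 7201.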
+remote_write: + - url: http://host.docker.internal:7201/api/v1/prom/remote/write + remote_timeout: 30s diff --git a/site/content/includes/integrations/prometheus/prometheus.yml b/site/content/includes/integrations/prometheus/prometheus.yml new file mode 100644 index 0000000000..0f40e63c45 --- /dev/null +++ b/site/content/includes/integrations/prometheus/prometheus.yml @@ -0,0 +1,3 @@ +global: + external_labels: + monitor: 'prom-storage' diff --git a/site/content/includes/m3query/annotated_config.yaml b/site/content/includes/m3query/annotated_config.yaml index 99e4e9b2d1..d5acd40a31 100644 --- a/site/content/includes/m3query/annotated_config.yaml +++ b/site/content/includes/m3query/annotated_config.yaml @@ -312,10 +312,31 @@ rpc: # Configures methods to contact remote coordinators to distribute M3 clusters across data centers -# Backend store for query service, valid options: [grpc, m3db, noop-etcd]. +# Backend store for query service, valid options: [grpc, m3db, noop-etcd, prom-remote]. # Default = m3db backend: +# Configures the Prometheus Remote backend when the prom-remote backend is used. +prometheusRemoteBackend: + # Array of Prometheus remote write compatible endpoints. + # If a storage policy is specified for an endpoint, only aggregated data matching that policy will be sent to it. + # Endpoints which do not specify storagePolicy will receive unaggregated writes. + endpoints: + # Unique endpoint name + - name: + # HTTP URL of an endpoint that accepts Prometheus remote writes. + address: + # Optional storage policy for this endpoint. + storagePolicy: + # How long to store metrics data. This is only used to filter endpoints. + retention: + # Metrics sampling resolution. This is only used to filter endpoints. + resolution: + # Configuration for downsampling options on an aggregated endpoint + # If not specified, will default to all=true + downsample: + all: + # The worker pool policy for read requests readWorkerPoolPolicy: # Worker pool automatically grows to capacity diff --git a/site/content/integrations/grafana.md b/site/content/integrations/grafana.md index 88d32ce000..ccfaeabbfa 100644 --- a/site/content/integrations/grafana.md +++ b/site/content/integrations/grafana.md @@ -1,6 +1,6 @@ --- title: "Grafana" -weight: 3 +weight: 4 --- diff --git a/site/content/integrations/graphite.md b/site/content/integrations/graphite.md index 5f9fa9e385..47aa6ac5a0 100644 --- a/site/content/integrations/graphite.md +++ b/site/content/integrations/graphite.md @@ -1,6 +1,6 @@ --- title: "Graphite" -weight: 2 +weight: 3 --- This document is a getting started guide to integrating the M3 stack with Graphite. diff --git a/site/content/integrations/influx.md b/site/content/integrations/influx.md index 7e03f071b6..d2589b0439 100644 --- a/site/content/integrations/influx.md +++ b/site/content/integrations/influx.md @@ -1,6 +1,6 @@ --- title: "InfluxDB" -weight: 4 +weight: 5 --- diff --git a/site/content/integrations/prometheus.md b/site/content/integrations/prometheus.md index 6a64cd4055..6d1a6a16bd 100644 --- a/site/content/integrations/prometheus.md +++ b/site/content/integrations/prometheus.md @@ -1,9 +1,9 @@ --- -title: "Prometheus" +title: "Prometheus: Storage, aggregation and query with M3" weight: 1 --- -This document is a getting started guide to integrating M3DB with Prometheus. +This document is a getting started guide to using M3DB as remote storage for Prometheus.
## M3 Coordinator configuration @@ -11,7 +11,7 @@ To write to a remote M3DB cluster the simplest configuration is to run `m3coordi Start by downloading the [config template](https://github.com/m3db/m3/blob/master/src/query/config/m3coordinator-cluster-template.yml). Update the `namespaces` and the `client` section for a new cluster to match your cluster's configuration. -You'll need to specify the static IPs or hostnames of your M3DB seed nodes, and the name and retention values of the namespace you set up. You can leave the namespace storage metrics type as `unaggregated` since it's required by default to have a cluster that receives all Prometheus metrics unaggregated. In the future you might also want to aggregate and downsample metrics for longer retention, and you can come back and update the config once you've setup those clusters. You can read more about our aggregation functionality [here](/docs/how_to/m3query). +You'll need to specify the static IPs or hostnames of your M3DB seed nodes, and the name and retention values of the namespace you set up. You can leave the namespace storage metrics type as `unaggregated` since it's required by default to have a cluster that receives all Prometheus metrics unaggregated. In the future you might also want to aggregate and downsample metrics for longer retention, and you can come back and update the config once you've set up those clusters. You can read more about our aggregation functionality [here](/docs/how_to/m3query). It should look something like: diff --git a/site/content/integrations/prometheus_aggregation.md b/site/content/integrations/prometheus_aggregation.md new file mode 100644 index 0000000000..7e10e1a980 --- /dev/null +++ b/site/content/integrations/prometheus_aggregation.md @@ -0,0 +1,53 @@ +--- +title: "Prometheus: Aggregation for Prometheus, Thanos or other remote write storage with M3" +weight: 2 +--- + +This document is a getting started guide to using the M3 Coordinator, or both the +M3 Coordinator and M3 Aggregator roles, to aggregate metrics for a compatible +Prometheus remote write storage backend. + +What's required is any Prometheus storage backend that supports the [Prometheus +remote write protocol](https://docs.google.com/document/d/1LPhVRSFkGNSuU1fBd81ulhsCPR4hkSZyyBj1SZ8fWOM/). + +## Testing with docker compose + +To test a full end-to-end example, you can clone the M3 repository and follow the guide for the [M3 and Prometheus remote write docker compose development stack](https://github.com/m3db/m3/blob/master/scripts/development/m3_prom_remote_stack/). + +## Basic guide with single M3 Coordinator sidecar aggregation + +Start by downloading the [M3 Coordinator config template](https://github.com/m3db/m3/blob/91db5e12cd34a95658cc00fa44ed9ae14d512710/src/query/config/m3coordinator-prom-remote-template.yml). + +Update the endpoints to match your Prometheus Remote Write compatible storage setup. You should end up with a config similar to: + +```yaml +backend: prom-remote + +prometheusRemoteBackend: + endpoints: + # This points to a Prometheus started with `--storage.tsdb.retention.time=720h` + - name: unaggregated + address: "http://prometheus-raw:9090/api/v1/write" + # This points to a Prometheus started with `--storage.tsdb.retention.time=1440h` + - name: aggregated + address: "http://prometheus-agg:9090/api/v1/write" + storagePolicy: + # Should match the retention of the Prometheus instance. The Coordinator uses it to route metrics correctly.
+ retention: 1440h + # Resolution instructs M3 Aggregator to downsample incoming metrics to the given resolution. + # By tuning the resolution we can control how much storage Prometheus needs, at the cost of query granularity. + resolution: 5m + # Another example: a Prometheus instance configured for a very long retention but with 1h resolution. + # Because downsample.all is false, metrics are only downsampled to this endpoint according to mapping and rollup rules. + - name: historical + address: "http://prometheus-hist:9090/api/v1/write" + storagePolicy: + retention: 8760h + resolution: 1h + downsample: + all: false +``` + +## More advanced deployments + +Refer to the [M3 Aggregation for any Prometheus remote write storage](/docs/how_to/any_remote_storage) guide for details on more advanced deployment options. diff --git a/src/query/config/m3coordinator-prom-remote-template.yml b/src/query/config/m3coordinator-prom-remote-template.yml new file mode 100644 index 0000000000..24fc740ad4 --- /dev/null +++ b/src/query/config/m3coordinator-prom-remote-template.yml @@ -0,0 +1,16 @@ +backend: prom-remote + +prometheusRemoteBackend: + # Set up as many endpoints as you need. + # You can start with a single endpoint without any storage policy. It will receive all metrics. + # If you set up local or remote downsampling, add endpoints with appropriate storage policies. + endpoints: + - name: raw + address: "http://prometheus-raw:9090/api/v1/write" + - name: aggregated + address: "http://prometheus-agg:9090/api/v1/write" + storagePolicy: + retention: 1440h + resolution: 1m + downsample: + all: true
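+ # Illustrative, commented-out example of an endpoint that only receives data produced by + # mapping and rollup rules (downsample all: false); adjust the name, address and policy to your own setup. + #- name: historical + # address: "http://prometheus-hist:9090/api/v1/write" + # storagePolicy: + # retention: 8760h + # resolution: 1h + # downsample: + # all: false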