adding README
adding sample app

adding examples readme

fixing lint errors

linting examples

updating readme tls_config example

excluding examples

adding examples to exclude in all linters

adding isort.cfg skip

changing isort to path

ignoring yml only

adding it to excluded directories in pylintrc

only adding exclude to directory

removing readme.rst and adding explicit file names to ignore

adding the rest of the files

adding readme.rst back

adding to ignore glob instead

reverting back to ignore list

converting README.md to README.rst
Azfaar Qureshi committed Dec 9, 2020
1 parent 6514f37 commit 5358eb6
Showing 10 changed files with 583 additions and 17 deletions.
2 changes: 1 addition & 1 deletion .flake8
@@ -4,7 +4,7 @@ ignore =
F401 # unused import, defer to pylint
W503 # allow line breaks before binary ops
W504 # allow line breaks after binary ops
E203 # allow whitespace before ':' (https://github.com/psf/black#slices)
exclude =
.bzr
.git
2 changes: 1 addition & 1 deletion .pylintrc
@@ -7,7 +7,7 @@ extension-pkg-whitelist=

# Add list of files or directories to be excluded. They should be base names, not
# paths.
-ignore=CVS,gen
+ignore=CVS,gen,Dockerfile,docker-compose.yml,README.md,requirements.txt,cortex-config.yml

# Add files or directories matching the regex patterns to be excluded. The
# regex matches against base names, not paths.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -25,6 +25,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  ([#237](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/237))
- Add Prometheus Remote Write Exporter integration tests in opentelemetry-docker-tests
  ([#216](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/216))
- Add README and example app for Prometheus Remote Write Exporter
  ([#227](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/227))

### Changed
- `opentelemetry-instrumentation-asgi`, `opentelemetry-instrumentation-wsgi` Return `None` for `CarrierGetter` if key not found
251 changes: 236 additions & 15 deletions exporter/opentelemetry-exporter-prometheus-remote-write/README.rst
@@ -1,27 +1,248 @@
OpenTelemetry Python SDK Prometheus Remote Write Exporter
=========================================================

This package contains an exporter to send `OTLP`_ metrics from the
Python SDK directly to a Prometheus Remote Write integrated backend
(such as Cortex or Thanos) without having to run an instance of the
Prometheus server. The image below shows the two Prometheus exporters in
the OpenTelemetry Python SDK.

Pipeline 1 illustrates the setup required for a Prometheus "pull"
exporter.

Pipeline 2 illustrates the setup required for the Prometheus Remote
Write exporter.

|Prometheus SDK pipelines|

The Prometheus Remote Write Exporter is a "push" based exporter and only
works with the OpenTelemetry `push controller`_. The controller
periodically collects data and passes it to the exporter. This exporter
then converts the data into `timeseries`_ and sends it to the Remote
Write integrated backend through HTTP POST requests. The metrics
collection datapath is shown below:

|controller_datapath_final|

See the ``examples`` folder for a demo usage of this exporter.

Table of Contents
=================

- `Summary`_
- `Table of Contents`_

- `Installation`_
- `Quickstart`_
- `Configuring the Exporter`_
- `Securing the Exporter`_

- `Authentication`_
- `TLS`_

- `Supported Aggregators`_
- `Error Handling`_
- `Retry Logic`_
- `Contributing`_

- `Design Doc`_

Installation
------------
Prerequisite
~~~~~~~~~~~~
1. Install the snappy C library

   - **DEB**: ``sudo apt-get install libsnappy-dev``
   - **RPM**: ``sudo yum install libsnappy-devel``
   - **OSX/Brew**: ``brew install snappy``

2. Install python-snappy

   ``pip install python-snappy``

Exporter
~~~~~~~~

- To install from the latest PyPI release, run
  ``pip install opentelemetry-exporter-prometheus-remote-write``
- To install from the local repository, run
  ``pip install -e exporter/opentelemetry-exporter-prometheus-remote-write/``
  in the project root

Quickstart
----------

.. code:: python

   from opentelemetry import metrics
   from opentelemetry.sdk.metrics import MeterProvider
   from opentelemetry.exporter.prometheus_remote_write import (
       PrometheusRemoteWriteMetricsExporter,
   )

   # Sets the global MeterProvider instance
   metrics.set_meter_provider(MeterProvider())

   # The Meter is responsible for creating and recording metrics.
   # Each meter has a unique name, which we set as the module's name here.
   meter = metrics.get_meter(__name__)

   exporter = PrometheusRemoteWriteMetricsExporter(endpoint="endpoint_here")  # add other params as needed

   # Collect and export metrics every 5 seconds
   metrics.get_meter_provider().start_pipeline(meter, exporter, 5)
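
Once the pipeline is started, metrics are recorded through the regular
SDK metrics API. A minimal sketch, assuming the 0.x metrics API used
above (the instrument name and labels are illustrative):

.. code:: python

   # Counters are cumulative: add() accumulates, and the push controller
   # exports the running total on each collection interval
   requests_counter = meter.create_counter(
       name="requests",
       description="number of requests served",
       unit="1",
       value_type=int,
   )
   requests_counter.add(1, {"environment": "staging"})
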
Configuring the Exporter
------------------------

The exporter can be configured through parameters passed to the
constructor. Here are all the options:

- ``endpoint``: url where data will be sent **(Required)**
- ``basic_auth``: username and password for authentication
**(Optional)**
- ``headers``: additional headers for remote write request as
determined by the remote write backend's API **(Optional)**
- ``timeout``: timeout for requests to the remote write endpoint in
seconds **(Optional)**
- ``proxies``: dict mapping request proxy protocols to proxy urls
**(Optional)**
- ``tls_config``: configuration for remote write TLS settings
**(Optional)**

Example with all the configuration options:

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="http://localhost:9009/api/prom/push",
       timeout=30,
       basic_auth={
           "username": "user",
           "password": "pass123",
       },
       headers={
           "X-Scope-Org-ID": "5",
           "Authorization": "Bearer mytoken123",
       },
       proxies={
           "http": "http://10.10.1.10:3000",
           "https": "http://10.10.1.10:1080",
       },
       tls_config={
           "cert_file": "path/to/file",
           "key_file": "path/to/file",
           "ca_file": "path_to_file",
           "insecure_skip_verify": True,  # for development purposes only
       },
   )

Securing the Exporter
---------------------

Authentication
~~~~~~~~~~~~~~

The exporter provides two forms of authentication which are shown below.
Users can add their own custom authentication by setting the appropriate
values in the ``headers`` dictionary.

1. Basic Authentication: sets an HTTP Authorization header containing a
   base64-encoded username/password pair. See `RFC 7617`_ for more
   information.

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       basic_auth={"username": "base64user", "password": "base64pass"}
   )

2. Bearer Token Authentication: this custom configuration can be achieved
   by passing custom ``headers`` to the constructor. See `RFC 6750`_ for
   more information.

.. code:: python

   headers = {
       "Authorization": "Bearer mytoken123"
   }
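
These headers can then be supplied through the constructor's ``headers``
parameter described above (the endpoint value is illustrative):

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="http://localhost:9009/api/prom/push",
       headers=headers,
   )
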
TLS
~~~

Users can add TLS to the exporter's HTTP Client by providing certificate
and key files in the ``tls_config`` parameter.
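
A minimal sketch, reusing the ``tls_config`` keys from the configuration
example above (the file paths are placeholders):

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="https://localhost:9009/api/prom/push",
       tls_config={
           "ca_file": "path/to/ca_file",
           "cert_file": "path/to/cert_file",
           "key_file": "path/to/key_file",
       },
   )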

Supported Aggregators
---------------------

- Sum
- MinMaxSumCount
- Histogram
- LastValue
- ValueObserver
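
Each aggregator is produced by a corresponding metric instrument. A rough
sketch, assuming the 0.x instrument API and the SDK's default
instrument-to-aggregator mapping at the time (instrument names are
illustrative):

.. code:: python

   # Counter values are aggregated into a Sum
   bytes_counter = meter.create_counter(
       name="bytes_sent", description="bytes sent", unit="1", value_type=int
   )

   # ValueRecorder values are aggregated into MinMaxSumCount
   size_recorder = meter.create_valuerecorder(
       name="request_size", description="size of requests", unit="1", value_type=int
   )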

Error Handling
--------------

In general, errors are raised to the calling function. The exception is
failed requests, where any error status code is logged as a warning
instead of being raised.

This is because the exporter does not implement any retry logic as it
sends cumulative metrics data. This means that data will be preserved
even if some exports fail.

For example, consider a situation where a user increments a Counter
instrument 5 times and an export happens between each increment. If the
exports happen like so:

::

   SUCCESS FAIL FAIL SUCCESS SUCCESS
     1      2    3     4       5

Then the received data will be:

::

   1 4 5

The end result is the same since the aggregations are cumulative.

Contributing
------------

This exporter's datapath is as follows:

|Exporter datapath| *Entities with ``*`` after their name are not actual
classes but rather logical groupings of functions within the exporter.*

If you would like to learn more about the exporter's structure and
design decisions, please see the design document below.

Design Doc
~~~~~~~~~~

`Design Document`_

This document is stored elsewhere as it contains large images which will
significantly increase the size of this repo.

.. _Design Document: https://github.com/open-o11y/docs/tree/master/python-prometheus-remote-write
.. |Exporter datapath| image:: https://user-images.githubusercontent.com/20804975/100285717-604c7280-2f3f-11eb-9b73-bdf70afce9dd.png
.. _OTLP: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/otlp.md
.. _push controller: https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
.. _timeseries: https://prometheus.io/docs/concepts/data_model/
.. _Summary: #opentelemetry-python-sdk-prometheus-remote-write-exporter
.. _Table of Contents: #table-of-contents
.. _Installation: #installation
.. _Quickstart: #quickstart
.. _Configuring the Exporter: #configuring-the-exporter
.. _Securing the Exporter: #securing-the-exporter
.. _Authentication: #authentication
.. _TLS: #tls
.. _Supported Aggregators: #supported-aggregators
.. _Error Handling: #error-handling
.. _Retry Logic: #retry-logic
.. _Contributing: #contributing
.. _Design Doc: #design-doc
.. |Prometheus SDK pipelines| image:: https://user-images.githubusercontent.com/20804975/100285430-e320fd80-2f3e-11eb-8217-a562c559153c.png
.. |controller_datapath_final| image:: https://user-images.githubusercontent.com/20804975/100486582-79d1f380-30d2-11eb-8d17-d3e58e5c34e9.png
.. _RFC 7617: https://tools.ietf.org/html/rfc7617
.. _RFC 6750: https://tools.ietf.org/html/rfc6750

8 changes: 8 additions & 0 deletions Dockerfile
@@ -0,0 +1,8 @@
FROM python:3.7
WORKDIR /code

COPY . .
# python-snappy requires the snappy C library
RUN apt-get update -y && apt-get install libsnappy-dev -y
# install the exporter from the local repository plus the example app's dependencies
RUN pip install -e .
RUN pip install -r ./examples/requirements.txt
CMD ["python", "./examples/sampleapp.py"]

42 changes: 42 additions & 0 deletions examples/README.md
@@ -0,0 +1,42 @@
# Prometheus Remote Write Exporter Example
This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:

1. A Python program that creates 5 instruments with 5 unique
   aggregators and a randomized load generator
2. An instance of [Cortex](https://cortexmetrics.io/) to receive the metrics
   data
3. An instance of [Grafana](https://grafana.com/) to visualize the exported
   data

## Requirements
* Have Docker Compose [installed](https://docs.docker.com/compose/install/)

*Users do not need to install Python as the app will be run in the Docker Container*

## Instructions
1. Run `docker-compose up -d` in the `examples/` directory

   The `-d` flag causes all services to run in detached mode and frees up your
   terminal session. It also means no logs will show up in the terminal; you can
   follow a service's logs manually with `docker logs ${CONTAINER_ID} --follow`

2. Log into the Grafana instance at [http://localhost:3000](http://localhost:3000)
   * Login credentials are `username: admin` and `password: admin`
   * You may be prompted to set a new password; this step is optional and can be skipped

3. Navigate to the `Data Sources` page
* Look for a gear icon on the left sidebar and select `Data Sources`

4. Add a new Prometheus Data Source
* Use `http://cortex:9009/api/prom` as the URL
   * (OPTIONAL) Set the scrape interval to `2s` to make updates appear quickly
   * Click `Save & Test`

5. Go to `Metrics Explore` to query metrics
* Look for a compass icon on the left sidebar
   * Click `Metrics` for a dropdown list of all the available metrics
   * (OPTIONAL) Adjust the time range by clicking the `Last 6 hours` button on the upper right side of the graph
   * (OPTIONAL) Set up auto-refresh by selecting an option under the dropdown next to the refresh button on the upper right side of the graph
   * Click the refresh button and data should show up on the graph

6. Shutdown the services when finished
   * Run `docker-compose down` in the `examples/` directory
