Add instructions to send traces in SPM Dev Env #3996

Merged · 8 commits · Oct 31, 2022
1 change: 1 addition & 0 deletions docker-compose/monitor/.gitignore
@@ -0,0 +1 @@
.venv
58 changes: 51 additions & 7 deletions docker-compose/monitor/README.md
@@ -1,8 +1,15 @@
# Service Performance Monitoring (SPM) Development/Demo Environment

Service Performance Monitoring (SPM) is an opt-in feature introduced to Jaeger that provides Request, Error and Duration
(RED) metrics grouped by service name and operation that are derived from span data. These metrics are programmatically
available through an API exposed by jaeger-query along with a "Monitor" UI tab that visualizes these metrics as graphs.

For more details on this feature, please refer to the [tracking Issue](https://github.com/jaegertracing/jaeger/issues/2954)
documenting the proposal and status.

The motivation for providing this environment is to allow developers to either test Jaeger UI or their own applications
against jaeger-query's metrics query API, as well as a quick and simple way for users to bring up the entire stack
required to visualize RED metrics from simulated traces or from their own application.

This environment consists of the following backend components:

docker rmi -f prom/prometheus:latest
docker rmi -f grafana/grafana:latest
```

## Sending traces

It is possible to send traces to this SPM Development Environment from your own application and view their RED metrics.

For the purposes of this example, the OpenTelemetry Collector in the [docker-compose.yml](./docker-compose.yml) file
has been configured with an OTLP receiver that listens on port `4317` for traces sent directly from applications to the
collector over gRPC.

An example Python script is provided to demonstrate sending individual traces to the OpenTelemetry Collector running in
this SPM Development Environment.

### Setup

Run the following commands to set up the Python virtual environment and install the OpenTelemetry SDK:

```shell
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Then run this example a number of times to generate some traces:

```shell
./otlp_exporter_example.py
```

Navigate to Jaeger UI at http://localhost:16686/ and you should be able to see traces from this demo application
under the `my_service` service:

![My Service Traces](images/my_service_traces.png)

Then navigate to the Monitor tab at http://localhost:16686/monitor to view the RED metrics:

![My Service RED Metrics](images/my_service_metrics.png)

## Querying the HTTP API

### Example 1
Fetch call rates for both the driver and frontend services, grouped by operation, from now,
looking back 1 second with a sliding rate-calculation window of 1m and step size of 1 millisecond

```bash
curl "http://localhost:16686/api/metrics/calls?service=driver&service=frontend&groupByOperation=true&endTs=$(date +%s)000&lookback=1000&step=100&ratePer=60000" | jq .
```
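If you are scripting against this API rather than using curl, the same query string can be assembled programmatically. A minimal sketch — the `calls_url` helper is ours for illustration, not part of Jaeger:

```python
from urllib.parse import urlencode

def calls_url(services, end_ts_ms, lookback=1000, step=100, rate_per=60000,
              base="http://localhost:16686"):
    """Build the jaeger-query call-rate URL; `service` is a repeated parameter."""
    params = [("service", s) for s in services] + [
        ("groupByOperation", "true"),
        ("endTs", end_ts_ms),   # end of the query window, epoch milliseconds
        ("lookback", lookback), # how far back from endTs, in milliseconds
        ("step", step),
        ("ratePer", rate_per),  # sliding rate-calculation window, in milliseconds
    ]
    return f"{base}/api/metrics/calls?{urlencode(params)}"

print(calls_url(["driver", "frontend"], 1667174400000))
```

Fetch the resulting URL with any HTTP client (e.g. `urllib.request.urlopen`) while the environment above is running.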


### Example 2
Fetch P95 latencies for both the driver and frontend services from now,
looking back 1 second with a sliding rate-calculation window of 1m and step size of 1 millisecond, where the span kind is either "server" or "client".

```bash
curl "http://localhost:16686/api/metrics/latencies?service=driver&service=frontend&quantile=0.95&endTs=$(date +%s)000&lookback=1000&step=100&ratePer=60000&spanKind=server&spanKind=client" | jq .
```

### Example 3
Fetch error rates for both driver and frontend services using default parameters.
```bash
curl "http://localhost:16686/api/metrics/errors?service=driver&service=frontend" | jq .
```

### Example 4
Fetch the minimum step size supported by the underlying metrics store.
```bash
curl "http://localhost:16686/api/metrics/minstep" | jq .
```

# HTTP API Specification
> **Contributor (author):** I considered defining this as a swagger spec, but given its only "user" is Jaeger UI, I don't think it's worth the effort.


## Query Metrics

6 changes: 1 addition & 5 deletions docker-compose/monitor/docker-compose.yml
@@ -11,11 +11,7 @@ services:
- METRICS_STORAGE_TYPE=prometheus
- PROMETHEUS_SERVER_URL=http://prometheus:9090
ports:
- "14250:14250"
- "14268:14268"
- "6831:6831/udp"
- "16686:16686"
- "16685:16685"
> **Member:** Isn't one of these ports still needed to receive data from OTEL collector?
>
> **Contributor (author):** No, all ports are accessible between containers. Only the ports that need to be exposed to the "outside world" should be listed under `ports`.
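The point made in this exchange can be illustrated with a minimal, hypothetical compose file (not this repo's): containers on a shared network can reach each other on any port, so `ports:` is only needed for traffic from the host.

```yaml
services:
  a:
    image: alpine
    networks: [backend]
  b:
    image: alpine
    networks: [backend]
    # No `ports:` entry is needed for a -> b traffic inside the network;
    # `ports:` only publishes a container port to the host.
networks:
  backend:
```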

otel_collector:
networks:
- backend
@@ -24,7 +20,7 @@
- "./otel-collector-config.yml:/etc/otelcol/otel-collector-config.yml"
command: --config /etc/otelcol/otel-collector-config.yml
ports:
- "4317:4317"
depends_on:
- jaeger
microsim:
(Two binary image files — the screenshots referenced in the README — cannot be displayed in the diff view.)
7 changes: 6 additions & 1 deletion docker-compose/monitor/otel-collector-config.yml
@@ -4,6 +4,11 @@ receivers:
thrift_http:
endpoint: "0.0.0.0:14278"

otlp:
protocols:
grpc:
http:

# Dummy receiver that's never used, because a pipeline is required to have one.
otlp/spanmetrics:
protocols:
@@ -27,7 +32,7 @@
service:
pipelines:
traces:
receivers: [otlp, jaeger]
processors: [spanmetrics, batch]
exporters: [jaeger]
# The exporter name in this pipeline must match the spanmetrics.metrics_exporter name.
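Pieced together from the fragments above, the trace-related parts of the collector configuration now look roughly like this (a sketch, not the complete file):

```yaml
receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: "0.0.0.0:14278"
  otlp:
    protocols:
      grpc:
      http:

service:
  pipelines:
    traces:
      receivers: [otlp, jaeger]
      processors: [spanmetrics, batch]
      exporters: [jaeger]
```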
28 changes: 28 additions & 0 deletions docker-compose/monitor/otlp_exporter_example.py
@@ -0,0 +1,28 @@
#!/usr/bin/env python3

# Emit a few nested spans to the OpenTelemetry Collector over OTLP/gRPC.
from opentelemetry import trace
from opentelemetry.trace import SpanKind
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify this application to Jaeger as "my_service".
resource = Resource(attributes={
    "service.name": "my_service"
})

trace.set_tracer_provider(TracerProvider(resource=resource))

# Export spans to the collector's OTLP/gRPC endpoint published on localhost:4317.
otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)

# Batch spans before exporting to reduce the number of outbound requests.
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(otlp_exporter)
)

tracer = trace.get_tracer(__name__)

# Three nested SERVER spans; the spanmetrics processor derives RED metrics from them.
with tracer.start_as_current_span("foo", kind=SpanKind.SERVER):
    with tracer.start_as_current_span("bar", kind=SpanKind.SERVER):
        with tracer.start_as_current_span("baz", kind=SpanKind.SERVER):
            print("Hello world from OpenTelemetry Python!")
21 changes: 21 additions & 0 deletions docker-compose/monitor/requirements.txt
@@ -0,0 +1,21 @@
backoff==2.2.1
certifi==2022.9.24
charset-normalizer==2.1.1
Deprecated==1.2.13
googleapis-common-protos==1.56.2
grpcio==1.50.0
idna==3.4
opentelemetry-api==1.13.0
opentelemetry-exporter-otlp==1.13.0
opentelemetry-exporter-otlp-proto-grpc==1.13.0
opentelemetry-exporter-otlp-proto-http==1.13.0
opentelemetry-proto==1.13.0
opentelemetry-sdk==1.13.0
opentelemetry-semantic-conventions==0.34b0
protobuf==3.20.3
requests==2.28.1
six==1.16.0
thrift==0.16.0
typing_extensions==4.4.0
urllib3==1.26.12
wrapt==1.14.1