
docs: Replace Remedy with Solution (#1623)
NHingerl authored Nov 21, 2024
1 parent 1b19a1e commit 3576e63
Showing 3 changed files with 18 additions and 17 deletions.
6 changes: 3 additions & 3 deletions docs/user/02-logs.md
@@ -492,7 +492,7 @@ You cannot enable the following plugins, because they potentially harm the stabi

**Cause**: Incorrect backend endpoint configuration (for example, using the wrong authentication credentials) or the backend being unreachable.

**Remedy**:
**Solution**:

- Check the `telemetry-fluent-bit` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`.
- Check if the backend is up and reachable.
@@ -506,7 +506,7 @@ You cannot enable the following plugins, because they potentially harm the stabi

**Cause**: This can happen for a variety of reasons. For example, the backend may be limiting the ingestion rate, or it may be refusing logs because they are too large.

**Remedy**:
**Solution**:

1. Check the `telemetry-fluent-bit` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`. Also, check your observability backend to investigate potential causes.
2. If the backend is limiting the rate by refusing logs, try the options described in [Agent Buffer Filling Up](#agent-buffer-filling-up).
@@ -518,7 +518,7 @@ You cannot enable the following plugins, because they potentially harm the stabi

**Cause**: The backend export rate is too low compared to the log collection rate.

**Remedy**:
**Solution**:

- Option 1: Increase the maximum backend ingestion rate, for example, by scaling out the SAP Cloud Logging instances.

12 changes: 6 additions & 6 deletions docs/user/03-traces.md
@@ -443,7 +443,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: Incorrect backend endpoint configuration (such as using the wrong authentication credentials), or the backend is unreachable.

**Remedy**:
**Solution**:

1. Check the `telemetry-trace-gateway` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`.
2. Check if the backend is up and reachable.
@@ -458,7 +458,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: This can happen for a variety of reasons - for example, the backend is limiting the ingestion rate.

**Remedy**:
**Solution**:

1. Check the `telemetry-trace-gateway` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`. Also, check your observability backend to investigate potential causes.
2. If the backend is limiting the rate by refusing spans, try the options described in [Gateway Buffer Filling Up](#gateway-buffer-filling-up).
@@ -468,7 +468,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: Your SDK version is incompatible with the OTel Collector version.

**Remedy**:
**Solution**:

1. Check which SDK version you are using for instrumentation.
2. Investigate whether it is compatible with the OTel Collector version.
@@ -478,7 +478,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: By [default](#istio), only 1% of the requests are sent to the trace backend for trace recording.

**Remedy**:
**Solution**:

To see more traces in the trace backend, increase the percentage of requests by changing the default settings.
If you just want to see traces for one particular request, you can manually force sampling:
@@ -508,7 +508,7 @@ If you just want to see traces for one particular request, you can manually forc

**Cause**: The backend export rate is too low compared to the gateway ingestion rate.

**Remedy**:
**Solution**:

- Option 1: Increase the maximum backend ingestion rate - for example, by scaling out the SAP Cloud Logging instances.
- Option 2: Reduce the emitted spans in your applications.
@@ -522,4 +522,4 @@ If you just want to see traces for one particular request, you can manually forc

**Cause**: Gateway cannot receive spans at the given rate.

**Remedy**: Manually scale out the gateway by increasing the number of replicas for the trace gateway. See [Module Configuration and Status](https://kyma-project.io/#/telemetry-manager/user/01-manager?id=module-configuration).
**Solution**: Manually scale out the gateway by increasing the number of replicas for the trace gateway. See [Module Configuration and Status](https://kyma-project.io/#/telemetry-manager/user/01-manager?id=module-configuration).
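
For orientation only, a hedged sketch of what such a static scaling setting might look like in the Telemetry resource; the `scaling` field names are an assumption here, so verify them against the linked Module Configuration and Status page:

```yaml
# Sketch only: statically scale out the trace gateway.
# The scaling field names are assumed; verify against Module Configuration and Status.
apiVersion: operator.kyma-project.io/v1alpha1
kind: Telemetry
metadata:
  name: default
  namespace: kyma-system
spec:
  trace:
    gateway:
      scaling:
        type: Static
        static:
          replicas: 3  # raise until the throttling condition clears
```

Under the same assumption, a corresponding block under `spec.metric.gateway` would apply to the metric gateway.
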
17 changes: 9 additions & 8 deletions docs/user/04-metrics.md
@@ -714,7 +714,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: Incorrect backend endpoint configuration (such as using the wrong authentication credentials) or the backend is unreachable.

**Remedy**:
**Solution**:

1. Check the `telemetry-metric-gateway` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`.
2. Check if the backend is up and reachable.
@@ -729,7 +729,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: This can happen for a variety of reasons - for example, the backend is limiting the ingestion rate.

**Remedy**:
**Solution**:

1. Check the `telemetry-metric-gateway` Pods for error logs by calling `kubectl logs -n kyma-system {POD_NAME}`. Also, check your observability backend to investigate potential causes.
2. If the backend is limiting the rate by refusing metrics, try the options described in [Gateway Buffer Filling Up](#gateway-buffer-filling-up).
@@ -741,7 +741,7 @@ To detect and fix such situations, check the pipeline status and check out [Trou

**Cause**: Your SDK version is incompatible with the OTel Collector version.

**Remedy**:
**Solution**:

1. Check which SDK version you are using for instrumentation.
2. Investigate whether it is compatible with the OTel Collector version.
@@ -757,20 +757,21 @@ To detect and fix such situations, check the pipeline status and check out [Trou
<!-- markdown-link-check-disable-next-line -->
**Cause 1**: The workload is not configured to use 'STRICT' mTLS mode. For details, see [Activate Prometheus-based metrics](#_4-activate-prometheus-based-metrics).

**Remedy 1**: You can either set up 'STRICT' mTLS mode or HTTP scraping:
**Solution 1**: You can either set up 'STRICT' mTLS mode or HTTP scraping:

- Configure the workload using 'STRICT' mTLS mode (for example, by applying a corresponding PeerAuthentication; see the sketch after this list).
- Set up scraping through HTTP by applying the `prometheus.io/scheme=http` annotation.
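
As an illustration of the first option, a minimal PeerAuthentication that enforces 'STRICT' mTLS for a single workload could look like the following sketch; the namespace and the `app` label are hypothetical placeholders:

```yaml
# Illustrative sketch: enforce STRICT mTLS for one workload.
# The namespace and the app label are hypothetical placeholders.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: workload-strict-mtls
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-workload
  mtls:
    mode: STRICT
```
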
<!-- markdown-link-check-disable-next-line -->
**Cause 2**: The Service definition enabling the scrape with Prometheus annotations does not reveal the application protocol to use in the port definition. For details, see [Activate Prometheus-based metrics](#_4-activate-prometheus-based-metrics).

**Remedy 2**: Define the application protocol in the Service port definition, either by prefixing the port name with the protocol, as in `http-metrics`, or by defining the `appProtocol` attribute.
**Solution 2**: Define the application protocol in the Service port definition, either by prefixing the port name with the protocol, as in `http-metrics`, or by defining the `appProtocol` attribute.
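
For illustration, a Service port section that makes the protocol explicit could look like the following sketch; the Service name, selector, and port number are hypothetical placeholders:

```yaml
# Illustrative sketch: expose a metrics port with an explicit application protocol.
# The Service name, selector, and port number are hypothetical placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-workload-metrics
  namespace: my-namespace
spec:
  selector:
    app: my-workload
  ports:
    - name: http-metrics   # option A: protocol-prefixed port name
      appProtocol: http    # option B: explicit appProtocol; either one is sufficient
      port: 2112
      targetPort: 2112
```
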

**Cause 3**: A deny-all `NetworkPolicy` was created in the workload namespace, which prevents the agent from scraping metrics from annotated workloads.

**Remedy 3**: Create a separate `NetworkPolicy` to explicitly let the agent scrape your workload using the `telemetry.kyma-project.io/metric-scrape` label.
**Solution 3**: Create a separate `NetworkPolicy` to explicitly let the agent scrape your workload using the `telemetry.kyma-project.io/metric-scrape` label.

For example, see the following `NetworkPolicy` configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@@ -798,7 +799,7 @@ spec:

**Cause**: The backend export rate is too low compared to the gateway ingestion rate.

**Remedy**:
**Solution**:

- Option 1: Increase the maximum backend ingestion rate, for example, by scaling out the SAP Cloud Logging instances.

@@ -812,4 +813,4 @@

**Cause**: Gateway cannot receive metrics at the given rate.

**Remedy**: Manually scale out the gateway by increasing the number of replicas for the metric gateway. See [Module Configuration and Status](https://kyma-project.io/#/telemetry-manager/user/01-manager?id=module-configuration).
**Solution**: Manually scale out the gateway by increasing the number of replicas for the metric gateway. See [Module Configuration and Status](https://kyma-project.io/#/telemetry-manager/user/01-manager?id=module-configuration).
