
Add scrape configs endpoint #1124

Merged
Reuse and docs
jaronoff97 committed Oct 3, 2022
commit 1e44ab8ade41ecb923e7bad03a22108133581316
35 changes: 35 additions & 0 deletions cmd/otel-allocator/README.md
@@ -12,7 +12,42 @@ This configuration will be resolved to target configurations and then split acro
TargetAllocators expose the results as [HTTP_SD endpoints](https://prometheus.io/docs/prometheus/latest/http_sd/)
split by collector.

Currently, the Target Allocator handles the sharding of targets. The operator sets the `$SHARD` variable to 0 so that
collectors keep the targets generated by a Prometheus CRD. Combining Prometheus sharding with Target Allocator sharding
is currently not recommended and may lead to unexpected results.
[See this thread for more information](https://github.com/open-telemetry/opentelemetry-operator/pull/1124#discussion_r984683577)
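For illustration, the injected variable amounts to a plain environment entry on the collector container (a sketch of the behavior described above, not the operator's exact generated manifest):

```yaml
# Sketch: with SHARD=0, sharding relabel rules generated from a Prometheus CRD
# resolve to shard 0 on every collector, so no targets are dropped.
env:
  - name: SHARD
    value: "0"
```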

#### Endpoints
`/scrape_configs`:

```json
{
  "job1": {
    "follow_redirects": true,
    "honor_timestamps": true,
    "job_name": "job1",
    "metric_relabel_configs": [],
    "metrics_path": "/metrics",
    "scheme": "http",
    "scrape_interval": "1m",
    "scrape_timeout": "10s",
    "static_configs": []
  },
  "job2": {
    "follow_redirects": true,
    "honor_timestamps": true,
    "job_name": "job2",
    "metric_relabel_configs": [],
    "metrics_path": "/metrics",
    "relabel_configs": [],
    "scheme": "http",
    "scrape_interval": "1m",
    "scrape_timeout": "10s",
    "kubernetes_sd_configs": []
  }
}
```
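A client can decode this response as a map keyed by job name. The sketch below is a hypothetical standalone consumer (the `scrapeJob` struct and `parseScrapeConfigs` helper are illustrative, covering only a subset of the fields shown above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scrapeJob mirrors a subset of the fields returned per job
// by the /scrape_configs endpoint.
type scrapeJob struct {
	JobName        string `json:"job_name"`
	MetricsPath    string `json:"metrics_path"`
	ScrapeInterval string `json:"scrape_interval"`
}

// parseScrapeConfigs decodes the job-name -> scrape-config map.
func parseScrapeConfigs(body []byte) (map[string]scrapeJob, error) {
	var jobs map[string]scrapeJob
	if err := json.Unmarshal(body, &jobs); err != nil {
		return nil, err
	}
	return jobs, nil
}

func main() {
	sample := []byte(`{"job1":{"job_name":"job1","metrics_path":"/metrics","scrape_interval":"1m"}}`)
	jobs, err := parseScrapeConfigs(sample)
	if err != nil {
		panic(err)
	}
	for name, job := range jobs {
		fmt.Printf("%s scrapes %s every %s\n", name, job.MetricsPath, job.ScrapeInterval)
	}
	// prints: job1 scrapes /metrics every 1m
}
```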

`/jobs`:

15 changes: 6 additions & 9 deletions cmd/otel-allocator/main.go
```diff
@@ -214,17 +214,17 @@ func (s *server) ScrapeConfigsHandler(w http.ResponseWriter, r *http.Request) {
 	configs := s.discoveryManager.GetScrapeConfigs()
 	configBytes, err := yaml2.Marshal(configs)
 	if err != nil {
-		s.errorHandler(err, w)
+		s.errorHandler(w, err)
 	}
 	jsonConfig, err := yaml.YAMLToJSON(configBytes)
 	if err != nil {
-		s.errorHandler(err, w)
+		s.errorHandler(w, err)
 	}
 	// We don't use the jsonHandler method because we don't want our bytes to be re-encoded
 	w.Header().Set("Content-Type", "application/json")
 	_, err = w.Write(jsonConfig)
 	if err != nil {
-		s.errorHandler(err, w)
+		s.errorHandler(w, err)
 	}
 }
```

```diff
@@ -257,7 +257,7 @@ func (s *server) TargetsHandler(w http.ResponseWriter, r *http.Request) {
 	params := mux.Vars(r)
 	jobId, err := url.QueryUnescape(params["job_id"])
 	if err != nil {
-		s.errorHandler(err, w)
+		s.errorHandler(w, err)
 		return
 	}
```

```diff
@@ -276,12 +276,9 @@ func (s *server) TargetsHandler(w http.ResponseWriter, r *http.Request) {
 	}
 }

-func (s *server) errorHandler(err error, w http.ResponseWriter) {
+func (s *server) errorHandler(w http.ResponseWriter, err error) {
 	w.WriteHeader(500)
-	jsonErr := json.NewEncoder(w).Encode(err)
-	if jsonErr != nil {
-		s.logger.Error(jsonErr, "failed to encode error message")
-	}
+	s.jsonHandler(w, err)
 }

 func (s *server) jsonHandler(w http.ResponseWriter, data interface{}) {
```