I noticed some inconsistent scraping behavior, even after the fixes from #1038 and after updating my scrape config to remove the duplicate scrapes. Sometimes the target would be scraped multiple times within a single scrape interval, or not at all.
In the setup I am monitoring, the Prometheus exporter runs as a sidecar, and Prometheus discovery creates multiple targets with the same address but different LabelSets because of the logic that additionally infers targets from the underlying pods. When I compared the output of /jobs/{jobID}/targets?collector_id={collectorID} with the results from allocation, there were cases where targets sharing the same address would overwrite each other's labels in GetAllTargetsByCollectorAndJob. The output would then contain multiple entries with the same target address and labels, and whether the collector was able to scrape the target depended on whether the correct LabelSet was the last one to overwrite the labelSet map value.
This is because the labelSet map only considers the TargetURL for uniqueness:
This is a great find! The only problem I can imagine arising is how you would set the TargetURL in the resulting JSON object if you remove it as the key. This could probably be solved by changing how the targetGroupJSON is created.
Thanks! While testing locally, I used a hardcoded way of getting the target address for the targetGroupJSON by pulling labelSet[v][model.AddressLabel]. For a more formal solution, we could possibly update the group map to hold a struct that contains the TargetURL and hash? Open to other suggestions, though.
We could maybe just use the targetItem object itself... @kristinapathak and I were talking earlier about how the current patterns don't lend themselves well to returning this HTTP data, and I wonder if there's a way we can use the existing (or a new) structure more effectively.
opentelemetry-operator/cmd/otel-allocator/allocation/http.go, lines 54 to 59 in b708157
I would like to propose a change where the maps use targetItem.hash() instead of targetItem.TargetURL, as hash() accounts for the labels.