k8sattributes Returns information from Alloy, not from originating pod #1336
Comments
@jseiser How are you populating your substitution values, e.g. faro-${cluster_number}-${environment}.${base_domain}? Is this possible in Alloy .config files?
Terraform is doing it, so the files are interpolated by the time the helm command is run.
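For illustration, a minimal sketch of that pattern, assuming Terraform's `templatefile()` function and the helm provider's `helm_release` resource; the chart, variable, and file names here are placeholders, not the actual setup from this thread:

```hcl
# Render the values template before Helm ever sees it, so placeholders
# like ${base_domain} are plain strings by the time the chart is installed.
resource "helm_release" "alloy" {
  name       = "alloy"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "k8s-monitoring"
  namespace  = "monitoring"

  values = [
    templatefile("${path.module}/values.yaml.tpl", {
      cluster_number = var.cluster_number
      environment    = var.environment
      base_domain    = var.base_domain
    })
  ]
}
```

Inside `values.yaml.tpl`, a reference such as `faro-${cluster_number}-${environment}.${base_domain}` is resolved by `templatefile()`, so Alloy itself never has to interpolate anything.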
Is there any other information I can provide here? We have tried running Alloy as a Deployment and as a DaemonSet, with and without Alloy in the service mesh. We even hardcoded the OTel attributes and removed the k8s attributes, but you still end up with the traces from linkerd not being matched. We have not been able to find a working example of AWS EKS + Grafana Alloy. The issue also appears to extend to the upstream OpenTelemetry Collector itself: open-telemetry/opentelemetry-collector-contrib#29630 (comment)
This is still an issue on the latest stable release.
@jseiser Have you tried annotating the Alloy pods with linkerd's skip-inbound-ports setting for the OTLP ports? That was what got pod association working for us.
I ended up ripping out linkerd-tracing. I'll get it back into place and try your suggestion. We were hitting another issue where, if we removed Grafana Alloy from the mesh, linkerd would freak out and was unable to send any traces.
So I just ripped the tracing out altogether. I'll get it back in place and test. Can you confirm what your config roughly looks like?
We're not using EKS, we just happen to also be using linkerd. We're also using the k8s-monitoring-helm chart. Here's the relevant part of our values file:
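The values snippet itself did not survive here. As a rough sketch of the kind of override being described, assuming the `grafana/alloy` subchart's `controller.podAnnotations` key and the standard OTLP ports (4317 gRPC, 4318 HTTP); the later comments confirm the annotation, but the exact key layout and port numbers are assumptions:

```yaml
alloy:
  controller:
    podAnnotations:
      # Have linkerd skip its inbound proxy for the OTLP ports, so
      # connections reach Alloy with the originating pod's source IP.
      config.linkerd.io/skip-inbound-ports: "4317,4318"
```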
Ugh, turned everything back on, marked inbound skip, and linkerd just freaks out.
Any chance you do anything else, like marking those ports opaque or anything at the namespace level?
@jseiser That annotation was the only thing required for our configuration. The linkerd sidecar is running in the Alloy pods and joining the mesh. This allowed the pods to send telemetry via OTLP, but it was labeled with the Kubernetes metadata of the Alloy pod itself. The skip-inbound-ports setting lets the source send directly to the Alloy pod instead of going through its linkerd sidecar, so the connection arrives with the correct source IP and Alloy can look up the correct information. Hopefully this additional context helps.
Ya, I'm assuming there is a bug in linkerd at this point, since everything works except the linkerd-proxy itself sending traces to Alloy. If Alloy is fully meshed, it sends, but as you know you get the wrong information. If it's removed from the mesh, marked opaque, or marked to skip, it breaks. Thanks for at least confirming it should work if linkerd operates correctly.
Running into this as well. Really annoying to work around. I can't see the aforementioned
While we have not been able to make this work at all, I was able to start getting linkerd's traces to contain some proper information: linkerd/linkerd2#13427 (comment). They still do not associate with the other traces like they are supposed to, but I guess it's progress.
What's wrong?
When enabling k8sattributes on Grafana Alloy running in EKS, you end up getting information from Alloy, not from the originating pod. So you end up with worthless attributes. Note the log at the end is from an nginx ingress pod in the namespace nginx-ingress-internal, but all the attributes are for a Grafana Alloy pod. You can see the ip for the pod is correct in the trace below, but nothing else, e.g.
Steps to reproduce
System information
No response
Software version
v1.2.1
Configuration
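The original configuration block was not captured. For reference, a minimal sketch of the kind of pipeline the report describes; the component labels and the exporter endpoint below are placeholders, not the reporter's actual config:

```
otelcol.receiver.otlp "default" {
  grpc { }
  http { }

  output {
    traces = [otelcol.processor.k8sattributes.default.input]
  }
}

otelcol.processor.k8sattributes "default" {
  extract {
    metadata = ["k8s.namespace.name", "k8s.pod.name", "k8s.deployment.name"]
  }

  // Pods are matched by the source IP of the incoming connection, which is
  // why a mesh sidecar terminating the connection makes every lookup
  // resolve to the Alloy pod itself.
  pod_association {
    source {
      from = "connection"
    }
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "tempo.example.com:4317" // placeholder
  }
}
```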
Logs