B3 HTTP headers not propagated to ext_authz server in 1.10.0 #6520
Comments
As @enbohm mentions, this doesn't look like a tracing issue, as it is isolated to the ext_authz filter.
@objectiser I've tried with several different matcher patterns (exact, prefix, regex) without luck.
@enbohm Did you look at trace logs to see if the filter is receiving these headers?
@enbohm Are you able to determine from the full log for the request where the x-b3 headers go missing?
@objectiser From what I can see so far, they are not added to the request when Envoy calls the external auth server. If I revert to 1.9.0 I can see them in the request.
@enbohm Would it be possible to provide the full log for the request? I.e. not just for the ext_authz filter but the full chain from where the request arrives at the envoy proxy.
Please refer to the debug-level logs below, captured from when the request is initiated.
@enbohm @mattklein123 I think this is a consequence of #5661: all outbound requests now need to inject the trace context. So the problem is how this is handled for the ext_authz filter. Either the trace context is automatically injected into the outbound request to the external auth server, so there is no need for the user to copy the headers from the original request, or the trace context needs to be injected into the original request prior to the headers being copied. Looking back at the original issue #5504, it would still cause a problem if such a filter extension occurred before the health check, although I'm not sure a health check would be used in conjunction with filters such as ext_authz.
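To make the ordering concrete, here is a minimal sketch of such a chain (the uri, cluster and timeout values are illustrative, not taken from this issue). Since #5661, trace context injection happens in the router filter, which runs after ext_authz, so ext_authz copies the request headers before any x-b3-* values exist:

```yaml
http_filters:
- name: envoy.ext_authz        # runs first: copies request headers into the auth request,
  config:                      # but no x-b3-* headers have been injected at this point
    http_service:
      server_uri:
        uri: http://authz:8000
        cluster: authz
        timeout: 0.25s
- name: envoy.router           # since #5661, trace context is injected here - too late for ext_authz
```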
ok - thanks for the clarification! But is this a bug, or does it work as expected? If it is not an issue, how can I inject these headers/trace context into the outbound request?
Interesting. Great find. cc @cgilmour. I don't have time to dig into this for a while. @objectiser do you potentially have time to analyze and make a recommendation?
@enbohm It's a bug unfortunately. @mattklein123 Will try to find some time. Do you have a feel for how the health check vs filters like ext_authz are likely to be used? Can we assume that a health check filter will not be used with ext_authz (and similar), or that the health check will always be first in the chain? A generic solution, which would fix the original problem and not depend on filter order, would be to check upfront with each filter whether it wants to disable tracing based on the request. Based on this decision, the trace context could be injected in its original location. This may have a slight performance impact, although given that most filters are not concerned with sampling (so it's a no-op), it might not be too bad. In fact, the decision in the case of the health check filter is not dependent upon the request, so as part of initializing the filter chain we could determine whether any filters would disable tracing and just cache that decision with the chain. That way there would be no per-request performance impact.
I think this is also the culprit for the problem we are seeing in Ambassador. We plumbed tracing for the ext_authz HTTP client and it was working fine. After the latest upgrade, traces started to appear as islands instead of being part of the same request. See emissary-ingress/emissary#1414
Hi @objectiser, just want to chime in with some thoughts. The original reason for fixing a problem here was a combination of one filter (healthcheck) interacting badly with one tracer (datadog). Because of some planned changes, the datadog tracer might have to be less strict about changes to the sampling state after the headers have been injected. I wouldn't like the idea of a fairly widespread change to all filter extensions if the only justification is the way the original problem was solved. If you generally think the behavior it has now is more correct, that's great - and the idea to check with filters before finalizing a sampling decision sounds reasonable.
Hi @cgilmour In general I think the sampling decision should be made upfront, as modifying it after the span has already been created relies on the tracer impl being able to handle that. However, as you mention, currently this only affects datadog, so if you have an alternative minimal change, or the modification to datadog's approach is likely to happen soon, then we should explore those first.
I've been looking into the code around all of this. One solution for the problem in this issue is to have the same behavior for both the grpc and http clients here, where a child span is spawned off that parent span and injected into the outbound request to the auth server. This wouldn't have a big consequence for the datadog tracer; it'd just be preferable to have the healthcheck filter occur first. If this is OK, I can work on a fix ASAP.
@cgilmour Your proposed solution sounds good to me.
Sorry, don't know.
Sorry for the delay in catching up here. In general, I think moving injection to the router filter is still the correct change, since that is the point at which we actually forward the headers. I think the real issue is the one pointed out by @cgilmour, which is that there is a bug in the ext_authz code. We should be creating a new child span there and injecting it into the outbound headers IMO. This is what is done for other outbound calls such as ratelimit. @objectiser @gsagula WDYT?
@mattklein123 I haven't had a chance to dive deeper into this one yet, but we had it working for the gRPC client prior to 1.10. Tracing has never been supported by the HTTP client, but I had it implemented and working in our fork. As I said, I need to look closer to see where the bug is. If you have any idea, please let me know.
@gsagula see #6520 (comment) above. That's the issue IMO.
Sorry, I totally missed this comment. I have it implemented already. Just need to move it upstream.
Is there an ETA for the 1.11.0 release that contains the fix for this?
@enbohm @objectiser @cgilmour Can you clarify which versions of envoy http_service.authorization_request.allowed_headers works on? I understand from the conversation that there is no issue with grpc_service on any version of envoy, and no issue with http_service on older versions of envoy. Is that right?
Thank you for your reply. I am using Istio/istio:1.0.3 --> envoyproxy/go-control-plane:0.6.0 --> envoyproxy/envoy:1.8.0 and I am facing the same issue. I am not able to receive the allowed_headers mentioned in http_service at the external auth service; however, some basic headers like x-forwarded-for, Host and Content-Length are being passed on to the auth service. Can someone confirm whether this is an issue?
This PR adds tracing support to the HTTP client in the external authorization filter. So far, only the gRPC client was able to trace requests from the filter to an authorization service. Risk Level: Low Testing: yes Docs Changes: N/A Release Notes: Added Fixes #6520 Signed-off-by: Gabriel Linden Sagula <[email protected]>
@gsagula Today I updated from 1.9.0 to 1.12.0 but still couldn't get the external auth service to pick up the B3 headers. Using Envoy 1.9.0 I get these headers to my external auth service:
But after upgrading to 1.12.0 only these are present:
which makes it difficult for the external auth service to be part of the whole trace. Do I have to make any specific configuration change to make it work? Maybe you have an example config to share?
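For reference, these are the standard B3 propagation headers a Zipkin-style tracer looks for (the values below are made up, just to show the shape):

```text
x-b3-traceid: 463ac35c9f6413ad48485a3953bb6124
x-b3-spanid: a2fb4a1d1a96d312
x-b3-parentspanid: 0020000000000001
x-b3-sampled: 1
```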
@kumarsparkz Have you tried with the latest version of envoy? Does it work with your istio setup, or is it still an issue?
...or @objectiser, have you had a chance to test this one? Can you see the B3 headers propagated to an external HTTP auth service?
@mattklein123 I'm not sure this issue has been resolved (i.e. the external HTTP auth service still does not get the required tracing headers the way it did in version 1.9.0). Should it be re-opened, or shall I create a new issue?
I tried to modify the config of the jaeger-tracing example as follows:

```diff
diff --git a/examples/jaeger-tracing/docker-compose.yaml b/examples/jaeger-tracing/docker-compose.yaml
index 6c353fada..efd9ec85b 100644
--- a/examples/jaeger-tracing/docker-compose.yaml
+++ b/examples/jaeger-tracing/docker-compose.yaml
@@ -61,5 +61,14 @@ services:
       - "9411:9411"
       - "16686:16686"
 
+  authz:
+    image: dio123/dump:1.0.0
+    networks:
+      envoymesh:
+        aliases:
+        - authz
+    expose:
+    - "8000"
+
 networks:
   envoymesh: {}
diff --git a/examples/jaeger-tracing/service1-envoy-jaeger.yaml b/examples/jaeger-tracing/service1-envoy-jaeger.yaml
index 7f06fe2da..f401d1422 100644
--- a/examples/jaeger-tracing/service1-envoy-jaeger.yaml
+++ b/examples/jaeger-tracing/service1-envoy-jaeger.yaml
@@ -56,6 +56,17 @@ static_resources:
           decorator:
             operation: checkStock
           http_filters:
+          - name: envoy.ext_authz
+            config:
+              http_service:
+                server_uri:
+                  uri: http://authz:8080
+                  cluster: authz
+                  timeout: 2s
+                authorization_request:
+                  allowed_headers:
+                    patterns:
+                    - prefix: "x-b3"
           - name: envoy.router
             typed_config: {}
   clusters:
@@ -86,6 +97,19 @@ static_resources:
                 socket_address:
                   address: service2
                   port_value: 80
+  - name: authz
+    connect_timeout: 1s
+    type: strict_dns
+    lb_policy: round_robin
+    load_assignment:
+      cluster_name: authz
+      endpoints:
+      - lb_endpoints:
+        - endpoint:
+            address:
+              socket_address:
+                address: authz
+                port_value: 8000
   - name: jaeger
     connect_timeout: 1s
     type: strict_dns
```
@dio Thanks for sharing. I have the exact same configuration but do not get these headers to propagate in v1.12.0. Will continue to investigate and debug during the day; will get back shortly!
...and this is my whole Envoy v1.12.0 config. Will try to create an example project where I dump the headers received and see if anyone can understand why they are not propagated. What is a bit interesting is that I seem to get headers that should be filtered out by the allowed_headers section. For instance, if I add a custom header in the request, e.g. 'MyCustomHeader', it is propagated to the external auth service, which is wrong since I explicitly only allow headers with prefix 'x-b3' - strange.
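For completeness, an exact-match variant of the allowed_headers block, which ought to behave like the prefix form (header names per the B3 spec; a sketch only):

```yaml
authorization_request:
  allowed_headers:
    patterns:
    - exact: x-b3-traceid
    - exact: x-b3-spanid
    - exact: x-b3-parentspanid
    - exact: x-b3-sampled
```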
I've now created an example with a frontend envoy proxy (v1.12.0), one auth service and one simple backend service (see https://github.com/enbohm/envoy-tracing) which shows that the B3 headers are not propagated to the external auth service. I can see that Envoy is part of the trace along with the simple backend service (see screenshot in the example repo). BUT no B3 headers are propagated to the external auth service, so the OpenTracing lib can't pick them up, and hence the auth service shows up as a separate trace in the Jaeger UI. Please feel free to adjust this example if anyone can get it to work (I'd be more than happy :)). NOTE! This works like a charm with Envoy v1.9.0 but not with Envoy v1.10.0 and above.
I've tried confirming that this is now working, but I can't even get Envoy to start (using envoyproxy/envoy:latest). I keep getting these logs
I have also tried using this image, which starts as expected, but unfortunately no headers are present in the external auth service.
Try with this image:
@cgilmour it works - victory :) Thank you all for a great job!
B3 HTTP headers are not propagated to ext_authz server
Description:
Upgrading from Envoy 1.9.0 to 1.10.0 stops Zipkin trace context (B3 headers) from being propagated to an external HTTP authorization server, which is hence no longer visible in a distributed trace.
Using Envoy 1.9.0, the following config propagates B3 trace context (x-b3-traceid and x-b3-spanid headers) to an external auth server so it can join a distributed trace:
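A minimal sketch of such a 1.9.0-style setup, assuming the pre-1.10 http_service API with its plain allowed_request_headers string list (the uri, cluster, timeout and exact header list are illustrative):

```yaml
http_filters:
- name: envoy.ext_authz
  config:
    http_service:
      server_uri:
        uri: http://authz:8000
        cluster: authz
        timeout: 0.25s
      allowed_request_headers:    # pre-1.10: a plain list of header names to forward
      - x-b3-traceid
      - x-b3-spanid
      - x-b3-sampled
```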
With Envoy 1.10.0 this configuration has changed (see https://www.envoyproxy.io/docs/envoy/v1.10.0/intro/version_history), and the corresponding config looks like:
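Roughly, the same intent expressed with the 1.10.0 API, where the string list is replaced by authorization_request.allowed_headers taking a ListStringMatcher (uri/cluster/timeout values again illustrative):

```yaml
http_filters:
- name: envoy.ext_authz
  config:
    http_service:
      server_uri:
        uri: http://authz:8000
        cluster: authz
        timeout: 0.25s
      authorization_request:
        allowed_headers:           # 1.10+: a ListStringMatcher instead of a string list
          patterns:
          - prefix: x-b3
```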
However, in 1.10.0 none of the x-b3-* keys are added to the HTTP headers, and hence the external authz server can't be part of the trace. I've tried several patterns (exact, prefix, regex) to see if there are any issues with the StringMatcher, but without any luck :(
Note that it is only the external auth server that doesn't get the HTTP headers; I can see the trace initiated by Envoy in my tracing UI (Jaeger), just without the external auth server, so parts of the trace are working as expected.