
B3 HTTP headers not propagated to ext_authz server in 1.10.0 #6520

Closed
enbohm opened this issue Apr 9, 2019 · 37 comments · Fixed by #8142 or #8961

Comments

@enbohm

enbohm commented Apr 9, 2019

B3 HTTP headers are not propagated to ext_authz server

Description:
After upgrading from Envoy 1.9.0 to 1.10.0, the Zipkin trace context (B3 headers) is no longer propagated to an external HTTP authorization server, so the authorization call is not visible in the distributed trace.

Using Envoy 1.9.0 the following config propagates B3 trace context (x-b3-traceid and x-b3-spanid headers) to an external auth server so it can join a distributed trace.

- name: envoy.ext_authz
  config:
    http_service:
      server_uri:
        uri: http://authorization-service:8080
        cluster: ext-authz
        timeout: 2s
      allowed_request_headers: ["x-b3-traceid", "x-b3-spanid"]

With Envoy 1.10.0 this configuration has changed (see https://www.envoyproxy.io/docs/envoy/v1.10.0/intro/version_history) and the corresponding config looks like:

- name: envoy.ext_authz
  config:
    http_service:
      server_uri:
        uri: http://authorization-service:8080
        cluster: ext-authz
        timeout: 2s
      authorization_request:
        allowed_headers:
          patterns:
          - prefix: "x-b3"

However, in 1.10.0 none of the x-b3-* headers are forwarded, so the external authz server can't be part of the trace. I've tried several patterns (exact, prefix, regex) to rule out issues with the StringMatcher, but without any luck :(
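For reference, the StringMatcher `patterns` field is a list, so each matcher variant needs its own list entry; the attempts described (exact, prefix, regex) would be written roughly as follows (a sketch against the v1.10.0 schema; field spellings should be checked against the docs for your exact version):

```yaml
authorization_request:
  allowed_headers:
    patterns:
    - exact: "x-b3-traceid"
    - exact: "x-b3-spanid"
    - prefix: "x-b3"
    - regex: "^x-b3-.*"
```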

Note that it is only the external auth server that doesn't get the HTTP headers: I can see the trace initiated by Envoy in my tracing UI (Jaeger), but without the external auth server in it, so the rest of the trace is working as expected.

@mattklein123 mattklein123 added the question Questions that are neither investigations, bugs, nor enhancements label Apr 9, 2019
@mattklein123
Member

cc @gsagula @objectiser

@objectiser
Contributor

As @enbohm mentions, this doesn't look like a tracing issue, as it is isolated to ext_authz. However, I'm wondering what happens if a regex of ".*" is specified?

@enbohm
Author

enbohm commented Apr 9, 2019

@objectiser I've tried with regex: ".*" but unfortunately no luck here either. A few headers are propagated to the external server, such as x-request-id, x-forwarded-proto and some "standard" ones like referer, content-length and user-agent, but no x-b3-* ones.

@gsagula
Member

gsagula commented Apr 9, 2019

@enbohm Did you look at trace logs to see if the filter is receiving these headers?

@enbohm
Author

enbohm commented Apr 9, 2019

Doesn't look like it does according to the logs.

[screenshot: envoy_ext_auth_headers]

But I can see that the ext_authz filter, in case of an HTTP 200, adds these headers to the request sent to the target cluster (this is also visible in the trace below).

[screenshot: envoy_cluster]

@objectiser
Contributor

@enbohm Are you able to determine from the full log for the request where the x-b3 headers go missing?

@enbohm
Author

enbohm commented Apr 10, 2019

@objectiser from what I can see so far, they are not added to the request when Envoy calls the external auth server. If I revert to 1.9.0 I can see them in the request.

@objectiser
Contributor

objectiser commented Apr 10, 2019

@enbohm would it be possible to provide the full log for the request? I.e. not just the ext_authz filter, but the full chain from where the request arrives at the Envoy proxy.

@enbohm
Author

enbohm commented Apr 10, 2019

Please see the debug-level logs below, captured when the request is initiated.

core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.283][13][debug][http] [source/common/http/conn_manager_impl.cc:243] [C13] new stream
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][http] [source/common/http/conn_manager_impl.cc:580] [C13][S14152072273648759430] request headers complete (end_stream=true):
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':authority', '127.0.0.1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':path', '/identity/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':method', 'GET'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'connection', 'keep-alive'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'cache-control', 'max-age=0'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'upgrade-insecure-requests', '1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'user-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'authorization', 'Bearer eyJraWQiOiIxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiI2MzM0MjkzODI4MDM1NDI0MjU2IiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImFtciI6WyJVSURfUFdEIl0sImlzcyI6Imh0dHBzOlwvXC9jb25uZWN0LnN0YWdpbmcudGVsZW5vcmRpZ2l0YWwuY29tXC9vYXV0aCIsInBob25lX251bWJlcl92ZXJpZmllZCI6ZmFsc2UsInRkX3NscyI6ZmFsc2UsImxvY2FsZSI6ImVuIiwiYWNyIjoiMiIsImF1ZCI6WyJ0ZWxlbm9yY29ubmV4aW9uLXRlbGVub3Jpb3Rwb3J0YWwtd2ViIl0sInRkX2F1IjoiNDY3Njg1ODMxOTQiLCJhdXRoX3RpbWUiOjE1NTQ4ODY0MzgsInBob25lX251bWJlciI6IjQ2NzY4NTgzMTk0IiwiZXhwIjoxNTU0ODkwMzM5LCJpYXQiOjE1NTQ4ODY0MzksImVtYWlsIjoiYW5kcmVhcy5lbmJvaG1AdGVsZW5vcmNvbm5leGlvbi5jb20ifQ.aAcpY5cc-13dIDI09Yg3AGi2NY5irjSNuxT2hqyDchJpBmWhM6K2Mfaxx6AjVLlITiZBemvMO5rPb2f_XkMtdOmm6TqTrz9CMfhq9iiwnNB_mGLIygQkva8w_tp_h1iKqH9NLBV4QjtTc8Lx1Lpro3rWju7Dj_mUhLqSE8_lLg8'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept-encoding', 'gzip, deflate, br'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept-language', 'sv-SE,sv;q=0.9,en-US;q=0.8,en;q=0.7,nb;q=0.6,de;q=0.5'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'cookie', '_ga=GA1.1.2019047718.1544793364'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][http] [source/common/http/conn_manager_impl.cc:1037] [C13][S14152072273648759430] request end stream
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/filter.cc:89] Called Filter : setDecoderFilterCallbacks
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/filter.cc:24] Called Filter : decodeHeaders
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/matcher.cc:71] Prefix requirement '/' matched.
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/authenticator.cc:102] Jwt authentication starts
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/authenticator.cc:243] Jwt authentication completed with: OK
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.284][13][debug][filter] [source/extensions/filters/http/jwt_authn/filter.cc:50] Called Filter : check complete 0
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][router] [source/common/router/router.cc:320] [C0][S1909674969945453086] cluster 'ext-authz' match for URL '/identity/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][router] [source/common/router/router.cc:381] [C0][S1909674969945453086] router decoding headers:
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':method', 'GET'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':path', '/identity/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':authority', '127.0.0.1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':scheme', 'http'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'content-length', '0'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-request-id', '365831ee-68d4-9e63-b657-a850d51947a8'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'authorization', 'Bearer eyJraWQiOiIxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiI2MzM0MjkzODI4MDM1NDI0MjU2IiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImFtciI6WyJVSURfUFdEIl0sImlzcyI6Imh0dHBzOlwvXC9jb25uZWN0LnN0YWdpbmcudGVsZW5vcmRpZ2l0YWwuY29tXC9vYXV0aCIsInBob25lX251bWJlcl92ZXJpZmllZCI6ZmFsc2UsInRkX3NscyI6ZmFsc2UsImxvY2FsZSI6ImVuIiwiYWNyIjoiMiIsImF1ZCI6WyJ0ZWxlbm9yY29ubmV4aW9uLXRlbGVub3Jpb3Rwb3J0YWwtd2ViIl0sInRkX2F1IjoiNDY3Njg1ODMxOTQiLCJhdXRoX3RpbWUiOjE1NTQ4ODY0MzgsInBob25lX251bWJlciI6IjQ2NzY4NTgzMTk0IiwiZXhwIjoxNTU0ODkwMzM5LCJpYXQiOjE1NTQ4ODY0MzksImVtYWlsIjoiYW5kcmVhcy5lbmJvaG1AdGVsZW5vcmNvbm5leGlvbi5jb20ifQ.aAcpY5cc-13dIDI09Yg3AGi2NY5irjSNuxT2hqyDchJpBmWhM6K2Mfaxx6AjVLlITiZBemvMO5rPb2f_XkMtdOmm6TqTrz9CMfhq9iiwnNB_mGLIygQkva8w_tp_h1iKqH9NLBV4QjtTc8Lx1Lpro3rWju7Dj_mUhLqSE8_lLg8'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-forwarded-proto', 'http'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-internal', 'true'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-forwarded-for', '10.0.1.137'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-expected-rq-timeout-ms', '2000'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][pool] [source/common/http/http1/conn_pool.cc:88] creating a new connection
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][client] [source/common/http/codec_client.cc:26] [C17] connecting
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][connection] [source/common/network/connection_impl.cc:644] [C17] connecting to 10.0.1.122:8080
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][connection] [source/common/network/connection_impl.cc:653] [C17] connection in progress
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.285][13][debug][pool] [source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.286][13][debug][connection] [source/common/network/connection_impl.cc:517] [C17] connected
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.286][13][debug][client] [source/common/http/codec_client.cc:64] [C17] connected
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.286][13][debug][pool] [source/common/http/http1/conn_pool.cc:245] [C17] attaching to next request
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.286][13][debug][router] [source/common/router/router.cc:1165] [C0][S1909674969945453086] pool ready
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.299][12][debug][main] [source/server/connection_handler_impl.cc:257] [C18] new connection
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.826][13][debug][router] [source/common/router/router.cc:717] [C0][S1909674969945453086] upstream headers complete: end_stream=false
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.826][13][debug][http] [source/common/http/async_client_impl.cc:94] async http request response headers (end_stream=false):
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':status', '200'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'date', 'Wed, 10 Apr 2019 08:57:52 GMT'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'transfer-encoding', 'chunked'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'connection', 'keep-alive'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-upstream-service-time', '574'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][client] [source/common/http/codec_client.cc:95] [C17] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][filter] [source/extensions/filters/http/ext_authz/ext_authz.cc:175] [C13][S14152072273648759430] ext_authz filter accepted the request
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][router] [source/common/router/router.cc:320] [C13][S14152072273648759430] cluster 'identity_service' match for URL '/identity/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][router] [source/common/router/router.cc:381] [C13][S14152072273648759430] router decoding headers:
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':authority', '127.0.0.1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':path', '/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':method', 'GET'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':scheme', 'http'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'cache-control', 'max-age=0'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'upgrade-insecure-requests', '1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'user-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'authorization', 'Bearer eyJraWQiOiIxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiI2MzM0MjkzODI4MDM1NDI0MjU2IiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImFtciI6WyJVSURfUFdEIl0sImlzcyI6Imh0dHBzOlwvXC9jb25uZWN0LnN0YWdpbmcudGVsZW5vcmRpZ2l0YWwuY29tXC9vYXV0aCIsInBob25lX251bWJlcl92ZXJpZmllZCI6ZmFsc2UsInRkX3NscyI6ZmFsc2UsImxvY2FsZSI6ImVuIiwiYWNyIjoiMiIsImF1ZCI6WyJ0ZWxlbm9yY29ubmV4aW9uLXRlbGVub3Jpb3Rwb3J0YWwtd2ViIl0sInRkX2F1IjoiNDY3Njg1ODMxOTQiLCJhdXRoX3RpbWUiOjE1NTQ4ODY0MzgsInBob25lX251bWJlciI6IjQ2NzY4NTgzMTk0IiwiZXhwIjoxNTU0ODkwMzM5LCJpYXQiOjE1NTQ4ODY0MzksImVtYWlsIjoiYW5kcmVhcy5lbmJvaG1AdGVsZW5vcmNvbm5leGlvbi5jb20ifQ.aAcpY5cc-13dIDI09Yg3AGi2NY5irjSNuxT2hqyDchJpBmWhM6K2Mfaxx6AjVLlITiZBemvMO5rPb2f_XkMtdOmm6TqTrz9CMfhq9iiwnNB_mGLIygQkva8w_tp_h1iKqH9NLBV4QjtTc8Lx1Lpro3rWju7Dj_mUhLqSE8_lLg8'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept-encoding', 'gzip, deflate, br'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'accept-language', 'sv-SE,sv;q=0.9,en-US;q=0.8,en;q=0.7,nb;q=0.6,de;q=0.5'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'cookie', '_ga=GA1.1.2019047718.1544793364'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-forwarded-proto', 'http'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-request-id', '365831ee-68d4-9e63-b657-a850d51947a8'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-expected-rq-timeout-ms', '15000'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-b3-traceid', 'b7c187d6ed0e1e64'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-b3-spanid', 'b7c187d6ed0e1e64'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-b3-sampled', '1'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-original-path', '/identity/user/100'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][pool] [source/common/http/http1/conn_pool.cc:97] [C16] using existing connection
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][router] [source/common/router/router.cc:1165] [C13][S14152072273648759430] pool ready
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][pool] [source/common/http/http1/conn_pool.cc:202] [C17] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.836][13][debug][pool] [source/common/http/http1/conn_pool.cc:240] [C17] moving to ready
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.923][13][debug][router] [source/common/router/router.cc:717] [C13][S14152072273648759430] upstream headers complete: end_stream=false
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.923][13][debug][http] [source/common/http/conn_manager_impl.cc:1278] [C13][S14152072273648759430] encoding headers via codec (end_stream=false):
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':status', '200'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'content-type', 'application/json'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'content-length', '207'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'date', 'Wed, 10 Apr 2019 08:57:52 GMT'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-upstream-service-time', '87'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'server', 'tip'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.924][13][debug][client] [source/common/http/codec_client.cc:95] [C16] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.924][13][debug][filter] [source/extensions/filters/http/jwt_authn/filter.cc:17] Called Filter : onDestroy
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.924][13][debug][pool] [source/common/http/http1/conn_pool.cc:202] [C16] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.924][13][debug][pool] [source/common/http/http1/conn_pool.cc:240] [C16] moving to ready
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.954][13][debug][router] [source/common/router/router.cc:320] [C0][S12042644501986607486] cluster 'jaeger' match for URL '/api/v1/spans'
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.955][13][debug][router] [source/common/router/router.cc:381] [C0][S12042644501986607486] router decoding headers:
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':method', 'POST'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':path', '/api/v1/spans'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':authority', 'jaeger'
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':scheme', 'http'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'content-type', 'application/json'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-internal', 'true'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-forwarded-for', '10.0.1.137'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-expected-rq-timeout-ms', '5000'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.955][13][debug][pool] [source/common/http/http1/conn_pool.cc:97] [C3] using existing connection
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.955][13][debug][router] [source/common/router/router.cc:1165] [C0][S12042644501986607486] pool ready
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.957][1][debug][main] [source/server/server.cc:147] flushing stats
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.958][13][debug][client] [source/common/http/codec_client.cc:95] [C3] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.958][13][debug][router] [source/common/router/router.cc:717] [C0][S12042644501986607486] upstream headers complete: end_stream=true
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.958][13][debug][http] [source/common/http/async_client_impl.cc:94] async http request response headers (end_stream=true):
core_envoy.0.72eipe4nwa4u@docker-desktop    | ':status', '202'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'date', 'Wed, 10 Apr 2019 08:57:52 GMT'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'content-length', '0'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 'x-envoy-upstream-service-time', '3'
core_envoy.0.72eipe4nwa4u@docker-desktop    | 
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.958][13][debug][pool] [source/common/http/http1/conn_pool.cc:202] [C3] response complete
core_envoy.0.72eipe4nwa4u@docker-desktop    | [2019-04-10 08:57:52.958][13][debug][pool] [source/common/http/http1/conn_pool.cc:240] [C3] moving to ready

@objectiser
Contributor

objectiser commented Apr 10, 2019

@enbohm @mattklein123 I think this is a consequence of #5661

All outbound requests will need to inject the trace context. So the problem is how this is handled for the ext_authz filter: either the trace context is automatically injected into the outbound request to the external auth server (so there is no need for the user to copy the headers from the original request), or the trace context needs to be injected into the original request before the headers are copied.

Looking back at the original issue #5504, it would still cause a problem if such a filter extension occurred before the health check. Although not sure a health check would be used in conjunction with filters such as ext_authz.

@enbohm
Author

enbohm commented Apr 10, 2019

OK, thanks for the clarification! But is this a bug, or does it work as expected? If it is not an issue, how can I inject these headers/trace context into the outbound request?

@mattklein123
Member

@enbohm @mattklein123 I think this is a consequence of #5661

Interesting. Great find. cc @cgilmour. I don't have time to dig into this for a while. @objectiser do you potentially have time to analyze and make a recommendation?

@objectiser
Contributor

objectiser commented Apr 10, 2019

@enbohm It's a bug, unfortunately.

@mattklein123 Will try to find some time. Do you have a feel for how the health check vs filters like ext_authz are likely to be used? Can we assume that a health check filter will not be used with ext_authz (and similar), or that the health check will always be used first in the chain?

A generic solution, which would fix the original problem and not depend on filter order, would be to check upfront with each filter whether it wants to disable tracing based on the request. Based on that decision, the trace context could be injected in its original location. This may have a slight performance impact, although given that most filters are not concerned with sampling (so it's a no-op), it might not be too bad?

In fact, the decision in the case of the health check filter is not dependent on the request, so as part of initializing the filter chain we could determine whether any filters would disable tracing and just cache that decision with the chain. That would have no per-request performance impact.
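The idea of caching the tracing decision at chain-initialization time can be sketched like this (illustrative Python, not Envoy's C++; the class and method names here are assumptions for illustration only):

```python
# Sketch: decide once, at filter-chain initialization, whether any filter
# in the chain disables tracing, and cache that decision so no per-request
# work is needed.

class Filter:
    # By default a filter does not affect the sampling decision.
    def disables_tracing(self) -> bool:
        return False

class HealthCheckFilter(Filter):
    # The health check filter's decision is request-independent.
    def disables_tracing(self) -> bool:
        return True

class ExtAuthzFilter(Filter):
    pass

class FilterChain:
    def __init__(self, filters):
        self.filters = filters
        # Computed once at init time: no cost on the request path.
        self.tracing_disabled = any(f.disables_tracing() for f in filters)

chain = FilterChain([ExtAuthzFilter(), HealthCheckFilter()])
print(chain.tracing_disabled)  # True: the health check filter opts out
```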

@gsagula
Member

gsagula commented Apr 10, 2019

I think this is also the culprit for the problem we are seeing in Ambassador. We plumbed tracing into the ext_authz HTTP client and it was working fine. After the latest upgrade, traces started to appear as islands instead of being part of the same request. See emissary-ingress/emissary#1414

@cgilmour
Contributor

Hi @objectiser, just want to chime in with some thoughts.

The original reason for fixing a problem here was a combination of one filter (healthcheck) interacting badly with one tracer (datadog).
Other tracers weren't affected.
And as you mention, it revolves around the case where sampling is disabled after headers have been injected.

Because of some planned changes, the datadog tracer might have to be less-strict about changes to the sampling state after the headers have been injected.
That would mean the original problem wouldn't exist anymore - the healthcheck would be able to change sampling to disabled and have that change respected by the datadog tracer.
(This isn't certain yet, but quite likely. And with consideration about the technical "correctness" of that in terms of the overall trace.)

I wouldn't like the idea of a fairly widespread change to all filter extensions if the only justification is the way the original problem was solved.

If you generally think the behavior it has now is more correct, that's great - and the idea to check with filters before finalizing a sampling decision sounds reasonable.
But if you're on the fence about that behavior, then we can discuss other ways of fixing the combination of datadog tracer, healthchecks and ext_authz in a minimal way.

@objectiser
Contributor

Hi @cgilmour

In general I think the sampling decision should be done upfront, as modifying after the span has already been created relies on the tracer impl being able to handle it.

However as you mention currently this is only datadog, so if you have an alternative minimal change, or the modification to datadog's approach is likely to happen soon, then we should explore them first.

@cgilmour
Contributor

I've been looking into the code around all of this, and noticed that the parent span is passed as part of the ext_authz::Client::check call, but only the gRPC impl uses it; the HTTP one does not.

One solution for the problem in this issue is to have the same behavior for both gRPC and HTTP here, where a child span is spawned off that parent span and injected into the outbound request.
This might be generally necessary for other filters that make async HTTP requests.

This wouldn't have a big consequence for the datadog tracer, it'd just be preferable to have the healthcheck filter occur first.

If this is OK, I can work on a fix ASAP.
Also it'd help if there's a readily-available ext_authz service to run to perform e2e verification. Do you know if one exists?
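The child-span approach described above can be sketched generically (illustrative Python, not Envoy source; the `Span` API and header names below are assumptions, though the x-b3-* names follow the B3 propagation convention):

```python
import itertools

_ids = itertools.count(1)  # toy id generator; real tracers use random ids

class Span:
    """Minimal span: real tracers also carry a sampling decision."""
    def __init__(self, name, trace_id=None, parent_id=None):
        self.name = name
        self.span_id = next(_ids)
        self.trace_id = trace_id if trace_id is not None else self.span_id
        self.parent_id = parent_id

    def spawn_child(self, name):
        # Child shares the trace id; this span becomes its parent.
        return Span(name, trace_id=self.trace_id, parent_id=self.span_id)

    def inject(self, headers):
        # B3-style propagation into the outbound request headers.
        headers["x-b3-traceid"] = format(self.trace_id, "016x")
        headers["x-b3-spanid"] = format(self.span_id, "016x")
        if self.parent_id is not None:
            headers["x-b3-parentspanid"] = format(self.parent_id, "016x")

def check(parent_span, request_headers):
    # Mirror the gRPC client's behavior in the HTTP client: spawn a child
    # span off the parent passed into check() and inject its context, so
    # the auth server can join the trace.
    child = parent_span.spawn_child("ext_authz.check")
    outbound = dict(request_headers)
    child.inject(outbound)
    return outbound

parent = Span("ingress")
outbound = check(parent, {":method": "GET", ":path": "/identity/user/100"})
```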

@objectiser
Contributor

@cgilmour Your proposed solution sounds good to me.

Also it'd help if there's a readily-available ext_authz service to run to perform e2e verification. Do you know if one exists?

Sorry, don't know.
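For anyone needing such a service, a minimal header-dumping authz stub is easy to sketch with Python's standard library (illustrative only; any server that returns 200 and logs the request headers will do for this kind of e2e verification):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    """Accept every request with 200 and print the headers Envoy sent."""
    def do_GET(self):
        for name, value in self.headers.items():
            print(f"{name}: {value}")
        self.send_response(200)
        self.end_headers()

    do_POST = do_GET

def b3_headers(headers):
    # Convenience for verification: keep only the trace-propagation headers.
    return {k.lower(): v for k, v in headers.items()
            if k.lower().startswith("x-b3")}

# To run the stub on the port the ext_authz cluster points at:
# HTTPServer(("0.0.0.0", 8000), DumpHandler).serve_forever()
```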

@mattklein123
Member

Sorry for the delay in catching up here. In general, I think moving injection to the router filter is still the correct change, since that is the point in which we actually forward the headers. I think the real issue is the one pointed out by @cgilmour which is that there is a bug in the extauth code. We should be creating a new child span there and injecting it into the outbound headers IMO. This is what is done for other outbound calls such as ratelimit. @objectiser @gsagula WDYT?

@gsagula
Member

gsagula commented Apr 17, 2019

@mattklein123 I haven't had a chance to dive deeper into this one yet, but we had it working for the gRPC client prior to 1.10. Tracing has never been supported by the HTTP client, but I had it implemented and working in our fork. As I said, I need to look closer to see where the bug is. If you have any idea, please let me know.

@mattklein123
Member

@gsagula see #6520 (comment) above. That's the issue IMO.

@gsagula
Member

gsagula commented Apr 17, 2019

Sorry, I totally missed this comment. I have it implemented already. Just need to move it upstream.

@mattklein123 mattklein123 added bug and removed question Questions that are neither investigations, bugs, nor enhancements labels Apr 17, 2019
@mattklein123 mattklein123 added this to the 1.11.0 milestone Apr 17, 2019
@kumarsparkz

Is there an ETA for 1.11.0 release that contains the fix for this?

@kumarsparkz

@enbohm @objectiser @cgilmour Can you clarify which versions of Envoy http_service.authorization_request.allowed_headers works on? My understanding from the conversation is that there is no issue with grpc_service on any version of Envoy, and no issue with http_service on older versions. Is that right?

@kumarsparkz

@kumarsparkz http_service did work in 1.9.0 but not in 1.10.0

Thank you for your reply. I am using Istio/istio:1.0.3 --> envoyproxy/go-control-plane:0.6.0 --> envoyproxy/envoy:1.8.0 and I am facing the same issue. The allowed_headers configured for http_service are not passed to the external auth service, although some basic headers like x-forwarded-for, Host and Content-Length are. Can someone confirm whether this is an issue?

@mattklein123 mattklein123 added the help wanted Needs help! label Jun 5, 2019
@alyssawilk alyssawilk modified the milestones: 1.11.0, 1.12.0 Jul 10, 2019
zuercher pushed a commit that referenced this issue Sep 17, 2019
This PR adds tracing support to the HTTP client in the external authorization filter. So far, only the gRPC client was able to trace requests from the filter to an authorization service.

Risk Level: Low
Testing: yes
Docs Changes: N/A
Release Notes: Added
Fixes #6520

Signed-off-by: Gabriel Linden Sagula <[email protected]>
danzh2010 pushed the same commit to danzh2010/envoy (Sep 24 and Oct 4, 2019).
@enbohm
Author

enbohm commented Nov 1, 2019

@gsagula Today I updated from 1.9.0 to 1.12.0 but still couldn't get the external auth service to pick up the B3 headers. Using Envoy 1.9.0 I get these headers to my external auth service:

x-envoy-internal: true
x-request-id: 9cabdb54-ba8c-9b31-970c-69bd9bbd7c32
content-length: 0
x-b3-spanid: 253f1f523d5ebede
x-b3-traceid: 253f1f523d5ebede
x-b3-sampled: 1
host: 127.0.0.1
x-envoy-expected-rq-timeout-ms: 2000
x-forwarded-for: 10.0.2.153

But after upgrading to 1.12.0 only these are present:

host: 127.0.0.1
x-envoy-internal: true
content-length: 0
x-envoy-expected-rq-timeout-ms: 2000
x-forwarded-for: 10.0.0.158

which makes it difficult for the external auth service to be part of the whole trace. Do I have to make any specific configuration changes to make it work? Maybe you have an example config to share?

@enbohm
Author

enbohm commented Nov 1, 2019

@kumarsparkz Have you tried with the latest version of Envoy? Does it work with your Istio setup, or is it still an issue?

@enbohm
Author

enbohm commented Nov 4, 2019

...or @objectiser, have you had any chance to test this one? Can you see that the B3 headers are propagated to an external HTTP auth service?

@enbohm
Author

enbohm commented Nov 7, 2019

@mattklein123 I'm not sure this issue has been resolved (the external HTTP auth service still does not get the required tracing headers as it did in 1.9.0). Should it be re-opened, or shall I create a new issue?

@mattklein123 mattklein123 reopened this Nov 7, 2019
@dio
Member

dio commented Nov 8, 2019

@enbohm

I tried to modify the config of jaeger-tracing example as follows:

diff --git a/examples/jaeger-tracing/docker-compose.yaml b/examples/jaeger-tracing/docker-compose.yaml
index 6c353fada..efd9ec85b 100644
--- a/examples/jaeger-tracing/docker-compose.yaml
+++ b/examples/jaeger-tracing/docker-compose.yaml
@@ -61,5 +61,14 @@ services:
       - "9411:9411"
       - "16686:16686"

+  authz:
+    image: dio123/dump:1.0.0
+    networks:
+      envoymesh:
+        aliases:
+          - authz
+    expose:
+      - "8000"
+
 networks:
   envoymesh: {}
diff --git a/examples/jaeger-tracing/service1-envoy-jaeger.yaml b/examples/jaeger-tracing/service1-envoy-jaeger.yaml
index 7f06fe2da..f401d1422 100644
--- a/examples/jaeger-tracing/service1-envoy-jaeger.yaml
+++ b/examples/jaeger-tracing/service1-envoy-jaeger.yaml
@@ -56,6 +56,17 @@ static_resources:
                 decorator:
                   operation: checkStock
           http_filters:
+          - name: envoy.ext_authz
+            config:
+              http_service:
+                server_uri:
+                  uri: http://authz:8080
+                  cluster: authz
+                  timeout: 2s
+                authorization_request:
+                  allowed_headers:
+                    patterns:
+                      prefix: "x-b3"
           - name: envoy.router
             typed_config: {}
   clusters:
@@ -86,6 +97,19 @@ static_resources:
               socket_address:
                 address: service2
                 port_value: 80
+  - name: authz
+    connect_timeout: 1s
+    type: strict_dns
+    lb_policy: round_robin
+    load_assignment:
+      cluster_name: authz
+      endpoints:
+      - lb_endpoints:
+        - endpoint:
+            address:
+              socket_address:
+                address: authz
+                port_value: 8000
   - name: jaeger
     connect_timeout: 1s
     type: strict_dns

The dio123/dump:1.0.0 image acts as an external authz service, dumping the headers sent to it. When I do curl localhost:10000/trace/2 I can see the headers being forwarded:

authz_1        | /trace/2
authz_1        | { host: 'localhost:9000',
authz_1        |   'content-length': '0',
authz_1        |   'x-b3-spanid': '50d3c77a17de7252',
authz_1        |   'x-b3-sampled': '1',
authz_1        |   'x-b3-parentspanid': 'c5094dcf627e6e32',
authz_1        |   'x-b3-traceid': 'c5094dcf627e6e32',
authz_1        |   'x-envoy-internal': 'true',
authz_1        |   'x-forwarded-for': '192.168.48.4',
authz_1        |   'x-envoy-expected-rq-timeout-ms': '2000' }
authz_1        |
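
For anyone who wants to reproduce this without that image, a minimal header-dumping stand-in can be written in a few lines of Python (this is my own sketch, not the actual dio123/dump implementation): it prints every request header it receives and answers 200 so the ext_authz filter treats the request as allowed.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    """Print each request's path and headers, then answer 200 so ext_authz allows it."""

    def do_GET(self):
        print(self.path)
        for name, value in self.headers.items():
            print(f"{name}: {value}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        # Silence the default per-request access log; we print the headers ourselves.
        pass

def serve(port=8000):
    # Bind to all interfaces so the Envoy container can reach us.
    return HTTPServer(("0.0.0.0", port), DumpHandler)

# serve().serve_forever()  # run standalone, listening on port 8000
```

Point the `server_uri` cluster at this process and the B3 headers (or their absence) show up directly in its stdout.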

enbohm commented Nov 8, 2019

@dio Thanks for sharing. I have the exact same configuration

- name: envoy.ext_authz
  config:
    http_service:
      server_uri:
        uri: http://authorization-service:8080
        cluster: authz_service
        timeout: 2s
      authorization_request:
        allowed_headers:
          patterns:
            prefix: "x-b3"

but I don't get these headers to propagate in v1.12.0. Will continue to investigate and debug during the day and will get back shortly!

enbohm commented Nov 8, 2019

...and this is my whole Envoy v1.12.0 config. I will try to create an example project where I dump the received headers, and see if anyone can understand why they are not propagated. What is a bit interesting is that I seem to get headers that should have been filtered out by the allowed_headers section. For instance, if I add a custom header to the request, e.g. 'MyCustomHeader', it is propagated to the external auth service, which is wrong since I explicitly allow only headers with the prefix 'x-b3' - strange.

tracing:
  http:
    name: envoy.zipkin
    config:
      collector_cluster: jaeger
      collector_endpoint: "/api/v1/spans"
      shared_span_context: true

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          tracing:
            operation_name: egress
          server_name: tip
          codec_type: auto
          generate_request_id: true
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/backend-service/"
                route:
                  prefix_rewrite: "/"
                  cluster: backend_service

          http_filters:
          - name: envoy.ext_authz
            config:
              http_service:
                server_uri:
                  uri: http://authorization-service:8080
                  cluster: authz_service
                  timeout: 2s
                authorization_request:
                  allowed_headers:
                    patterns:
                      prefix: "x-b3"

          - name: envoy.router
            config: {}
  clusters:
  - name: authz_service
    connect_timeout: 3s
    common_http_protocol_options: {idle_timeout: 30s}
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts:
      - socket_address: { address: authorization-service, port_value: 8080 }

  - name: backend_service
    connect_timeout: 1s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: backend-service, port_value: 8080 }}]

  - name: jaeger
    connect_timeout: 1s
    type: LOGICAL_DNS
    lb_policy: round_robin
    dns_lookup_family: V4_ONLY
    hosts:
    - socket_address:
        address: core_jaeger-all-in-one
        port_value: 9411
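
One detail worth double-checking in the ext_authz filter above: in the v2 API, `allowed_headers` is a `type.matcher.ListStringMatcher`, whose `patterns` field is repeated, so each matcher is normally written as a YAML list item rather than a nested map. A sketch of that shape (same fields as the config above):

```yaml
authorization_request:
  allowed_headers:
    patterns:
    - exact: "x-request-id"
    - prefix: "x-b3"
```

If `patterns` is given as a plain map instead of a list, the proto conversion may not match the headers you intend.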

enbohm commented Nov 8, 2019

I've now created an example with a front Envoy proxy (v1.12.0), one auth service, and one simple backend service (see https://github.com/enbohm/envoy-tracing) which shows that the B3 headers are not propagated to the external auth service.

I can see that Envoy is part of the trace along with the simple backend service (see the screenshot in the example repo). BUT no B3 headers are propagated to the external auth service, so the OpenTracing lib can't pick them up and the auth service shows up as a separate trace in the Jaeger UI. Please feel free to adjust this example if anyone can get it to work (I'd be more than happy :)). NOTE! This works like a charm with Envoy v1.9.0 but not with Envoy v1.10.0 and above.

enbohm commented Nov 9, 2019

I've tried confirming that this is now working, but I can't even get Envoy to start (using envoyproxy/envoy:latest). I keep getting these logs:

[2019-11-09 19:57:52.194][000007][critical][main] [source/server/server.cc:85] error initializing configuration '/etc/front-envoy.yaml': 
Unable to parse JSON as proto (INVALID_ARGUMENT:(http_service) authorization_request: Cannot find field.): {"http_service":{"authorization_request":{"allowed_headers":{"patterns":{"prefix":"x-b3"}}},"server_uri":{"uri":"http://authorization-service:8080","cluster":"authz_service","timeout":"2s"}}}
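
That `Cannot find field` error usually means the binary being run predates the `authorization_request` field, i.e. the image is older than expected. One way to make such mismatches fail with a clearer message is to use `typed_config` with an explicit type URL instead of the older `config` form; a sketch against the v2 API, reusing the URI and cluster from my config above:

```yaml
http_filters:
- name: envoy.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.config.filter.http.ext_authz.v2.ExtAuthz
    http_service:
      server_uri:
        uri: http://authorization-service:8080
        cluster: authz_service
        timeout: 2s
      authorization_request:
        allowed_headers:
          patterns:
          - prefix: "x-b3"
```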

I have also tried using this image

dio123/envoy-6520

which starts as expected, but unfortunately no headers are present at the external auth service.
Can anyone confirm with a screenshot or logs that this is working? Thanks in advance.

cgilmour commented Nov 9, 2019

Try with this image: envoyproxy/envoy-dev:2c830f1aace9970e132440ade31f3c460f250d23

enbohm commented Nov 9, 2019

@cgilmour it works - victory :) Thank you all for a great job!
