**Describe the bug**

After adding a second Mapping with `shadow: true` for the `/api/` prefix, Envoy begins logging `http/1.1 protocol error: HPE_INVALID_METHOD` dispatch errors and closing the connections (see the debug log below), even though the shadow target service responds normally when called directly from another pod.
**To Reproduce**

Steps to reproduce the behavior: configure two Mappings for the same prefix, the second one with `shadow: true`:
```yaml
getambassador.io/config: |
  ---
  apiVersion: ambassador/v0
  kind: Mapping
  name: quality-control-be-prod-mapping
  prefix: /api/
  rewrite: ""
  service: quality-control-be-prod.default
  timeout_ms: 60000
  ---
  apiVersion: ambassador/v0
  kind: Mapping
  name: kong-new-prod-openresty
  prefix: /api/
  rewrite: ""
  service: kong-new-prod-openresty.default
  shadow: true
```
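For context, this annotation lives on the Kubernetes Service backing the mapping. A minimal sketch of that placement follows; the Service name, selector, and port here are assumptions for illustration, not taken from the actual manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quality-control-be-prod
  namespace: default
  annotations:
    getambassador.io/config: |
      # (the two Mappings shown above)
spec:
  selector:
    app: quality-control-be-prod   # assumed label
  ports:
    - port: 80
```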
Here is the Envoy debug log from the Ambassador pod while the shadow Mapping is active:

```
58] Setting DNS resolution timer for 5000 milliseconds
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.009][000028][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.010][000028][debug][upstream] [source/common/upstream/upstream_impl.cc:1184] async DNS resolution complete for kong-new-prod-openresty.default
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][main] [source/server/connection_handler_impl.cc:235] [C3343] new connection
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000044][debug][main] [source/server/connection_handler_impl.cc:235] [C3342] new connection
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000044][debug][http] [source/common/http/conn_manager_impl.cc:200] [C3342] new stream
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][http] [source/common/http/conn_manager_impl.cc:200] [C3343] new stream
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000044][debug][http] [source/common/http/conn_manager_impl.cc:234] [C3342] dispatch error: http/1.1 protocol error: HPE_INVALID_METHOD
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][http] [source/common/http/conn_manager_impl.cc:234] [C3343] dispatch error: http/1.1 protocol error: HPE_INVALID_METHOD
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][connection] [source/common/network/connection_impl.cc:101] [C3343] closing data_to_write=66 type=2
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][connection] [source/common/network/connection_impl.cc:153] [C3343] setting delayed close timer with timeout 1000 ms
ambassador-59f6966d5-twph5 ambassador [2018-12-20 10:41:39.148][000045][debug][connection] [source/common/network/connection_impl.cc:501] [C3343] remote close
```
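The log above was captured from the Ambassador pod. A hedged way to tail it yourself; the pod and container names are taken from the log prefix and may differ in your cluster:

```sh
# pod name from the log prefix; container name assumed to be "ambassador"
kubectl logs -f ambassador-59f6966d5-twph5 -c ambassador
```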
Here is Envoy's generated configuration:
```
kubectl exec -it ambassador-59f6966d5-7cg2f -- cat envoy/envoy.json
{
  "@type": "/envoy.config.bootstrap.v2.Bootstrap",
  "static_resources": {
    "clusters": [
      {
        "connect_timeout": "3s",
        "lb_policy": "ROUND_ROBIN",
        "load_assignment": {
          "cluster_name": "cluster_127_0_0_1_8877",
          "endpoints": [
            {
              "lb_endpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": { "address": "127.0.0.1", "port_value": 8877, "protocol": "TCP" }
                    }
                  }
                }
              ]
            }
          ]
        },
        "name": "cluster_127_0_0_1_8877",
        "type": "STRICT_DNS"
      },
      {
        "connect_timeout": "3s",
        "lb_policy": "ROUND_ROBIN",
        "load_assignment": {
          "cluster_name": "cluster_quality_control_be_prod_default",
          "endpoints": [
            {
              "lb_endpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": { "address": "quality-control-be-prod.default", "port_value": 80, "protocol": "TCP" }
                    }
                  }
                }
              ]
            }
          ]
        },
        "name": "cluster_quality_control_be_prod_default",
        "type": "STRICT_DNS"
      },
      {
        "connect_timeout": "3s",
        "lb_policy": "ROUND_ROBIN",
        "load_assignment": {
          "cluster_name": "cluster_shadow_kong_new_prod_openresty_default",
          "endpoints": [
            {
              "lb_endpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": { "address": "kong-new-prod-openresty.default", "port_value": 80, "protocol": "TCP" }
                    }
                  }
                }
              ]
            }
          ]
        },
        "name": "cluster_shadow_kong_new_prod_openresty_default",
        "type": "STRICT_DNS"
      },
      {
        "connect_timeout": "3s",
        "lb_policy": "ROUND_ROBIN",
        "load_assignment": {
          "cluster_name": "cluster_tracing_jaeger_collector_9411",
          "endpoints": [
            {
              "lb_endpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": { "address": "jaeger-collector", "port_value": 9411, "protocol": "TCP" }
                    }
                  }
                }
              ]
            }
          ]
        },
        "name": "cluster_tracing_jaeger_collector_9411",
        "type": "STRICT_DNS"
      }
    ],
    "listeners": [
      {
        "address": {
          "socket_address": { "address": "0.0.0.0", "port_value": 80, "protocol": "TCP" }
        },
        "filter_chains": [
          {
            "filters": [
              {
                "config": {
                  "access_log": [
                    {
                      "config": {
                        "format": "ACCESS [%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\"\n",
                        "path": "/dev/fd/1"
                      },
                      "name": "envoy.file_access_log"
                    }
                  ],
                  "generate_request_id": true,
                  "http_filters": [
                    { "name": "envoy.cors" },
                    {
                      "config": { "start_child_span": true },
                      "name": "envoy.router"
                    }
                  ],
                  "route_config": {
                    "virtual_hosts": [
                      {
                        "domains": [ "*" ],
                        "name": "backend",
                        "routes": [
                          {
                            "match": { "case_sensitive": true, "prefix": "/ambassador/v0/check_ready" },
                            "route": {
                              "prefix_rewrite": "/ambassador/v0/check_ready",
                              "priority": null,
                              "timeout": "3.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_127_0_0_1_8877", "weight": 100.0 } ]
                              }
                            }
                          },
                          {
                            "match": { "case_sensitive": true, "prefix": "/ambassador/v0/check_alive" },
                            "route": {
                              "prefix_rewrite": "/ambassador/v0/check_alive",
                              "priority": null,
                              "timeout": "3.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_127_0_0_1_8877", "weight": 100.0 } ]
                              }
                            }
                          },
                          {
                            "match": { "case_sensitive": true, "prefix": "/api/" },
                            "route": {
                              "priority": null,
                              "timeout": "60.000s",
                              "weighted_clusters": {
                                "clusters": [ { "name": "cluster_quality_control_be_prod_default", "weight": 100.0 } ]
                              }
                            }
                          }
                        ]
                      }
                    ]
                  },
                  "stat_prefix": "ingress_http",
                  "tracing": { "operation_name": "egress" },
                  "use_remote_address": true
                },
                "name": "envoy.http_connection_manager"
              }
            ]
          }
        ],
        "name": "ir.listener"
      }
    ]
  }
}
```
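Note that the generated `/api/` route above only references `cluster_quality_control_be_prod_default`: the shadow cluster `cluster_shadow_kong_new_prod_openresty_default` is defined under `clusters` but no mirror policy appears on the route. In Envoy's v2 route API, shadowing is normally expressed with a `request_mirror_policy` on the route; a minimal sketch of what I would have expected the route to contain (illustrative only, not taken from the generated file):

```json
{
  "match": { "case_sensitive": true, "prefix": "/api/" },
  "route": {
    "timeout": "60.000s",
    "request_mirror_policy": {
      "cluster": "cluster_shadow_kong_new_prod_openresty_default"
    },
    "weighted_clusters": {
      "clusters": [ { "name": "cluster_quality_control_be_prod_default", "weight": 100.0 } ]
    }
  }
}
```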
Here is the API version list:
```
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
```
**Expected behavior**

Requests to `/api/` should be served by `quality-control-be-prod` and mirrored to `kong-new-prod-openresty` without any protocol errors.
**Versions:**
**Additional context**

[screenshot: the diagnostics page]
Calling the kong-new-prod-openresty service directly from another pod works without any issue:
```
* Trying 100.67.53.161...
* TCP_NODELAY set
* Connected to kong-new-prod-openresty.default (100.67.53.161) port 80 (#0)
> GET /api/abc.txt HTTP/1.1
> Host: kong-new-prod-openresty.default
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Thu, 20 Dec 2018 10:44:03 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 166
< Connection: keep-alive
< Server: Never-Let-You-Down
<
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>openresty</center>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host kong-new-prod-openresty.default left intact
```
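For comparison, this is how the same request would go through Ambassador, which should then mirror it to the shadow cluster; the hostname below is an assumption (adjust to however the Ambassador service is exposed in your cluster):

```sh
# hypothetical in-cluster hostname for the Ambassador service
curl -v http://ambassador.default/api/abc.txt
```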