
OCPBUGS-14914: Set tunnel and server timeouts at backend level #536

Open
wants to merge 1 commit into master from timeouts-in-backends
Conversation

alebedev87
Contributor

@alebedev87 alebedev87 commented Nov 7, 2023

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.

As suggested in the upstream issue, the middle backends (be_sni and be_no_sni) are set with the maximum of all the route server timeouts to avoid the following warning message:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n
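
For illustration, the intended shape of the generated config is roughly the following (a sketch with made-up timeout values and backend names, not the verbatim output of haproxy-config.template):

defaults
  # 'timeout server' and 'timeout tunnel' are no longer set here
  timeout connect 5s
  timeout client 30s

backend be_sni
  # maximum server timeout across all routes, which avoids the warning above
  timeout server 90s

backend be_secure:app-ns:hello-openshift
  # per-route values: the annotation if set, otherwise the router default
  timeout server 30s
  timeout tunnel 15s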

Test using haproxy-timeout-tunnel repository:

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].image 
quay.io/alebedev/openshift-router:12.4.110

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].env | grep -A1 TUNNEL
- name: ROUTER_DEFAULT_TUNNEL_TIMEOUT
  value: 5s

$ oc -n tunnel-timeout get route hello-openshift -o yaml | yq .metadata.annotations
haproxy.router.openshift.io/timeout-tunnel: 15s

$ ROUTE_HOST=$(oc -n tunnel-timeout get route hello-openshift --template='{{.spec.host}}')
$ time HOST=${ROUTE_HOST} PORT=443 SCHEME=wss ./websocket-cli.js
WebSocket Client Connected
echo-protocol Connection Closed

real	0m15.721s
user	0m0.126s
sys	0m0.021s

$ oc -n openshift-ingress exec -ti deploy/router-default -- cat /var/lib/haproxy/conf/haproxy.config | grep 'timeout tunnel '
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  15s

The e2e test: openshift/cluster-ingress-operator#1013.
Presentation which summarizes the issue: https://docs.google.com/presentation/d/1vU1CH20lzFWDs8Z_iNaWBhPwdKM2iyS-Qb7zlMD9XTc (Red Hat only).

@openshift-ci-robot openshift-ci-robot added the jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. label Nov 7, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 7, 2023
@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Nov 7, 2023
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is invalid:

  • expected the bug to target only the "4.15.0" version, but multiple target versions were set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


@openshift-ci openshift-ci bot requested review from candita and gcs278 November 7, 2023 20:15
Contributor

openshift-ci bot commented Nov 7, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from alebedev87. For more information see the Kubernetes Code Review Process.


Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@alebedev87 alebedev87 changed the title from "[WIP] OCPBUGS-14914: set timeouts on the backend level" to "OCPBUGS-14914: set timeouts on the backend level" Nov 14, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 14, 2023
@alebedev87
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Nov 14, 2023
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

/jira refresh


@openshift-ci-robot openshift-ci-robot removed the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Nov 14, 2023
@openshift-ci openshift-ci bot requested a review from ShudiLi November 14, 2023 22:31
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

In response to this:

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.


@ShudiLi
Member

ShudiLi commented Nov 17, 2023

Tested it with 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest
1.
% oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest True False 50m Cluster version is 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest

% oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
myedge myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com unsec-server3 http edge None

% oc annotate route myedge haproxy.router.openshift.io/timeout-tunnel=70m
route.route.openshift.io/myedge annotated

% oc -n openshift-ingress rsh router-default-d78ddc6f-bprk5
sh-4.4$ grep -A7 "timeout server " haproxy-config.template
timeout server {{ $value }}
{{- else }}
timeout server {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s" }}
{{- end }}
{{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout-tunnel")) }}
timeout tunnel {{ $value }}
{{- else }}
timeout tunnel {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_TUNNEL_TIMEOUT") "1h" }}
{{- end }}

sh-4.4$

  2. Let the client fetch a large file at a limited speed of 1000 bytes/s. The transfer started at about 2023-11-17 03:18:56 and was terminated at about 2023-11-17 04:38:15, i.e. after more than 1h:

sh-4.4# wget --no-check-certificate --limit-rate=1000 --delete-after https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc -k
--2023-11-17 03:18:56-- https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc
Resolving myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)... 146.148.109.119
Connecting to myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)|146.148.109.119|:443... connected.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' is not trusted.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' hasn't got a known issuer.
HTTP request sent, awaiting response... 200 OK
Length: 121359408 (116M)
Saving to: 'oc.tmp'

oc.tmp 0%[ ] 239.88K 1000 B/s in 4m 6s

--2023-11-17 04:38:15-- (try:20) https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc
Connecting to myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)|146.148.109.119|:443... connected.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' is not trusted.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' hasn't got a known issuer.
HTTP request sent, awaiting response... 206 Partial Content
Length: 121359408 (116M), 116746470 (111M) remaining
Saving to: 'oc.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp'

oc.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp 3%[++++++ ] 4.62M 1000 B/s in 3m 55s

2023-11-17 04:42:09 (1000 B/s) - Read error at byte 4847481/121359408 (The TLS connection was non-properly terminated.). Giving up.

sh-4.4#

  3. Check the captured packets; the session is terminated after more than 1h:
    tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    03:19:09.684312 IP (tos 0x0, ttl 64, id 2327, offset 0, flags [DF], proto TCP (6), length 52)
    146.148.109.119.https > 10.129.2.17.43942: Flags [.], cksum 0x0cc4 (incorrect -> 0x0a65), ack 1908667883, win 506, options [nop,nop,TS val 1841440389 ecr 1595459286], length 0
    03:19:09.684337 IP (tos 0x0, ttl 64, id 12471, offset 0, flags [DF], proto TCP (6), length 52)
    10.129.2.17.43942 > 146.148.109.119.https: Flags [.], cksum 0x0cc4 (incorrect -> 0x189e), ack 1, win 0, options [nop,nop,TS val 1595462614 ecr 1841433925], length 0
    03:19:16.340278 IP (tos 0x0, ttl 64, id 2328, offset 0, flags [DF], proto TCP (6), length 52)
    146.148.109.119.https > 10.129.2.17.43942: Flags [.], cksum 0x0cc4 (incorrect -> 0xe364), ack 1, win 506, options [nop,nop,TS val 1841447045 ecr 1595462614], length 0
    03:19:16.340316 IP (tos 0x0, ttl 64, id 12472, offset 0, flags [DF], proto TCP (6), length 52)
    10.129.2.17.43942 > 146.148.109.119.https: Flags [.], cksum 0x0cc4 (incorrect -> 0xfe9d), ack 1, win 0, options [nop,nop,TS val 1595469270 ecr 1841433925], length 0
    ...

04:41:53.345283 IP (tos 0x0, ttl 62, id 24165, offset 0, flags [DF], proto TCP (6), length 628)
146.148.109.119.https > 10.129.2.17.45884: Flags [P.], cksum 0xccdd (correct), seq 235640:236216, ack 904, win 505, options [nop,nop,TS val 927357442 ecr 1600426274], length 576
04:41:53.345307 IP (tos 0x0, ttl 62, id 24166, offset 0, flags [DF], proto TCP (6), length 628)
146.148.109.119.https > 10.129.2.17.45884: Flags [.], cksum 0x0157 (correct), seq 236216:236792, ack 904, win 505, options [nop,nop,TS val 927357442 ecr 1600426274], length 576
04:41:53.345371 IP (tos 0x0, ttl 64, id 62346, offset 0, flags [DF], proto TCP (6), length 52)

images/router/haproxy/conf/haproxy-config.template (outdated review threads, resolved)
hack/Dockerfile.debug (outdated review thread, resolved)
@alebedev87 alebedev87 changed the title from "OCPBUGS-14914: set timeouts on the backend level" to "[WIP] OCPBUGS-14914: set timeouts on the backend level" Nov 30, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 30, 2023
@alebedev87
Contributor Author

Back to WIP to address the config warning about missing timeout server in the middle backends:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n

@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch 2 times, most recently from 4113831 to 6096a39 Compare December 1, 2023 16:33
@alebedev87
Contributor Author

/test e2e-agnostic

@alebedev87
Contributor Author

/test e2e-metal-ipi-ovn-ipv6

@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.


@alebedev87 alebedev87 changed the title from "[WIP] OCPBUGS-14914: set timeouts on the backend level" to "OCPBUGS-14914: set timeouts on the backend level" Dec 4, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 4, 2023
@Miciah
Contributor

Miciah commented Dec 5, 2023

Back to WIP to address the config warning about missing timeout server in the middle backends:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n

I still see these errors in the most recent test results.

Did you mean to remove the WIP label?

@alebedev87
Contributor Author

#536 (comment) addressed.

@alebedev87 alebedev87 changed the title from "OCPBUGS-14914: set timeouts on the backend level" to "OCPBUGS-14914: Set tunnel and server timeouts only at the backend level" Jan 29, 2024
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.16.0) matches configured target version for branch (4.16.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.


@alebedev87
Contributor Author

/test e2e-upgrade

@ShudiLi
Member

ShudiLi commented Feb 1, 2024

Tested it with default, 10s, 90s, 70m on 4.16.0-0.test-2024-02-01-013554-ci-ln-g418bqb-latest

  1. Tunnel timeout:

oc get clusterversion

NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.16.0-0.test-2024-02-01-013554-ci-ln-g418bqb-latest True False 8h Cluster version is 4.16.0-0.test-2024-02-01-013554-ci-ln-g418bqb-latest

sh-4.4# ./run-cli.sh
WebSocket Client Connected
echo-protocol Connection Closed

real 0m15.130s
user 0m0.101s
sys 0m0.019s
sh-4.4#
sh-4.4# ./run-cli.sh
WebSocket Client Connected
echo-protocol Connection Closed

real 0m10.132s
user 0m0.108s
sys 0m0.016s
sh-4.4#
sh-4.4#
sh-4.4# ./run-cli.sh
WebSocket Client Connected
echo-protocol Connection Closed

real 0m30.128s
user 0m0.102s
sys 0m0.017s
sh-4.4#

sh-4.4# ./run-cli.sh
WebSocket Client Connected
echo-protocol Connection Closed

real 1m30.138s
user 0m0.109s
sys 0m0.019s
sh-4.4#

sh-4.4# ./run-cli.sh
WebSocket Client Connected
echo-protocol Connection Closed

real 70m0.140s
user 0m0.109s
sys 0m0.019s
sh-4.4#

  2. Server timeout; only the test result for 70m is included here:
    10.129.2.18.43904 > 10.128.2.13.webcache: Flags [P.], cksum 0x3f23 (correct), seq 1:420, ack 1, win 258, options [nop,nop,TS val 2217547651 ecr 3480210042], length 419: HTTP, length: 419
    GET / HTTP/1.1
    user-agent: curl/7.61.1
    accept: */*
    host: unsec-server3-default.apps.shudi-416g01.qe.gcp.devcluster.openshift.com
    x-forwarded-host: unsec-server3-default.apps.shudi-416g01.qe.gcp.devcluster.openshift.com
    x-forwarded-port: 80
    x-forwarded-proto: http
    forwarded: for=10.128.2.14;host=unsec-server3-default.apps.shudi-416g01.qe.gcp.devcluster.openshift.com;proto=http
    x-forwarded-for: 10.128.2.14

    10.129.2.18.43904 > 10.128.2.13.webcache: Flags [R.], cksum 0x9a82 (correct), seq 420, ack 1, win 258, options [nop,nop,TS val 2221747652 ecr 3480210043], length 0

@ShudiLi
Member

ShudiLi commented Feb 1, 2024

/label qe-approved

images/router/haproxy/conf/haproxy-config.template Outdated Show resolved Hide resolved
timeout server-fin {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_FIN_TIMEOUT") "1s" }}
timeout http-request {{ firstMatch $timeSpecPattern (env "ROUTER_SLOWLORIS_TIMEOUT") "10s" }}
timeout http-keep-alive {{ firstMatch $timeSpecPattern (env "ROUTER_SLOWLORIS_HTTP_KEEPALIVE") "300s" }}

# Long timeout for WebSocket connections.
timeout tunnel {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_TUNNEL_TIMEOUT") "1h" }}
Contributor


Why don't you need to specify timeout tunnel on be_sni or be_no_sni? Is HAProxy more lenient about not specifying timeout tunnel than it is about not specifying timeout server?

Does omitting timeout tunnel result in an infinite timeout, similarly to omitting timeout server, or does it fall back to using the server timeout as the tunnel timeout? Crucially, if you have a route that has a tunnel timeout that is greater than any route's server timeout, is the tunnel timeout respected, or does the server timeout in be_sni/be_no_sni cause the connection to be terminated before the route's tunnel timeout?

Contributor Author


Why don't you need to specify timeout tunnel on be_sni or be_no_sni? Is HAProxy more lenient about not specifying timeout tunnel than it is about not specifying timeout server?

Yes. Only the missing server timeout produces a warning message. I'm not sure the warning reports a real problem, though, given that the bottom backends still have the server timeout set.

Does omitting timeout tunnel result in an infinite timeout, similarly to omitting timeout server, or does it fall back to using the server timeout as the tunnel timeout? Crucially, if you have a route that has a tunnel timeout that is greater than any route's server timeout, is the tunnel timeout respected, or does the server timeout in be_sni/be_no_sni cause the connection to be terminated before the route's tunnel timeout?

Regarding the link between server and tunnel timeouts: I discovered that the tunnel timeout supersedes the server timeout in TCP proxies. Note that the middle backends in the router configuration are TCP proxies, so setting a tunnel timeout in defaults or on the middle backends (be_sni/be_no_sni) interferes with the server timeout behavior set at the bottom backend level. I didn't observe the opposite, though: a server timeout set on defaults or the middle backends didn't influence the tunnel timeout set on the bottom backends.
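
A minimal sketch of that interplay (illustrative names and values): leaving 'timeout tunnel' out of the middle backends keeps the per-route tunnel timeout authoritative, while the server timeout set there does not override anything below it.

backend be_sni
  timeout server 90s    # max across routes; observed not to override route-level tunnel timeouts
  # no 'timeout tunnel' here: setting one would mask the per-route value below

backend be_secure:app-ns:hello-openshift
  timeout server 30s    # still honored
  timeout tunnel 15s    # still honored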

@@ -581,10 +583,10 @@ backend {{ genBackendNamePrefix $cfg.TLSTermination }}:{{ $cfgIdx }}
{{- end }}
tcp-request content reject if !whitelist
{{- end }}
{{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout")) }}
{{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout") (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s") }}
Contributor


All these additional env calls add some overhead. As Andy says, "Trips across the Go/Template boundary can be expensive, particularly when you have many, many routes (10,000+)." Consider adding {{- $defaultServerTimeout := firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s" }} at the top of the template and using {{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout") $defaultServerTimeout) }} here.


As noted in #483 (review), clipHAProxyTimeoutValue (firstMatch $timeSpecPattern foo) could be simplified as clipHAProxyTimeoutValue foo. However, that might be better left as a separate change.

...And actually, now that I look at it again, I believe clipHAProxyTimeoutValue (firstMatch $timeSpecPattern foo) will silently ignore invalid values while clipHAProxyTimeoutValue foo will log them. Logging errors for invalid values might be a positive change, but it's an additional behavioral change that you might not want to tie to OCPBUGS-14914.

...Although thinking about it still further, I wonder: Is maxTimeoutFirstMatchedAndClipped going to cause clipHAProxyTimeoutValue to log an error for each route that has an invalid annotation value, where before this change those invalid values get filtered out by firstMatch before clipHAProxyTimeoutValue has a chance to report any errors?

Contributor Author


Consider adding {{- $defaultServerTimeout := firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s" }} at the top of the template and using {{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout") $defaultServerTimeout) }} here.

Done.
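
For reference, the hoisting might look roughly like this in the template (a sketch following the suggestion above, not necessarily the exact committed hunk):

{{- $defaultServerTimeout := firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s" }}

{{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout") $defaultServerTimeout) }}
  timeout server  {{ $value }}
{{- end }}

This trades a single env call at the top of the template for one per backend, which is where the saving comes from with many routes.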

images/router/haproxy/conf/haproxy-config.template (outdated review thread, resolved)
Comment on lines 397 to 407
timeout := clipHAProxyTimeoutValue(firstMatch(pattern, append([]string{cfg.Annotations[annotation]}, values...)...))
if timeout != "" {
// No error handling because clipHAProxyTimeoutValue returns
// a valid timeout or an empty string. The latter is already handled.
timeoutDuration, _ := time.ParseDuration(timeout)
Contributor


This is a lot of redundant computation:

  • maxTimeoutFirstMatchedAndClipped parses each route's timeout annotation value using firstMatch, then (if the value is valid) clipHAProxyTimeoutValue, then time.ParseDuration.
  • fe_sni calls maxTimeoutFirstMatchedAndClipped, so you're parsing each valid value 3 times.
  • fe_no_sni also calls maxTimeoutFirstMatchedAndClipped, so you're parsing each valid value 3 more times.
  • The backend calls clipHAProxyTimeoutValue (firstMatch $timeSpecPattern ...), so you're parsing each valid value 2 more times.

Granted, the regexp cache that Andy added in #268 avoids the cost of compiling the same regexp repeatedly, but you're still repeatedly doing a regexp match and then time.ParseDuration on the same value, right?

Contributor Author

@alebedev87 alebedev87 Jul 2, 2024


The goal of the function was to determine which timeout value will be set on the lowest layer backends. That's why I used exactly the same combination of the functions as used in the template.

After some time spent on this one, I didn't find a lot of room for improvement here. However, I implemented a benchmark test which:

  • allowed me to find a faster way to do the same: the max annotation timeout is now found first and the values are computed only once afterwards.
  • showed that I don't get any visible performance gain if I try to get the time.Duration from clipHAProxyTimeoutValue to avoid another call to ParseDuration.

Also, as you advised, I moved the calculation of the max server timeout out of the backends so that it's done only once.
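
A minimal Go sketch of the compute-once idea (simplified type and a hypothetical helper name; the real helper also clips the winning value with clipHAProxyTimeoutValue):

package sketch

import "time"

// ServiceAliasConfig is pared down to what this sketch needs.
type ServiceAliasConfig struct {
	Annotations map[string]string
}

// maxEffectiveTimeout parses each route's effective timeout exactly once:
// the annotation value when present, the default otherwise. It returns the
// raw string of the largest duration. Values that time.ParseDuration cannot
// handle (e.g. HAProxy's 'd' unit) are simply skipped in this sketch.
func maxEffectiveTimeout(state map[string]ServiceAliasConfig, annotation, defaultValue string) string {
	best, bestDur := "", time.Duration(0)
	consider := func(raw string) {
		if d, err := time.ParseDuration(raw); err == nil && d > bestDur {
			best, bestDur = raw, d
		}
	}
	for _, cfg := range state {
		if v := cfg.Annotations[annotation]; v != "" {
			consider(v) // the annotation wins for this route, even if smaller than the default
		} else {
			consider(defaultValue) // this route falls back to the default
		}
	}
	if len(state) == 0 {
		consider(defaultValue) // no routes at all: the default is the maximum
	}
	return best
}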

"parseIPList": parseIPList, //parses the list of IPs/CIDRs (IPv4/IPv6)
"clipHAProxyTimeoutValue": clipHAProxyTimeoutValue, //clips extraordinarily high timeout values to be below the maximum allowed timeout value
"parseIPList": parseIPList, //parses the list of IPs/CIDRs (IPv4/IPv6)
"maxTimeoutFirstMatchedAndClipped": maxTimeoutFirstMatchedAndClipped, //finds the maximum timeout managed by a given annotation among all the routes, matches it against a given pattern, and clips
Contributor


Keep in mind that this list of helper functions is effectively an API. We cannot rename or remove a helper function without potentially breaking third-party templates, so we tend to prefer defining general and composable helpers. That said, I think it is worth putting this functionality in a single specialized helper to avoid excessive performance impact.

Contributor Author


I think it is worth putting this functionality in a single specialized helper to avoid excessive performance impact.

I'm not sure I understand how putting the function into a different helper is supposed to remove the API contract, taking into account that the function will still be used in the config template.

Contributor


My point was that there is a tradeoff here, and we should be aware of it.

  • Pro: Simpler, more efficient template.
  • Con: Risk of breaking unsupported customizations if we ever change or remove the helper.

However, it is probably worth making that tradeoff here. I'm not asking you to change anything; I'm just trying to be deliberate and explicit about the design choice.

Comment on lines +1075 to +1076
annotation string
pattern string
Contributor

@Miciah Miciah Feb 13, 2024


Could you add test cases with different values for annotation and pattern?

Contributor Author


I had a test case with 2 routes, one of which didn't match the pattern (One route timeout doesn't match pattern). I added another one in which no route matches the pattern (No route timeout matches pattern). Let me know if this doesn't address your comment.

pkg/router/template/template_helper_test.go (review thread resolved)
Comment on lines +1106 to +1116
name: "Default timeout is maximum",
state: map[ServiceAliasConfigKey]ServiceAliasConfig{
"test:route1": {},
"test:route2": {},
"test:route3": {},
},
annotation: "haproxy.router.openshift.io/timeout",
pattern: `[1-9][0-9]*(us|ms|s|m|h|d)?`,
values: []string{"30s"},
expected: "30s",
},
Contributor


Should at least one of the routes specify a value? It makes sense to me to have a test case for "some routes specify timeouts, but the default timeout is the maximum".

Contributor


By the way, what happens if you have 0 routes?

Contributor Author


Should at least one of the routes specify a value? It makes sense to me to have a test case for "some routes specify timeouts, but the default timeout is the maximum".

The goal of maxTimeoutFirstMatchedAndClipped is to get the maximum timeout among the ones set on the lowest layer backends, rather than the maximum of all values provided to the function. For instance, if a route has a timeout annotation set, it will take precedence over the default timeout, even if the default timeout is greater. I updated the godoc for the function to highlight this.

By the way, what happens if you have 0 routes?

Unit test case added to cover this case (Empty state).

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2024
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 26, 2024
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch from e165421 to 05a37e0 Compare June 28, 2024 21:26
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2024
@alebedev87
Contributor Author

Reanimating the PR: starting to address Miciah's review.

@alebedev87
Contributor Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 28, 2024
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch 3 times, most recently from c1b2a01 to c67ef49 Compare July 2, 2024 10:04
@alebedev87
Contributor Author

alebedev87 commented Jul 2, 2024

High-level perf test of writeConfig events for a cluster with ~1000 routes.

Baseline (CI image == 4.17):

 $ grep -Po 'writeConfig.*' baseline-1000-2-pod*.logs | cut -d ' ' -f1,2
baseline-1000-2-pod1.logs:writeConfig" "duration"="794.385841ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="867.438091ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="990.070294ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="852.333994ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="815.822647ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="988.009627ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="853.049032ms"

Change (image with c67ef49 latest commit):

$ grep -Po 'writeConfig.*' c67ef49-* | cut -d ' ' -f1,2
c67ef49-pod1.logs:writeConfig" "duration"="1.081579122s"
c67ef49-pod1.logs:writeConfig" "duration"="914.782473ms"
c67ef49-pod1.logs:writeConfig" "duration"="1.002188869s"
c67ef49-pod1.logs:writeConfig" "duration"="1.046333068s"
c67ef49-pod1.logs:writeConfig" "duration"="958.092477ms"
c67ef49-pod2.logs:writeConfig" "duration"="947.33866ms"
c67ef49-pod2.logs:writeConfig" "duration"="1.003091258s"
c67ef49-pod2.logs:writeConfig" "duration"="917.348525ms"
c67ef49-pod2.logs:writeConfig" "duration"="1.013730305s"
c67ef49-pod2.logs:writeConfig" "duration"="995.759517ms"

Comment on lines +28 to +31
BENCH_PKGS ?= $(shell \grep -lR '^func Benchmark' | xargs -I {} dirname {} | sed 's|^|./|' | paste -sd ' ')
BENCH_TEST ?= .
Contributor


Would you mind adding a couple of comments about what these are for? I see below that BENCH_PKGS is a value for flag -benchmem, so it could use a clarification.

Contributor Author


Done.

@candita
Contributor

candita commented Aug 20, 2024

If I understand correctly, this represents a major change for our users - whatever is the max connect timeout in ANY route becomes the max server and tunnel timeout for all the routes on the backend. I strongly feel this needs to be configurable, and off by default, so that existing users see the same behavior when they upgrade to a version that has this fix.

@alebedev87
Contributor Author

If I understand correctly, this represents a major change for our users - whatever is the max connect timeout in ANY route becomes the max server and tunnel timeout for all the routes on the backend.

@candita: 1) only the server timeout is set to the max value on the middle backends, and 2) this is only one part of the change. The other part is the server and tunnel timeouts set on all the route backends. Note that the default timeout values still apply at the route level when no annotation is set.

We should not have any change in behavior, because for the routes which don't set any value for the timeouts (in the annotations), the default value will be set on the route backend.

I strongly feel this needs to be configurable, and off by default, so that existing users see the same behavior when they upgrade to a version that has this fix.

"Configurable" this may put us in the same situation as right now - when a user can set a global (almost) timeout and override the timeouts set on the route level. "Off by default" would lead us to this warning during the config reloads.

Overall, I understand the concern. The change could potentially hide something I didn't manage to cover manually or with our test suites. However, I don't see any other solution for this bug. Let me try to organize a discussion about this PR outside of GitHub so that we can come to a conclusion.

The middle backends (`be_sni`, `be_no_sni`) are updated with the server timeout
which is set to the maximum value among all routes from the configuration.
This prevents a warning message during config reloads.

A benchmark test is added for the template helper function which
finds the maximum timeout value among all routes (`maxTimeoutFirstMatchedAndClipped`).
A new make target is introduced to run all benchmark tests (`bench`).
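
A usage sketch for the new target (the benchmark name below is hypothetical; BENCH_PKGS and BENCH_TEST are the Makefile variables discussed in the review thread above):

$ make bench
$ make bench BENCH_TEST=BenchmarkMaxTimeoutFirstMatchedAndClipped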
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch from c67ef49 to d70581e Compare October 8, 2024 08:34
@alebedev87 alebedev87 changed the title from "OCPBUGS-14914: Set tunnel and server timeouts only at the backend level" to "OCPBUGS-14914: Set tunnel and server timeouts at backend level" Oct 8, 2024
@alebedev87
Contributor Author

Squashed the commits.

@alebedev87
Contributor Author

/test e2e-metal-ipi-ovn-ipv6

Contributor

openshift-ci bot commented Oct 8, 2024

@alebedev87: all tests passed!


@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.


@alebedev87
Contributor Author

alebedev87 commented Oct 30, 2024

The slide deck has been updated with sequence diagrams: current and future.

Labels
jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. qe-approved Signifies that QE has signed off on this PR
7 participants