
chart(update): use podIP in all components server host #2429

Merged: 2 commits into trunk on Oct 12, 2024
Conversation

@VietND96 (Member) commented Oct 12, 2024

User description


Description

Motivation and Context

Fixes #2065

  • Node Service resource creation is disabled by default. Ideally, a Node should not be exposed to direct access from outside. Moreover, when Node replicas > 1 or autoscaling is enabled, multiple Node pods would sit behind a single Service resource, which is not sensible.
  • Component config --host now refers to status.podIP (see the sketch after this list).
  • Enable the Istio addon in Minikube for testing.
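
For context, here is a minimal sketch (assumed field names, not necessarily the chart's exact template) of how a container can learn its own pod IP through the Kubernetes Downward API and use it as the component host:

    # Sketch: expose the pod's IP to the container via the Downward API.
    # SE_NODE_HOST is read by the Node as its --host value; the same
    # pattern applies to SE_DISTRIBUTOR_HOST, SE_ROUTER_HOST, and so on.
    env:
      - name: SE_NODE_HOST
        valueFrom:
          fieldRef:
            fieldPath: status.podIP

With the host bound to the pod IP, each replica registers under its own address instead of a shared Service name, which is what makes replicas > 1 and autoscaling behave sensibly.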

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • I have read the contributing document.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

PR Type

enhancement, configuration changes


Description

  • Added support for using pod IPs for host configuration across components, so each pod advertises its own address rather than a shared Service name.
  • Disabled Node service resource creation by default to prevent unnecessary exposure and improve scalability.
  • Integrated service mesh configuration into the Minikube setup and CI workflow, allowing optional Istio testing.
  • Updated documentation to reflect the changed default service creation setting for Nodes.

Changes walkthrough 📝

Relevant files

Configuration changes (3 files)

chart_cluster_setup.sh: Add service mesh configuration for Minikube setup
tests/charts/make/chart_cluster_setup.sh

  • Added SERVICE_MESH variable with default value false.
  • Enabled the Istio addons in Minikube if SERVICE_MESH is true.
  +5/-0

helm-chart-test.yml: Integrate service mesh configuration in CI workflow
.github/workflows/helm-chart-test.yml

  • Added service-mesh configuration to the test matrix.
  • Passed SERVICE_MESH to the chart cluster setup command (see the sketch below).
  +9/-1
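
A hedged sketch of how this wiring can look (job, step, and matrix names are assumptions, not the repository's exact workflow):

    # Sketch: a matrix entry toggles service-mesh testing, and the cluster
    # setup step forwards the flag to the setup script, which can then run
    # something like `minikube addons enable istio` when it is true.
    jobs:
      test-k8s:
        strategy:
          matrix:
            service-mesh: [false, true]
        steps:
          - name: Set up Kubernetes cluster
            run: SERVICE_MESH=${{ matrix.service-mesh }} make chart_cluster_setup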

values.yaml: Disable node service creation by default
charts/selenium-grid/values.yaml

  • Set default node service creation to false (see the sketch below).
  +3/-3
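
A sketch of the resulting defaults (key paths assumed from the chart's per-browser Node sections):

    # Sketch: Node Services are no longer created by default; enable them
    # per node type only when a Node must be reachable directly.
    chromeNode:
      service:
        enabled: false
    firefoxNode:
      service:
        enabled: false
    edgeNode:
      service:
        enabled: false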

Enhancement (7 files)

_helpers.tpl: Use pod IP for node host configuration
charts/selenium-grid/templates/_helpers.tpl

  • Added SE_NODE_HOST environment variable to use the pod IP (see the sketch below).
  +4/-0
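
A minimal sketch of a named template emitting that entry (the helper name is an assumption; the chart's actual helper may differ):

    {{- define "seleniumGrid.node.hostEnv" -}}
    - name: SE_NODE_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    {{- end -}}

A deployment template would then pull it in with something like {{ include "seleniumGrid.node.hostEnv" . | nindent 12 }} inside the container's env list.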

distributor-deployment.yaml: Use pod IP for distributor host configuration
charts/selenium-grid/templates/distributor-deployment.yaml

  • Changed SE_DISTRIBUTOR_HOST to use the pod IP.
  +3/-1

event-bus-deployment.yaml: Use pod IP for event bus host configuration
charts/selenium-grid/templates/event-bus-deployment.yaml

  • Changed SE_EVENT_BUS_HOST to use the pod IP.
  +3/-1

hub-deployment.yaml: Use pod IP for hub host configuration
charts/selenium-grid/templates/hub-deployment.yaml

  • Changed SE_HUB_HOST to use the pod IP.
  +3/-1

router-deployment.yaml: Use pod IP for router host configuration
charts/selenium-grid/templates/router-deployment.yaml

  • Changed SE_ROUTER_HOST to use the pod IP.
  +3/-1

session-map-deployment.yaml: Use pod IP for session map host configuration
charts/selenium-grid/templates/session-map-deployment.yaml

  • Changed SE_SESSIONS_HOST to use the pod IP.
  +3/-1

session-queue-deployment.yaml: Use pod IP for session queue host configuration
charts/selenium-grid/templates/session-queue-deployment.yaml

  • Changed SE_SESSION_QUEUE_HOST to use the pod IP.
  +3/-1

Documentation (1 file)

CONFIGURATION.md: Update node service creation default setting
charts/selenium-grid/CONFIGURATION.md

  • Updated the default service creation setting for nodes to false.
  +3/-3

💡 PR-Agent usage: Comment /help "your question" on any pull request to receive relevant information


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Configuration Change
The SE_NODE_HOST environment variable is now set to use the pod IP. This change might affect how nodes communicate within the cluster.

Default Behavior Change
The default setting for creating node services has been changed to false. This might impact existing deployments that rely on these services.

Networking Change
The SE_DISTRIBUTOR_HOST is now set to use the pod IP instead of the service name. This change could affect how other components communicate with the distributor (see the before/after sketch below).
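
To make the networking change concrete, a before/after sketch for the distributor (the "before" value is illustrative of a Service-name-based host, not the chart's exact previous template):

    # Before (sketch): the component advertised a Service DNS name that
    # could front multiple pods.
    - name: SE_DISTRIBUTOR_HOST
      value: selenium-distributor.selenium.svc.cluster.local

    # After (sketch): each pod advertises its own IP.
    - name: SE_DISTRIBUTOR_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.podIP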


PR Code Suggestions ✨

No code suggestions found for the PR.

qodo-merge-pro bot commented Oct 12, 2024

CI Failure Feedback 🧐

(Checks updated until commit cc2dd65)

Action: Test Selenium Grid on Kubernetes / Test K8s (v1.28.14, deployment, minikube, v3.13.3, 24.0.9, true, true)

Failed stage: Test chart upgrade [❌]

Failed test name: ""

Failure summary:

The action failed due to multiple issues:

  • The Helm upgrade command failed with the error "timed out waiting for the condition" while deploying the Selenium Grid chart, meaning the deployment did not complete within the expected time frame.
  • Several Kubernetes pods, such as selenium-chrome-node, selenium-edge-node, and selenium-firefox-node, were stuck in the "PodInitializing" state, preventing them from starting properly.
  • The make chart_test_autoscaling_deployment command failed with exit code 2, indicating an error during that step.

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    167:  fi
    168:  
    169:  # Option: Remove large packages
    170:  # REF: https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
    171:  
    172:  if [[ false == 'true' ]]; then
    173:    BEFORE=$(getAvailableSpace)
    174:    
    175:    sudo apt-get remove -y '^aspnetcore-.*' || echo "::warning::The command [sudo apt-get remove -y '^aspnetcore-.*'] failed to complete successfully. Proceeding..."
    176:    sudo apt-get remove -y '^dotnet-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^dotnet-.*' --fix-missing] failed to complete successfully. Proceeding..."
    177:    sudo apt-get remove -y '^llvm-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^llvm-.*' --fix-missing] failed to complete successfully. Proceeding..."
    178:    sudo apt-get remove -y 'php.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y 'php.*' --fix-missing] failed to complete successfully. Proceeding..."
    179:    sudo apt-get remove -y '^mongodb-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mongodb-.*' --fix-missing] failed to complete successfully. Proceeding..."
    180:    sudo apt-get remove -y '^mysql-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mysql-.*' --fix-missing] failed to complete successfully. Proceeding..."
    181:    sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing || echo "::warning::The command [sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing] failed to complete successfully. Proceeding..."
    182:    sudo apt-get remove -y google-cloud-sdk --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-sdk --fix-missing] failed to complete successfully. Proceeding..."
    183:    sudo apt-get remove -y google-cloud-cli --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-cli --fix-missing] failed to complete successfully. Proceeding..."
    184:    sudo apt-get autoremove -y || echo "::warning::The command [sudo apt-get autoremove -y] failed to complete successfully. Proceeding..."
    185:    sudo apt-get clean || echo "::warning::The command [sudo apt-get clean] failed to complete successfully. Proceeding..."
    ...
    
    529:  with:
    530:  timeout_minutes: 10
    531:  max_attempts: 3
    532:  command: make setup_dev_env
    533:  
    534:  retry_wait_seconds: 10
    535:  polling_interval_seconds: 1
    536:  warning_on_retry: true
    537:  continue_on_error: false
    ...
    
    1044:  go: downloading github.com/google/uuid v1.3.0
    1045:  go: downloading github.com/huandu/xstrings v1.4.0
    1046:  go: downloading github.com/imdario/mergo v0.3.13
    1047:  go: downloading github.com/mitchellh/copystructure v1.2.0
    1048:  go: downloading github.com/shopspring/decimal v1.3.1
    1049:  go: downloading golang.org/x/crypto v0.21.0
    1050:  go: downloading golang.org/x/text v0.14.0
    1051:  go: downloading github.com/mitchellh/reflectwalk v1.0.2
    1052:  go: downloading github.com/pkg/errors v0.9.1
    ...
    
    1055:  helm-docs [flags]
    1056:  Flags:
    1057:  -b, --badge-style string                                 badge style to use for charts (default "flat-square")
    1058:  -c, --chart-search-root string                           directory to search recursively within for charts (default ".")
    1059:  -g, --chart-to-generate strings                          List of charts that will have documentation generated. Comma separated, no space. Empty list - generate for all charts in chart-search-root
    1060:  -u, --document-dependency-values                         For charts with dependencies, include the dependency values in the chart values documentation
    1061:  -y, --documentation-strict-ignore-absent strings         A comma separate values which are allowed not to be documented in strict mode (default [service.type,image.repository,image.tag])
    1062:  -z, --documentation-strict-ignore-absent-regex strings   A comma separate values which are allowed not to be documented in strict mode (default [.*service\.type,.*image\.repository,.*image\.tag])
    1063:  -x, --documentation-strict-mode                          Fail the generation of docs if there are undocumented values
    1064:  -d, --dry-run                                            don't actually render any markdown files just print to stdout passed
    1065:  -h, --help                                               help for helm-docs
    1066:  -i, --ignore-file string                                 The filename to use as an ignore file to exclude chart directories (default ".helmdocsignore")
    1067:  --ignore-non-descriptions                            ignore values without a comment, this values will not be included in the README
    1068:  -l, --log-level string                                   Level of logs that should printed, one of (panic, fatal, error, warning, info, debug, trace) (default "info")
    ...
    
    1425:  VERSION: 4.26.0-SNAPSHOT
    1426:  BUILD_DATE: 20241012
    1427:  IMAGE_REGISTRY: artifactory/selenium
    1428:  AUTHORS: SeleniumHQ
    1429:  ##[endgroup]
    1430:  VERSION=4.26.0-SNAPSHOT-20241012 ./tests/charts/make/chart_build.sh
    1431:  + SET_VERSION=true
    1432:  + CHART_PATH=charts/selenium-grid
    1433:  + trap on_failure ERR
    ...
    
    1495:  Downloading jaeger from repo https://jaegertracing.github.io/helm-charts
    1496:  Downloading kube-prometheus-stack from repo https://prometheus-community.github.io/helm-charts
    1497:  Deleting outdated charts
    1498:  Linting chart "selenium-grid => (version: \"0.36.2\", path: \"charts/selenium-grid\")"
    1499:  Validating /home/runner/work/docker-selenium/docker-selenium/charts/selenium-grid/Chart.yaml...
    1500:  Validation success! 👍
    1501:  Validating maintainers...
    1502:  ==> Linting charts/selenium-grid
    1503:  1 chart(s) linted, 0 chart(s) failed
    ...
    
    1517:  ##[group]Run nick-invision/retry@master
    1518:  with:
    1519:  timeout_minutes: 12
    1520:  max_attempts: 3
    1521:  retry_wait_seconds: 60
    1522:  command: NAME=${IMAGE_REGISTRY} VERSION=${BRANCH} BUILD_DATE=${BUILD_DATE} make build
    1523:  polling_interval_seconds: 1
    1524:  warning_on_retry: true
    1525:  continue_on_error: false
    ...
    
    1555:  rm -rf ./Base/configs/node && mkdir -p ./Base/configs/node && cp -r ./charts/selenium-grid/configs/node ./Base/configs
    1556:  rm -rf ./Base/certs && cp -r ./charts/selenium-grid/certs ./Base
    1557:  ./Base/certs/gen-cert-helper.sh -d ./Base/certs
    1558:  Generating 2,048 bit RSA key pair and self-signed certificate (SHA256withRSA) with a validity of 3,650 days
    1559:  for: CN=SeleniumHQ, OU=Software Freedom Conservancy, O=SeleniumHQ, L=Unknown, ST=Unknown, C=Unknown
    1560:  [Storing server.jks]
    1561:  Importing keystore server.jks to tls.p12...
    1562:  Entry for alias seleniumhq successfully imported.
    1563:  Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
    ...
    
    2274:  #10 20.92 Downloaded https://repo1.maven.org/maven2/io/netty/netty-buffer/4.1.113.Final/netty-buffer-4.1.113.Final.pom
    2275:  #10 20.92 Downloading https://repo1.maven.org/maven2/io/netty/netty-codec/4.1.113.Final/netty-codec-4.1.113.Final.pom
    2276:  #10 20.93 Downloaded https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-sdk-trace/1.42.1/opentelemetry-sdk-trace-1.42.1.pom
    2277:  #10 20.93 Downloaded https://repo1.maven.org/maven2/com/google/guava/guava/33.2.1-android/guava-33.2.1-android.pom
    2278:  #10 20.93 Downloaded https://repo1.maven.org/maven2/io/netty/netty-codec/4.1.113.Final/netty-codec-4.1.113.Final.pom
    2279:  #10 20.93 Downloaded https://repo1.maven.org/maven2/io/perfmark/perfmark-api/0.27.0/perfmark-api-0.27.0.pom
    2280:  #10 20.93 Downloading https://repo1.maven.org/maven2/io/netty/netty-codec-http2/4.1.100.Final/netty-codec-http2-4.1.100.Final.pom
    2281:  #10 20.93 Downloading https://repo1.maven.org/maven2/io/grpc/grpc-util/1.66.0/grpc-util-1.66.0.pom
    2282:  #10 20.93 Downloading https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.28.0/error_prone_annotations-2.28.0.pom
    2283:  #10 20.94 Downloaded https://repo1.maven.org/maven2/io/netty/netty-common/4.1.113.Final/netty-common-4.1.113.Final.pom
    2284:  #10 20.94 Downloaded https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-exporter-otlp-common/1.42.1/opentelemetry-exporter-otlp-common-1.42.1.pom
    2285:  #10 20.94 Downloading https://repo1.maven.org/maven2/io/netty/netty-handler-proxy/4.1.100.Final/netty-handler-proxy-4.1.100.Final.pom
    2286:  #10 20.95 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-util/1.66.0/grpc-util-1.66.0.pom
    2287:  #10 20.95 Downloading https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-exporter-sender-okhttp/1.42.1/opentelemetry-exporter-sender-okhttp-1.42.1.pom
    2288:  #10 20.95 Downloading https://repo1.maven.org/maven2/io/grpc/grpc-core/1.66.0/grpc-core-1.66.0.pom
    2289:  #10 20.96 Downloaded https://repo1.maven.org/maven2/io/netty/netty-codec-http2/4.1.100.Final/netty-codec-http2-4.1.100.Final.pom
    2290:  #10 20.96 Downloading https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-sdk-metrics/1.42.1/opentelemetry-sdk-metrics-1.42.1.pom
    2291:  #10 20.96 Downloaded https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.28.0/error_prone_annotations-2.28.0.pom
    ...
    
    2298:  #10 20.98 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-core/1.66.0/grpc-core-1.66.0.pom
    2299:  #10 20.98 Downloaded https://repo1.maven.org/maven2/io/netty/netty-transport-native-unix-common/4.1.100.Final/netty-transport-native-unix-common-4.1.100.Final.pom
    2300:  #10 20.98 Downloading https://repo1.maven.org/maven2/io/grpc/grpc-api/1.66.0/grpc-api-1.66.0.pom
    2301:  #10 20.99 Downloaded https://repo1.maven.org/maven2/io/netty/netty-handler/4.1.113.Final/netty-handler-4.1.113.Final.pom
    2302:  #10 20.99 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-api/1.66.0/grpc-api-1.66.0.pom
    2303:  #10 21.00 Downloaded https://repo1.maven.org/maven2/io/netty/netty-transport/4.1.113.Final/netty-transport-4.1.113.Final.pom
    2304:  #10 21.03 Downloading https://repo1.maven.org/maven2/com/google/guava/guava-parent/33.2.1-android/guava-parent-33.2.1-android.pom
    2305:  #10 21.03 Downloading https://repo1.maven.org/maven2/io/netty/netty-parent/4.1.100.Final/netty-parent-4.1.100.Final.pom
    2306:  #10 21.03 Downloading https://repo1.maven.org/maven2/com/google/errorprone/error_prone_parent/2.28.0/error_prone_parent-2.28.0.pom
    2307:  #10 21.03 Downloaded https://repo1.maven.org/maven2/io/netty/netty-parent/4.1.100.Final/netty-parent-4.1.100.Final.pom
    2308:  #10 21.04 Downloaded https://repo1.maven.org/maven2/com/google/errorprone/error_prone_parent/2.28.0/error_prone_parent-2.28.0.pom
    ...
    
    2421:  #10 21.65 Downloading https://repo1.maven.org/maven2/io/netty/netty-codec-http/4.1.113.Final/netty-codec-http-4.1.113.Final.jar
    2422:  #10 21.66 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-api/1.66.0/grpc-api-1.66.0.jar
    2423:  #10 21.66 Downloading https://repo1.maven.org/maven2/com/google/guava/failureaccess/1.0.2/failureaccess-1.0.2.jar
    2424:  #10 21.66 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-util/1.66.0/grpc-util-1.66.0.jar
    2425:  #10 21.66 Downloading https://repo1.maven.org/maven2/com/squareup/okhttp3/okhttp/4.12.0/okhttp-4.12.0.jar
    2426:  #10 21.66 Downloaded https://repo1.maven.org/maven2/io/grpc/grpc-netty/1.66.0/grpc-netty-1.66.0.jar
    2427:  #10 21.66 Downloading https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-sdk-trace/1.42.1/opentelemetry-sdk-trace-1.42.1.jar
    2428:  #10 21.67 Downloaded https://repo1.maven.org/maven2/org/checkerframework/checker-qual/3.42.0/checker-qual-3.42.0.jar
    2429:  #10 21.67 Downloading https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.28.0/error_prone_annotations-2.28.0.jar
    2430:  #10 21.67 Downloaded https://repo1.maven.org/maven2/com/google/guava/failureaccess/1.0.2/failureaccess-1.0.2.jar
    2431:  #10 21.67 Downloading https://repo1.maven.org/maven2/org/jetbrains/kotlin/kotlin-stdlib-common/1.9.10/kotlin-stdlib-common-1.9.10.jar
    2432:  #10 21.67 Downloaded https://repo1.maven.org/maven2/io/opentelemetry/opentelemetry-sdk/1.42.1/opentelemetry-sdk-1.42.1.jar
    2433:  #10 21.67 Downloading https://repo1.maven.org/maven2/com/google/code/gson/gson/2.11.0/gson-2.11.0.jar
    2434:  #10 21.68 Downloaded https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.28.0/error_prone_annotations-2.28.0.jar
    ...
    
    2484:  #14 DONE 0.0s
    2485:  #15 [stage-0 7/8] COPY --chown=1200:1201 certs/tls.crt certs/tls.key certs/server.jks certs/server.pass /opt/selenium/secrets/
    2486:  #15 DONE 0.0s
    2487:  #16 [stage-0 8/8] RUN /opt/bin/add-jks-helper.sh -d /opt/selenium/secrets     && /opt/bin/add-cert-helper.sh -d /opt/selenium/secrets TCu,Cu,Tu
    2488:  #16 0.160 seluser is running cert script!
    2489:  #16 0.549 Processing /opt/selenium/secrets/server.jks
    2490:  #16 0.845 Certificate stored in file </tmp/SeleniumHQ.pem>
    2491:  #16 1.014 Warning: use -cacerts option to access cacerts keystore
    2492:  #16 1.121 keytool error: java.lang.Exception: Alias <SeleniumHQ> does not exist
    2493:  #16 1.250 Warning: use -cacerts option to access cacerts keystore
    2494:  #16 1.361 Certificate was added to keystore
    2495:  #16 1.484 Warning: use -cacerts option to access cacerts keystore
    2496:  #16 1.710 The certificate with alias SeleniumHQ is present in /etc/ssl/certs/java/cacerts
    2497:  #16 2.133 seluser is running cert script!
    2498:  #16 2.219 Processing /opt/selenium/secrets/tls.crt
    2499:  #16 2.221 Adding to db: /home/seluser/.pki/nssdb/cert9.db
    2500:  #16 2.229 certutil: could not find certificate named "SeleniumHQ": SEC_ERROR_INVALID_ARGS: security library: invalid arguments.
    ...
    
    3062:  #10 3.392 W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/ubuntu.sources:1
    3063:  #10 3.392 W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/ubuntu.sources:1
    3064:  #10 3.392 W: Target Packages (universe/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/ubuntu.sources:1
    3065:  #10 3.392 W: Target Packages (universe/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/ubuntu.sources:1
    3066:  #10 3.392 W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:5 and /etc/apt/sources.list.d/ubuntu.sources:2
    3067:  #10 3.392 W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:5 and /etc/apt/sources.list.d/ubuntu.sources:2
    3068:  #10 3.392 W: Target Packages (universe/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:5 and /etc/apt/sources.list.d/ubuntu.sources:2
    3069:  #10 3.392 W: Target Packages (universe/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:5 and /etc/apt/sources.list.d/ubuntu.sources:2
    3070:  #10 6.545 perl: warning: Setting locale failed.
    ...
    
    3132:  #10 7.040 Setting up libkmod2:amd64 (31+20240202-2ubuntu7) ...
    3133:  #10 7.043 Setting up libsystemd-shared:amd64 (255.4-1ubuntu8.4) ...
    3134:  #10 7.047 Setting up systemd-dev (255.4-1ubuntu8.4) ...
    3135:  #10 7.050 Setting up systemd (255.4-1ubuntu8.4) ...
    3136:  #10 7.069 Created symlink /etc/systemd/system/getty.target.wants/getty@tty1.service → /usr/lib/systemd/system/getty@tty1.service.
    3137:  #10 7.072 Created symlink /etc/systemd/system/multi-user.target.wants/remote-fs.target → /usr/lib/systemd/system/remote-fs.target.
    3138:  #10 7.076 Created symlink /etc/systemd/system/sysinit.target.wants/systemd-pstore.service → /usr/lib/systemd/system/systemd-pstore.service.
    3139:  #10 7.079 Initializing machine ID from random generator.
    3140:  #10 7.097 /usr/lib/tmpfiles.d/systemd-network.conf:10: Failed to resolve user 'systemd-network': No such process
    3141:  #10 7.097 /usr/lib/tmpfiles.d/systemd-network.conf:11: Failed to resolve user 'systemd-network': No such process
    3142:  #10 7.097 /usr/lib/tmpfiles.d/systemd-network.conf:12: Failed to resolve user 'systemd-network': No such process
    3143:  #10 7.097 /usr/lib/tmpfiles.d/systemd-network.conf:13: Failed to resolve user 'systemd-network': No such process
    3144:  #10 7.098 /usr/lib/tmpfiles.d/systemd.conf:22: Failed to resolve group 'systemd-journal': No such process
    3145:  #10 7.098 /usr/lib/tmpfiles.d/systemd.conf:23: Failed to resolve group 'systemd-journal': No such process
    3146:  #10 7.098 /usr/lib/tmpfiles.d/systemd.conf:28: Failed to resolve group 'systemd-journal': No such process
    3147:  #10 7.098 /usr/lib/tmpfiles.d/systemd.conf:29: Failed to resolve group 'systemd-journal': No such process
    3148:  #10 7.098 /usr/lib/tmpfiles.d/systemd.conf:30: Failed to resolve group 'systemd-journal': No such process
    ...
    
    4244:  #10 65.95   inflating: noVNC-master/.github/workflows/test.yml  
    4245:  #10 65.95   inflating: noVNC-master/.github/workflows/translate.yml  
    4246:  #10 65.95   inflating: noVNC-master/.gitignore  
    4247:  #10 65.95  extracting: noVNC-master/.gitmodules  
    4248:  #10 65.95   inflating: noVNC-master/AUTHORS    
    4249:  #10 65.95   inflating: noVNC-master/LICENSE.txt  
    4250:  #10 65.95   inflating: noVNC-master/README.md  
    4251:  #10 65.95    creating: noVNC-master/app/
    4252:  #10 65.95   inflating: noVNC-master/app/error-handler.js  
    4253:  #10 65.95    creating: noVNC-master/app/images/
    4254:  #10 65.95   inflating: noVNC-master/app/images/alt.svg  
    4255:  #10 65.95   inflating: noVNC-master/app/images/clipboard.svg  
    4256:  #10 65.95   inflating: noVNC-master/app/images/connect.svg  
    4257:  #10 65.95   inflating: noVNC-master/app/images/ctrl.svg  
    4258:  #10 65.95   inflating: noVNC-master/app/images/ctrlaltdel.svg  
    4259:  #10 65.95   inflating: noVNC-master/app/images/disconnect.svg  
    4260:  #10 65.95   inflating: noVNC-master/app/images/drag.svg  
    4261:  #10 65.95   inflating: noVNC-master/app/images/error.svg  
    ...
    
    7331:  ##[group]Run nick-invision/retry@master
    7332:  with:
    7333:  timeout_minutes: 10
    7334:  max_attempts: 3
    7335:  command: CLUSTER=${CLUSTER} SERVICE_MESH=${SERVICE_MESH} KUBERNETES_VERSION=${KUBERNETES_VERSION} NAME=${IMAGE_REGISTRY} VERSION=${BRANCH} BUILD_DATE=${BUILD_DATE} make chart_cluster_setup
    7336:  retry_wait_seconds: 10
    7337:  polling_interval_seconds: 1
    7338:  warning_on_retry: true
    7339:  continue_on_error: false
    ...
    
    7376:  + SELENIUM_GRID_HOST=localhost
    7377:  + SELENIUM_GRID_PORT=80
    7378:  + WAIT_TIMEOUT=90s
    7379:  + SKIP_CLEANUP=false
    7380:  + KUBERNETES_VERSION=v1.28.14
    7381:  + CNI=calico
    7382:  + CONTAINER_RUNTIME=docker
    7383:  + SERVICE_MESH=true
    7384:  + trap on_failure ERR
    ...
    
    8992:  timeout_minutes: 30
    8993:  max_attempts: 3
    8994:  command: NAME=${IMAGE_REGISTRY} VERSION=${BRANCH} BUILD_DATE=${BUILD_DATE} TEST_UPGRADE_CHART=false make chart_test_autoscaling_deployment \
    8995:  && NAME=${IMAGE_REGISTRY} VERSION=${BRANCH} BUILD_DATE=${BUILD_DATE} make test_video_integrity
    8996:  
    8997:  retry_wait_seconds: 10
    8998:  polling_interval_seconds: 1
    8999:  warning_on_retry: true
    9000:  continue_on_error: false
    ...
    
    9076:  + MAX_SESSIONS_CHROME=1
    9077:  + MAX_SESSIONS_FIREFOX=1
    9078:  + MAX_SESSIONS_EDGE=1
    9079:  + TEST_NAME_OVERRIDE=false
    9080:  + TEST_PATCHED_KEDA=false
    9081:  + BASIC_AUTH_EMBEDDED_URL=false
    9082:  + SELENIUM_GRID_MONITORING=true
    9083:  + TEST_EXISTING_PTS=false
    9084:  + trap on_failure ERR EXIT
    ...
    
    9166:  DownwardAPI:             true
    9167:  QoS Class:                   BestEffort
    9168:  Node-Selectors:              <none>
    9169:  Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    9170:  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    9171:  Events:
    9172:  Type     Reason            Age   From               Message
    9173:  ----     ------            ----  ----               -------
    9174:  Warning  FailedScheduling  1s    default-scheduler  0/1 nodes are available: persistentvolumeclaim "selenium-grid-pvc-local" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
    ...
    
    9276:  + cert_dir=./tests/tests
    9277:  + ADD_IP_ADDRESS=hostname
    9278:  + ./charts/selenium-grid/certs/gen-cert-helper.sh -d ./tests/tests
    9279:  Generating 2,048 bit RSA key pair and self-signed certificate (SHA256withRSA) with a validity of 3,650 days
    9280:  for: CN=SeleniumHQ, OU=Software Freedom Conservancy, O=SeleniumHQ, L=Unknown, ST=Unknown, C=Unknown
    9281:  [Storing server.jks]
    9282:  Importing keystore server.jks to tls.p12...
    9283:  Entry for alias seleniumhq successfully imported.
    9284:  Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
    ...
    
    9684:  + echo 'Logs for pod selenium-metrics-exporter-76946c8c88-4xgzt'
    9685:  + kubectl logs -n selenium selenium-metrics-exporter-76946c8c88-4xgzt --all-containers
    9686:  Logs for pod selenium-metrics-exporter-76946c8c88-4xgzt
    9687:  + for pod in $pods
    9688:  + echo 'Logs for pod selenium-prometheus-node-exporter-6fbxv'
    9689:  + kubectl logs -n selenium selenium-prometheus-node-exporter-6fbxv --all-containers
    9690:  Logs for pod selenium-prometheus-node-exporter-6fbxv
    9691:  + '[' true = false ']'
    9692:  + on_failure
    9693:  + local exit_status=0
    9694:  + '[' false = true ']'
    9695:  + echo 'Describe all resources in the selenium namespace for debugging purposes'
    9696:  + kubectl describe all -n selenium
    9697:  Describe all resources in the selenium namespace for debugging purposes
    9698:  + kubectl describe pod -n selenium
    9699:  + echo 'There is step failed with exit status 0'
    9700:  + cleanup
    9701:  There is step failed with exit status 0
    ...
    
    9793:  echo "::warning:: Number of video files: $(echo $list_files | wc -w)"; \
    9794:  number_corrupted_files=0; \
    9795:  if [ -z "$list_files" ]; then \
    9796:  echo "No video files found"; \
    9797:  exit 1; \
    9798:  fi; \
    9799:  for file in $list_files; do \
    9800:  echo "Checking video file: $file"; \
    9801:  docker run -u $(id -u) -v $(pwd):$(pwd) -w $(pwd) --entrypoint="" artifactory/selenium/video:ffmpeg-7.0.2-20241012 ffmpeg -v error -i "$file" -f null - ; \
    ...
    
    10061:  + MAX_SESSIONS_CHROME=1
    10062:  + MAX_SESSIONS_FIREFOX=1
    10063:  + MAX_SESSIONS_EDGE=1
    10064:  + TEST_NAME_OVERRIDE=true
    10065:  + TEST_PATCHED_KEDA=false
    10066:  + BASIC_AUTH_EMBEDDED_URL=false
    10067:  + SELENIUM_GRID_MONITORING=true
    10068:  + TEST_EXISTING_PTS=false
    10069:  + trap on_failure ERR EXIT
    ...
    
    10125:  + cert_dir=./tests/tests
    10126:  + ADD_IP_ADDRESS=hostname
    10127:  + ./charts/selenium-grid/certs/gen-cert-helper.sh -d ./tests/tests
    10128:  Generating 2,048 bit RSA key pair and self-signed certificate (SHA256withRSA) with a validity of 3,650 days
    10129:  for: CN=SeleniumHQ, OU=Software Freedom Conservancy, O=SeleniumHQ, L=Unknown, ST=Unknown, C=Unknown
    10130:  [Storing server.jks]
    10131:  Importing keystore server.jks to tls.p12...
    10132:  Entry for alias seleniumhq successfully imported.
    10133:  Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
    ...
    
    10165:  + echo 'Render manifests YAML for this deployment'
    10166:  + helm template --debug selenium --values tests/charts/ci/nameOverride-values.yaml --values tests/charts/ci/base-auth-ingress-values.yaml --values ./tests/tests/base-recorder-values.yaml --values tests/charts/ci/base-resources-values.yaml --values tests/charts/ci/base-subPath-values.yaml --values tests/charts/ci/base-tls-values.yaml --values ./tests/tests/DeploymentAutoscaling-values.yaml --set autoscaling.scaledOptions.minReplicaCount=1 --set global.seleniumGrid.imageRegistry=artifactory/selenium --set global.seleniumGrid.imageTag=service-mesh-20241012 --set global.seleniumGrid.nodesImageTag=service-mesh-20241012 --set global.seleniumGrid.videoImageTag=ffmpeg-7.0.2-20241012 --set autoscaling.scaledOptions.pollingInterval=20 --set tracing.enabled=true --set global.seleniumGrid.httpLogs=true --set isolateComponents=false --set global.seleniumGrid.logLevel=INFO --set chromeNode.nodeMaxSessions=1 --set firefoxNode.nodeMaxSessions=1 --set edgeNode.nodeMaxSessions=1 --set autoscaling.enabled=false --set autoscaling.enableWithExistingKEDA=true --set monitoring.enabled=true --set monitoring.enabledWithExistingAgent=false --set autoscaling.scaledOptions.maxReplicaCount=10 --set global.K8S_PUBLIC_IP=10.1.0.191 --set ingress.enabled=false --set ingress.enableWithController=false --set hub.serviceType=NodePort --set components.router.serviceType=NodePort --set tls.enabled=true --set tls.create=false --set tls.nameOverride=external-tls-secret --set ingress.nginx.sslSecret=selenium/external-tls-secret --set ingress-nginx.controller.extraArgs.default-ssl-certificate=selenium/external-tls-secret charts/selenium-grid --namespace selenium --create-namespace
    10167:  install.go:214: [debug] Original chart version: ""
    10168:  install.go:231: [debug] CHART PATH: /home/runner/work/docker-selenium/docker-selenium/charts/selenium-grid
    10169:  + '[' false = true ']'
    10170:  Deploy Selenium Grid Chart
    10171:  + echo 'Deploy Selenium Grid Chart'
    10172:  + helm upgrade --install selenium --values tests/charts/ci/nameOverride-values.yaml --values tests/charts/ci/base-auth-ingress-values.yaml --values ./tests/tests/base-recorder-values.yaml --values tests/charts/ci/base-resources-values.yaml --values tests/charts/ci/base-subPath-values.yaml --values tests/charts/ci/base-tls-values.yaml --values ./tests/tests/DeploymentAutoscaling-values.yaml --set autoscaling.scaledOptions.minReplicaCount=1 --set global.seleniumGrid.imageRegistry=artifactory/selenium --set global.seleniumGrid.imageTag=service-mesh-20241012 --set global.seleniumGrid.nodesImageTag=service-mesh-20241012 --set global.seleniumGrid.videoImageTag=ffmpeg-7.0.2-20241012 --set autoscaling.scaledOptions.pollingInterval=20 --set tracing.enabled=true --set global.seleniumGrid.httpLogs=true --set isolateComponents=false --set global.seleniumGrid.logLevel=INFO --set chromeNode.nodeMaxSessions=1 --set firefoxNode.nodeMaxSessions=1 --set edgeNode.nodeMaxSessions=1 --set autoscaling.enabled=false --set autoscaling.enableWithExistingKEDA=true --set monitoring.enabled=true --set monitoring.enabledWithExistingAgent=false --set autoscaling.scaledOptions.maxReplicaCount=10 --set global.K8S_PUBLIC_IP=10.1.0.191 --set ingress.enabled=false --set ingress.enableWithController=false --set hub.serviceType=NodePort --set components.router.serviceType=NodePort --set tls.enabled=true --set tls.create=false --set tls.nameOverride=external-tls-secret --set ingress.nginx.sslSecret=selenium/external-tls-secret --set ingress-nginx.controller.extraArgs.default-ssl-certificate=selenium/external-tls-secret charts/selenium-grid --namespace selenium --create-namespace
    10173:  Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred:
    10174:  * timed out waiting for the condition
    10175:  ++ on_failure
    10176:  ++ local exit_status=1
    10177:  ++ '[' false = true ']'
    10178:  ++ echo 'Describe all resources in the selenium namespace for debugging purposes'
    10179:  ++ kubectl describe all -n selenium
    10180:  Describe all resources in the selenium namespace for debugging purposes
    10181:  ++ kubectl describe pod -n selenium
    10182:  ++ echo 'There is step failed with exit status 1'
    10183:  ++ cleanup
    10184:  There is step failed with exit status 1
    ...
    
    10199:  Logs for pod selenium-chrome-node-79995555bf-bhp7l
    10200:  ++ for pod in $pods
    10201:  ++ echo 'Logs for pod selenium-chrome-node-79995555bf-bhp7l'
    10202:  ++ kubectl logs -n selenium selenium-chrome-node-79995555bf-bhp7l --all-containers
    10203:  ++ for pod in $pods
    10204:  ++ echo 'Logs for pod selenium-chrome-node-79995555bf-rdfvp'
    10205:  ++ kubectl logs -n selenium selenium-chrome-node-79995555bf-rdfvp --all-containers
    10206:  Logs for pod selenium-chrome-node-79995555bf-rdfvp
    10207:  Error from server (BadRequest): container "pre-puller-video" in pod "selenium-chrome-node-79995555bf-rdfvp" is waiting to start: PodInitializing
    10208:  ++ for pod in $pods
    10209:  ++ echo 'Logs for pod selenium-chrome-node-79995555bf-v4pbd'
    10210:  Logs for pod selenium-chrome-node-79995555bf-v4pbd
    10211:  ++ kubectl logs -n selenium selenium-chrome-node-79995555bf-v4pbd --all-containers
    10212:  Error from server (BadRequest): container "pre-puller-selenium-chrome-node" in pod "selenium-chrome-node-79995555bf-v4pbd" is waiting to start: PodInitializing
    ...
    
    10217:  Logs for pod selenium-edge-node-56467f5c9b-4bvv8
    10218:  ++ for pod in $pods
    10219:  ++ echo 'Logs for pod selenium-edge-node-56467f5c9b-4bvv8'
    10220:  ++ kubectl logs -n selenium selenium-edge-node-56467f5c9b-4bvv8 --all-containers
    10221:  ++ for pod in $pods
    10222:  Logs for pod selenium-edge-node-56467f5c9b-lp46t
    10223:  ++ echo 'Logs for pod selenium-edge-node-56467f5c9b-lp46t'
    10224:  ++ kubectl logs -n selenium selenium-edge-node-56467f5c9b-lp46t --all-containers
    10225:  Error from server (BadRequest): container "pre-puller-selenium-edge-node" in pod "selenium-edge-node-56467f5c9b-lp46t" is waiting to start: PodInitializing
    10226:  ++ for pod in $pods
    10227:  Logs for pod selenium-edge-node-56467f5c9b-prngl
    10228:  ++ echo 'Logs for pod selenium-edge-node-56467f5c9b-prngl'
    10229:  ++ kubectl logs -n selenium selenium-edge-node-56467f5c9b-prngl --all-containers
    10230:  ++ for pod in $pods
    10231:  Logs for pod selenium-edge-node-56467f5c9b-s958g
    10232:  ++ echo 'Logs for pod selenium-edge-node-56467f5c9b-s958g'
    10233:  ++ kubectl logs -n selenium selenium-edge-node-56467f5c9b-s958g --all-containers
    10234:  Error from server (BadRequest): container "pre-puller-video" in pod "selenium-edge-node-56467f5c9b-s958g" is waiting to start: PodInitializing
    ...
    
    10243:  ++ for pod in $pods
    10244:  ++ echo 'Logs for pod selenium-firefox-node-7bddbf589-jqg8l'
    10245:  Logs for pod selenium-firefox-node-7bddbf589-jqg8l
    10246:  ++ kubectl logs -n selenium selenium-firefox-node-7bddbf589-jqg8l --all-containers
    10247:  ++ for pod in $pods
    10248:  Logs for pod selenium-firefox-node-7bddbf589-r229m
    10249:  ++ echo 'Logs for pod selenium-firefox-node-7bddbf589-r229m'
    10250:  ++ kubectl logs -n selenium selenium-firefox-node-7bddbf589-r229m --all-containers
    10251:  Error from server (BadRequest): container "pre-puller-video" in pod "selenium-firefox-node-7bddbf589-r229m" is waiting to start: PodInitializing
    ...
    
    10286:  ++ echo 'Logs for pod selenium-metrics-exporter-76946c8c88-4xgzt'
    10287:  ++ kubectl logs -n selenium selenium-metrics-exporter-76946c8c88-4xgzt --all-containers
    10288:  Logs for pod selenium-prometheus-node-exporter-6fbxv
    10289:  ++ for pod in $pods
    10290:  ++ echo 'Logs for pod selenium-prometheus-node-exporter-6fbxv'
    10291:  ++ kubectl logs -n selenium selenium-prometheus-node-exporter-6fbxv --all-containers
    10292:  ++ '[' true = false ']'
    10293:  ++ exit 1
    10294:  + on_failure
    10295:  + local exit_status=1
    10296:  + '[' false = true ']'
    10297:  Describe all resources in the selenium namespace for debugging purposes
    10298:  + echo 'Describe all resources in the selenium namespace for debugging purposes'
    10299:  + kubectl describe all -n selenium
    10300:  + kubectl describe pod -n selenium
    10301:  + echo 'There is step failed with exit status 1'
    10302:  There is step failed with exit status 1
    ...
    
    10318:  + for pod in $pods
    10319:  + echo 'Logs for pod selenium-chrome-node-79995555bf-bhp7l'
    10320:  + kubectl logs -n selenium selenium-chrome-node-79995555bf-bhp7l --all-containers
    10321:  Logs for pod selenium-chrome-node-79995555bf-bhp7l
    10322:  Logs for pod selenium-chrome-node-79995555bf-rdfvp
    10323:  + for pod in $pods
    10324:  + echo 'Logs for pod selenium-chrome-node-79995555bf-rdfvp'
    10325:  + kubectl logs -n selenium selenium-chrome-node-79995555bf-rdfvp --all-containers
    10326:  Error from server (BadRequest): container "pre-puller-video" in pod "selenium-chrome-node-79995555bf-rdfvp" is waiting to start: PodInitializing
    10327:  + for pod in $pods
    10328:  + echo 'Logs for pod selenium-chrome-node-79995555bf-v4pbd'
    10329:  + kubectl logs -n selenium selenium-chrome-node-79995555bf-v4pbd --all-containers
    10330:  Logs for pod selenium-chrome-node-79995555bf-v4pbd
    10331:  Error from server (BadRequest): container "pre-puller-selenium-chrome-node" in pod "selenium-chrome-node-79995555bf-v4pbd" is waiting to start: PodInitializing
    ...
    
    10336:  + for pod in $pods
    10337:  + echo 'Logs for pod selenium-edge-node-56467f5c9b-4bvv8'
    10338:  + kubectl logs -n selenium selenium-edge-node-56467f5c9b-4bvv8 --all-containers
    10339:  Logs for pod selenium-edge-node-56467f5c9b-4bvv8
    10340:  + for pod in $pods
    10341:  Logs for pod selenium-edge-node-56467f5c9b-lp46t
    10342:  + echo 'Logs for pod selenium-edge-node-56467f5c9b-lp46t'
    10343:  + kubectl logs -n selenium selenium-edge-node-56467f5c9b-lp46t --all-containers
    10344:  Error from server (BadRequest): container "pre-puller-selenium-edge-node" in pod "selenium-edge-node-56467f5c9b-lp46t" is waiting to start: PodInitializing
    10345:  + for pod in $pods
    10346:  Logs for pod selenium-edge-node-56467f5c9b-prngl
    10347:  + echo 'Logs for pod selenium-edge-node-56467f5c9b-prngl'
    10348:  + kubectl logs -n selenium selenium-edge-node-56467f5c9b-prngl --all-containers
    10349:  Logs for pod selenium-edge-node-56467f5c9b-s958g
    10350:  + for pod in $pods
    10351:  + echo 'Logs for pod selenium-edge-node-56467f5c9b-s958g'
    10352:  + kubectl logs -n selenium selenium-edge-node-56467f5c9b-s958g --all-containers
    10353:  Error from server (BadRequest): container "pre-puller-selenium-edge-node" in pod "selenium-edge-node-56467f5c9b-s958g" is waiting to start: PodInitializing
    ...
    
    10362:  + for pod in $pods
    10363:  + echo 'Logs for pod selenium-firefox-node-7bddbf589-jqg8l'
    10364:  + kubectl logs -n selenium selenium-firefox-node-7bddbf589-jqg8l --all-containers
    10365:  Logs for pod selenium-firefox-node-7bddbf589-jqg8l
    10366:  + for pod in $pods
    10367:  + echo 'Logs for pod selenium-firefox-node-7bddbf589-r229m'
    10368:  Logs for pod selenium-firefox-node-7bddbf589-r229m
    10369:  + kubectl logs -n selenium selenium-firefox-node-7bddbf589-r229m --all-containers
    10370:  Error from server (BadRequest): container "pre-puller-selenium-firefox-node" in pod "selenium-firefox-node-7bddbf589-r229m" is waiting to start: PodInitializing
    ...
    
    10405:  + kubectl logs -n selenium selenium-metrics-exporter-76946c8c88-4xgzt --all-containers
    10406:  Logs for pod selenium-metrics-exporter-76946c8c88-4xgzt
    10407:  + for pod in $pods
    10408:  Logs for pod selenium-prometheus-node-exporter-6fbxv
    10409:  + echo 'Logs for pod selenium-prometheus-node-exporter-6fbxv'
    10410:  + kubectl logs -n selenium selenium-prometheus-node-exporter-6fbxv --all-containers
    10411:  + '[' true = false ']'
    10412:  + exit 1
    10413:  make: *** [Makefile:923: chart_test_autoscaling_deployment] Error 1
    10414:  ##[error]Process completed with exit code 2.
    ...
    
    10838:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
    10839:  flag provided but not defined: -variables
    10840:  Usage: envsubst [options...] <input>
    10841:  Options:
    10842:  -i         Specify file input, otherwise use last argument as input file.
    10843:  If no input file is specified, read from stdin.
    10844:  -o         Specify file output. If none is specified, write to stdout.
    10845:  -no-digit  Do not replace variables starting with a digit. e.g. $1 and ${1}
    10846:  -no-unset  Fail if a variable is not set.
    10847:  -no-empty  Fail if a variable is set but empty.
    10848:  -fail-fast Fail on first error otherwise display all failures if restrictions are set.
    ...
    
    10852:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
    10853:  flag provided but not defined: -variables
    10854:  Usage: envsubst [options...] <input>
    10855:  Options:
    10856:  -i         Specify file input, otherwise use last argument as input file.
    10857:  If no input file is specified, read from stdin.
    10858:  -o         Specify file output. If none is specified, write to stdout.
    10859:  -no-digit  Do not replace variables starting with a digit. e.g. $1 and ${1}
    10860:  -no-unset  Fail if a variable is not set.
    10861:  -no-empty  Fail if a variable is set but empty.
    10862:  -fail-fast Fail on first error otherwise display all failures if restrictions are set.
    

✨ CI feedback usage guide:

The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
The tool analyzes the failed checks and provides several types of feedback:

• Failed stage
• Failed test name
• Failure summary
• Relevant error logs

In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

/checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"

where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

Configuration options (a configuration sketch follows this list):

• enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
• excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
• enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
• persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
• final_update_message - if persistent_comment is true and a previous checks message is updated, the tool will also post a new message: "Persistent checks updated to latest commit". Default is true.
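
For reference, a sketch of how these options might be expressed (file location and exact format are assumptions; consult the tool's docs for specifics):

    # Sketch: tuning the checks tool's feedback behavior.
    checks:
      enable_auto_checks_feedback: true
      excluded_checks_list: []
      enable_help_text: true
      persistent_comment: true
      final_update_message: true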

See more information about the checks tool in the docs.

@VietND96 merged commit 94da26e into trunk on Oct 12, 2024
47 of 53 checks passed
@VietND96 deleted the service-mesh branch on October 12, 2024 at 10:55

Successfully merging this pull request may close these issues:

[🐛 Bug]: Nodes Disconnecting from Hub after AKS Deployment with Helm Chart