CI: Use deployment instead of Pod for agnhost

This is a follow-up to 2ba28a3 (Revert "Wait for available API token in
a new namespace (kubernetes-sigs#7045)", 2024-10-25).

While checking for the ServiceAccount token is not effective, there is
still a race when creating a Pod directly, because the ServiceAccount
itself might not have been created yet.
More details at kubernetes/kubernetes#66689.

This causes very frequent flakes in our CI, with spurious failures.
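
For illustration only (not part of the commit), a minimal Ansible-style
sketch of the fragile ordering described above, assuming the test
namespace is created right before the Pod: the ServiceAccount admission
plugin can reject the Pod while the namespace's default ServiceAccount
does not exist yet, and a one-shot kubectl apply does not retry.

    # Sketch only: namespace and Pod created back to back.
    - name: Create test namespace
      command: "{{ bin_dir }}/kubectl create namespace test"

    - name: Create an agnhost Pod right away (may be rejected, nothing retries it)
      command:
        cmd: "{{ bin_dir }}/kubectl apply -f -"
        stdin: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: agnhost
            namespace: test
          spec:
            containers:
            - name: agnhost
              image: {{ test_image_repo }}:{{ test_image_tag }}
              command: ['/agnhost', 'netexec', '--http-port=8080']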

Use a Deployment instead; it takes care of creating the Pods and
retrying, and it also lets us use kubectl rollout status instead of
manually checking for the pods.
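
As a usage sketch (not the exact task added by this commit; see the
diff below), the wait reduces to a single blocking command, since
kubectl rollout status returns once the Deployment's Pods are
available, or fails when the timeout expires:

    # Sketch only: replaces the manual until/retries polling loop.
    - name: Wait for the agnhost Deployment to become available
      command: "{{ bin_dir }}/kubectl rollout status deployment/agnhost --namespace test --timeout 2m"
      changed_when: false
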
VannTen committed Dec 12, 2024
1 parent 74aee12 commit 930df78
Showing 1 changed file with 33 additions and 39 deletions.
tests/testcases/030_check-network.yml: 33 additions, 39 deletions
@@ -79,53 +79,47 @@
       command:
         cmd: "{{ bin_dir }}/kubectl apply -f -"
         stdin: |
-          apiVersion: v1
-          kind: Pod
+          apiVersion: apps/v1
+          kind: Deployment
           metadata:
-            name: {{ item }}
             namespace: test
+            name: agnhost
           spec:
-            containers:
-            - name: agnhost
-              image: {{ test_image_repo }}:{{ test_image_tag }}
-              command: ['/agnhost', 'netexec', '--http-port=8080']
-              securityContext:
-                allowPrivilegeEscalation: false
-                capabilities:
-                  drop: ['ALL']
-                runAsUser: 1000
-                runAsNonRoot: true
-                seccompProfile:
-                  type: RuntimeDefault
+            replicas: 2
+            selector:
+              matchLabels:
+                app: agnhost
+            template:
+              metadata:
+                labels:
+                  app: agnhost
+              spec:
+                containers:
+                - name: agnhost
+                  image: {{ test_image_repo }}:{{ test_image_tag }}
+                  command: ['/agnhost', 'netexec', '--http-port=8080']
+                  securityContext:
+                    allowPrivilegeEscalation: false
+                    capabilities:
+                      drop: ['ALL']
+                    runAsUser: 1000
+                    runAsNonRoot: true
+                    seccompProfile:
+                      type: RuntimeDefault
       changed_when: false
-      loop:
-        - agnhost1
-        - agnhost2
 
     - import_role:  # noqa name[missing]
         name: cluster-dump
 
-    - name: Check that all pods are running and ready
-      command: "{{ bin_dir }}/kubectl get pods --namespace test --no-headers -o yaml"
-      changed_when: false
-      register: run_pods_log
-      until:
-        # Check that all pods are running
-        - '(run_pods_log.stdout | from_yaml)["items"] | map(attribute = "status.phase") | unique | list == ["Running"]'
-        # Check that all pods are ready
-        - '(run_pods_log.stdout | from_yaml)["items"] | map(attribute = "status.containerStatuses") | map("map", attribute = "ready") | map("min") | min'
-      retries: 18
-      delay: 10
-      failed_when: false
-
-    - name: Get pod names
-      command: "{{ bin_dir }}/kubectl get pods -n test -o json"
-      changed_when: false
-      register: pods
-
-    - debug:  # noqa name[missing]
-        msg: "{{ pods.stdout.split('\n') }}"
-      failed_when: not run_pods_log is success
+    - block:
+        - name: Check Deployment is ready
+          command: "{{ bin_dir }}/kubectl rollout status deploy --namespace test agnhost --timeout=180"
+          changed_when: false
+      rescue:
+        - name: Get pod names
+          command: "{{ bin_dir }}/kubectl get pods -n test -o json"
+          changed_when: false
+          register: pods
 
     - name: Get hostnet pods
       command: "{{ bin_dir }}/kubectl get pods -n test -o