
Drop "interface" mode in linkerd-cni #242

Merged: 2 commits into main, May 31, 2023

Conversation

@alpeb alpeb commented May 17, 2023

"interface" mode was ill-conceived: it wasn't providing full network capabilities as intended, so pods provisioned with linkerd-cni in this mode didn't get their network set up properly. This change removes that mode, leaving only the "chained" mode, where linkerd-cni waits for another CNI plugin to drop its config into the target directory and then appends its own config to it. It's important to emphasize that no config is generated until another CNI plugin adds its own, so as not to trigger pod scheduling while no full CNI network config is available.

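To make the "append" step concrete, here is a minimal sketch of what merging into an existing chained config amounts to, assuming jq is available; the fragment and paths are illustrative and not the actual install-cni.sh logic:

```bash
# Illustrative linkerd-cni plugin fragment (not the real CNI_NETWORK_CONFIG template).
cat > /tmp/linkerd-cni-fragment.json <<'EOF'
{
  "type": "linkerd-cni",
  "log_level": "info"
}
EOF

# Append the fragment to the "plugins" array of the existing conflist,
# so linkerd-cni runs last in the chain.
jq --slurpfile lcni /tmp/linkerd-cni-fragment.json \
   '.plugins += $lcni' \
   /etc/cni/net.d/10-calico.conflist > /tmp/10-calico.conflist.new
```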
Also added proper logging timestamps.
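The timestamps in the logs below could come from a small helper along these lines (just a sketch; the actual function in install-cni.sh may look different):

```bash
# Prefix every message with a "[YYYY-MM-DD HH:MM:SS]" timestamp.
log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "Wrote linkerd CNI binaries to /host/opt/cni/bin"
# => [2023-05-17 08:50:46] Wrote linkerd CNI binaries to /host/opt/cni/bin
```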

Tests

This was successfully tested in k3d, AKS, EKS and GKE (with Calico). GKE with its default CNI still has issues, as noted in the Followup below.

The following log is from installing linkerd-cni on a node that already had a CNI plugin running:

```
$ k -n linkerd-cni logs -f linkerd-cni-hznp2 install-cni
[2023-05-17 08:50:46] Wrote linkerd CNI binaries to /host/home/kubernetes/bin
[2023-05-17 08:50:46] Installing CNI configuration for /host/etc/cni/net.d/10-calico.conflist
[2023-05-17 08:50:46] Using CNI config template from CNI_NETWORK_CONFIG environment variable.
      "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
      "k8s_api_root": "https://10.60.0.1:__KUBERNETES_SERVICE_PORT__",
[2023-05-17 08:50:46] CNI config: {
  "name": "linkerd-cni",
  "type": "linkerd-cni",
  "log_level": "info",
  "policy": {
      "type": "k8s",
      "k8s_api_root": "https://10.60.0.1:443",
      "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
  },
  "kubernetes": {
      "kubeconfig": "/etc/cni/net.d/ZZZ-linkerd-cni-kubeconfig"
  },
  "linkerd": {
    "incoming-proxy-port": 4143,
    "outgoing-proxy-port": 4140,
    "proxy-uid": 2102,
    "ports-to-redirect": [],
    "inbound-ports-to-ignore": ["4191","4190"],
    "simulate": false,
    "use-wait-flag": false
  }
}
[2023-05-17 08:50:47] Created CNI config /host/etc/cni/net.d/10-calico.conflist
Setting up watches.
Watches established.
```
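For reference, one way to double-check on the node that linkerd-cni ended up last in the plugin chain (assumes jq on the host; not part of this PR):

```bash
jq '.plugins[-1].type' /etc/cni/net.d/10-calico.conflist
# expected output: "linkerd-cni"
```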

After adding an extra node, the linkerd-cni DaemonSet pod starts on it, and we can see that it waits until another CNI plugin drops its config:

```
$ k -n linkerd-cni logs -f linkerd-cni-tvv6r
[2023-05-17 08:58:12] Wrote linkerd CNI binaries to /host/home/kubernetes/bin
[2023-05-17 08:58:12] No active CNI configuration files found
Setting up watches.
Watches established.
[2023-05-17 08:58:22] Detected change in /host/etc/cni/net.d/: CREATE 10-calico.conflist
[2023-05-17 08:58:22] New file [10-calico.conflist] detected; re-installing
[2023-05-17 08:58:22] Using CNI config template from CNI_NETWORK_CONFIG environment variable.
      "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
      "k8s_api_root": "https://10.60.0.1:__KUBERNETES_SERVICE_PORT__",
[2023-05-17 08:58:22] CNI config: {
  "name": "linkerd-cni",
  "type": "linkerd-cni",
  "log_level": "info",
  "policy": {
      "type": "k8s",
      "k8s_api_root": "https://10.60.0.1:443",
      "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
  },
  "kubernetes": {
      "kubeconfig": "/etc/cni/net.d/ZZZ-linkerd-cni-kubeconfig"
  },
  "linkerd": {
    "incoming-proxy-port": 4143,
    "outgoing-proxy-port": 4140,
    "proxy-uid": 2102,
    "ports-to-redirect": [],
    "inbound-ports-to-ignore": ["4191","4190"],
    "simulate": false,
    "use-wait-flag": false
  }
}
[2023-05-17 08:58:22] Created CNI config /host/etc/cni/net.d/10-calico.conflist
[2023-05-17 08:58:22] Detected change in /host/etc/cni/net.d/: DELETE 10-calico.conflist
[2023-05-17 08:58:22] Detected change in /host/etc/cni/net.d/: CREATE 10-calico.conflist
[2023-05-17 08:58:22] Ignoring event: CREATE /host/etc/cni/net.d/10-calico.conflist; no real changes detected
```
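The "Setting up watches" / "Detected change" lines come from an inotify watch on the CNI config directory. The pattern is roughly the following (assumes inotify-tools; a sketch, not the exact install-cni.sh code):

```bash
# Watch the host's CNI config dir and re-run the merge whenever a config appears or changes.
inotifywait -m /host/etc/cni/net.d -e create,moved_to,modify |
  while read -r directory action filename; do
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Detected change in ${directory}: ${action} ${filename}"
    # re-run the config install/merge for ${filename} here
  done
```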

Followup

This doesn't fix another class of problem, where pods start getting scheduled after the first CNI plugin's config is available but before the linkerd-cni DaemonSet gets a chance to append its config. This results in the pod not getting its iptables rules set in time. The injected linkerd-network-validator will catch that and fail the pod. To acquire a proper network config, the pod needs to be bounced externally (manually or through an operator).

This happens in GKE with its default CNI plugin when the node pool is scaled.
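Until that is addressed, an external "bounce" could look something like the following generic kubectl sketch (not something this PR adds; the STATUS patterns are approximate):

```bash
# Delete pods whose init containers (e.g. linkerd-network-validator) failed,
# so they get recreated once the full CNI config is in place.
kubectl get pods -A --no-headers \
  | awk '$4 ~ /Init:(Error|CrashLoopBackOff)/ {print $1, $2}' \
  | while read -r ns pod; do
      kubectl -n "$ns" delete pod "$pod"
    done
```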

@alpeb alpeb requested a review from a team as a code owner May 17, 2023 19:34

@mateiidavid mateiidavid left a comment

For some reason when I try it out without any CNI installed, it errors out... not sure what's happening since there are no logs. Think this might all be my fault but I want to dig into it so I can make sure my tests didn't uncover anything.

For the record, I'd expect that if no CNI plugin exists it will simply exist in a vacuum until one appears.

```
NAME                READY   STATUS             RESTARTS      AGE
linkerd-cni-gmgf7   0/1     CrashLoopBackOff   5 (48s ago)   3m56s

:; k logs -n linkerd-cni linkerd-cni-gmgf7
E0519 15:59:46.376582 1086953 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0519 15:59:46.381207 1086953 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0519 15:59:46.382144 1086953 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
```
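A couple of generic follow-ups that usually help when a pod crash-loops without useful output (plain kubectl, nothing specific to this PR):

```bash
# Events and container statuses often explain a crash loop better than the logs do.
kubectl -n linkerd-cni describe pod linkerd-cni-gmgf7
# Logs from the previous (crashed) container instance, if any were produced.
kubectl -n linkerd-cni logs linkerd-cni-gmgf7 -c install-cni --previous
```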

cni-plugin/deployment/scripts/install-cni.sh
```
@@ -53,7 +53,7 @@ CONTAINER_MOUNT_PREFIX=${CONTAINER_MOUNT_PREFIX:-/host}
CONTAINER_CNI_BIN_DIR=${CONTAINER_CNI_BIN_DIR:-/opt/cni/bin}
# Directory path where CNI configuration should live on the host
HOST_CNI_NET="${CONTAINER_MOUNT_PREFIX}${DEST_CNI_NET_DIR}"
# Default path for when linkerd runs as a standalone CNI plugin
# Location of legacy "interface mode" file, to be automatically deleted
```
mateiidavid (Member)

hmm why not just delete DEFAULT_CNI_CONF_PATH?

```
################################
### CNI Plugin Install Logic ###
################################

# Delete old "interface mode" file
rm -f "${DEFAULT_CNI_CONF_PATH}"
```
mateiidavid (Member)

Is this for backwards compatibility, i.e. to clean up unused files? If so, it should automatically be cleaned up when the pod is restarted, so it might not be necessary?

alpeb (Member, Author)

Yes, I had backwards compatibility in mind. Isn't this on the host filesystem, so a pod restart won't touch it? Or do you mean it should be cleaned up by this plugin's logic?

mateiidavid (Member)

Hmmm, here's how I initially thought this through:

```
# and if so, rm it.
if [ -e "${DEFAULT_CNI_CONF_PATH}" ]; then
  echo "Cleaning up ${DEFAULT_CNI_CONF_PATH}"
  rm -f "${DEFAULT_CNI_CONF_PATH}"
fi
```

The logic above will clean up 01-linkerd.conf.

The trap calls should trigger the clean-up function whenever a signal is received:

```
# Capture the usual signals and exit from the script
trap 'echo "SIGINT received, simply exiting..."; cleanup' INT
trap 'echo "SIGTERM received, simply exiting..."; cleanup' TERM
trap 'echo "SIGHUP received, simply exiting..."; cleanup' HUP
```

When doing an upgrade to a new CNI version:

  1. We update the manifest.
  2. This triggers a re-deploy.
  3. The old pod begins shutting down; since it's on the old version, its script should correctly remove 01-linkerd.conf as part of the clean-up function.
  4. The new pod is created and blocks until a config file is found.

In my mind it should work; however, I wonder if we're still liable to hit a race condition... Perhaps we should add a TODO instead to remove it after one release (we guarantee backwards compatibility for successive upgrades, so it should be fine to remove at that point).

wdyt?

alpeb (Member, Author)

If the pod shuts down uncleanly, that file will be left around, so we'd still have to keep some logic to clean it up at startup, right? I'll add a comment to remember to remove that rm call at some point 👍
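Putting the snippets above together, the shape of the approach being discussed is roughly the following (a sketch: only the trap lines and the rm are quoted from the thread, the rest is illustrative):

```bash
# Startup: always remove the legacy "interface mode" file, in case the previous
# pod shut down uncleanly and its traps never ran.
# TODO: drop this once upgrades from interface-mode releases no longer need support.
rm -f "${DEFAULT_CNI_CONF_PATH}"

cleanup() {
  if [ -e "${DEFAULT_CNI_CONF_PATH}" ]; then
    echo "Cleaning up ${DEFAULT_CNI_CONF_PATH}"
    rm -f "${DEFAULT_CNI_CONF_PATH}"
  fi
  exit 0
}

# Shutdown: the traps run the clean-up on the usual signals.
trap 'echo "SIGINT received, simply exiting..."; cleanup' INT
trap 'echo "SIGTERM received, simply exiting..."; cleanup' TERM
trap 'echo "SIGHUP received, simply exiting..."; cleanup' HUP
```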

alpeb commented May 19, 2023

> For some reason when I try it out without any CNI installed, it errors out... not sure what's happening since there are no logs. Think this might all be my fault but I want to dig into it so I can make sure my tests didn't uncover anything.

Hmmm, I haven't been able to repro that...

```
$ k3d cluster create --k3s-arg --flannel-backend=none@server:0
...
$ linkerd install-cni --cni-image ghcr.io/alpeb/cni-plugin --cni-image-version v1.2.8 |k apply -f -
...
$ k -n linkerd-cni logs -f linkerd-cni-b57l2
[2023-05-19 21:52:24] Wrote linkerd CNI binaries to /host/opt/cni/bin
[2023-05-19 21:52:24] No active CNI configuration files found
Setting up watches.
Watches established.
```

alpeb added a commit that referenced this pull request May 24, 2023
Followup to #242

This adds a new integration test that verifies that pods don't get
scheduled until all the networking has been set up, making sure that
having linkerd-cni run first doesn't trigger scheduling.

Note that for now we just consider the linkerd-cni+calico combo.

Workflow:

1. Install calico and linkerd-cni.
2. Label the node, then add node selectors to calico and linkerd-cni so they run only on the current node (sketched after this list).
3. Create a new node. Neither calico nor linkerd-cni will be triggered on it.
4. Pin an nginx pod to that node. Verify it doesn't get scheduled.
5. Add the label so linkerd-cni triggers on the new node.
6. Verify the nginx pod still isn't scheduled.
7. Add the label so calico triggers on the new node.
8. Verify the nginx pod is finally scheduled.
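Steps 2, 5 and 7 could be done along these lines (the label keys and DaemonSet names are illustrative, not necessarily what the test uses):

```bash
# Step 2: pin each DaemonSet to the existing node with its own label/selector.
kubectl label node "$EXISTING_NODE" run-calico=yes run-linkerd-cni=yes
kubectl -n kube-system patch daemonset calico-node --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"run-calico":"yes"}}}}}'
kubectl -n linkerd-cni patch daemonset linkerd-cni --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"run-linkerd-cni":"yes"}}}}}'

# Steps 5 and 7: opt the new node in, one DaemonSet at a time.
kubectl label node "$NEW_NODE" run-linkerd-cni=yes   # step 5
kubectl label node "$NEW_NODE" run-calico=yes        # step 7
```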

Note that currently the test fails at step 6, but it will pass as soon as #242
gets merged. To test successfully, you can apply this patch:

```diff
diff --git a/cni-plugin/integration/manifests/calico/linkerd-cni.yaml b/cni-plugin/integration/manifests/calico/linkerd-cni.yaml
index 05f110b..419abdd 100644
--- a/cni-plugin/integration/manifests/calico/linkerd-cni.yaml
+++ b/cni-plugin/integration/manifests/calico/linkerd-cni.yaml
@@ -123,7 +123,7 @@ spec:
       # script copies the files into place and then sleeps so
       # that Kubernetes doesn't keep trying to restart it.
       - name: install-cni
-        image: test.l5d.io/linkerd/cni-plugin:test
+        image: ghcr.io/alpeb/cni-plugin:v1.2.8
         #image: cr.l5d.io/linkerd/cni-plugin:edge-22.12.1
         env:
         - name: DEST_CNI_NET_DIR
```
@alpeb alpeb mentioned this pull request May 24, 2023
alpeb added a commit that referenced this pull request May 24, 2023

mateiidavid commented May 24, 2023

Gotcha:

```
[2023-05-24 13:58:49] Wrote linkerd CNI binaries to /host/opt/cni/bin
[2023-05-24 13:58:49] No active CNI configuration files found
```

seems to be fine.

@mateiidavid mateiidavid left a comment

Great work! There's still a nitpick left with the default path TODO; I'll leave that to you and defer to your judgement. Everything else looks good to me 👍🏻

alpeb added a commit that referenced this pull request May 31, 2023
alpeb added a commit that referenced this pull request May 31, 2023
alpeb added 2 commits May 31, 2023 16:14
@alpeb alpeb force-pushed the alpeb/eradicate-interface-mode branch from 3a55f2f to 55d034a on May 31, 2023 21:14
@alpeb alpeb merged commit c0da952 into main May 31, 2023
@alpeb alpeb deleted the alpeb/eradicate-interface-mode branch May 31, 2023 21:20
alpeb added a commit that referenced this pull request May 31, 2023
alpeb added a commit that referenced this pull request Jun 1, 2023
Followup to #242

This adds a new integration test that verifies that pods don't get scheduled until all the networking has been set up, making sure that having linkerd-cni run first doesn't trigger scheduling.

Note that for now we just consider the linkerd-cni+calico combo.

Workflow:

1. Install calico and linkerd-cni.
2. Label the node, then add node selectors to calico and linkerd-cni so they run only on the current node.
3. Create a new node. Neither calico nor linkerd-cni will be triggered on it.
4. Pin an nginx pod to that node. Verify it doesn't get scheduled.
5. Add the label so linkerd-cni triggers on the new node.
6. Verify the nginx pod still isn't scheduled.
7. Add the label so calico triggers on the new node.
8. Verify the nginx pod is finally scheduled.

Note that currently the test fails at step 6, but it will pass as soon as #242 gets merged.

**Update**: now that #242 has been merged, the tests are green 👍
alpeb added a commit that referenced this pull request Dec 6, 2024
This removes the handling of the DELETE event in the inotifywait subshell, a leftover from when we implemented "interface" mode (see #242); we no longer act on this event.

This also removes the file `cni-plugin/deployment/linkerd-cni.conf.default` that wasn't being used anymore.
alpeb added a commit that referenced this pull request Dec 6, 2024