What happened:
Installed the handler via CNAO, then installed it via the operator as well. Both sets of handlers appear to think they hold the lock.
What you expected to happen:
Only the first set of handlers should have held the lock.
How to reproduce it (as minimally and precisely as possible):
Install the handler via CNAO, then install it via the operator.
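A rough sketch of the two installs, just to make the ordering explicit. The apiVersions, CR names, and spec fields below are from memory and should be treated as assumptions; use whatever your CNAO and kubernetes-nmstate operator versions actually expect.

# 1. Deploy the handler through CNAO by enabling nmstate in the NetworkAddonsConfig CR
#    (apiVersion/name/spec here are assumptions, adjust to your CNAO version).
cat <<EOF | oc apply -f -
apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  nmstate: {}
EOF

# 2. With the CNAO-managed handlers still running, install the standalone
#    kubernetes-nmstate operator (its own manifests, omitted here) and create
#    its NMState CR so it deploys a second set of handlers.
cat <<EOF | oc apply -f -
apiVersion: nmstate.io/v1beta1
kind: NMState
metadata:
  name: nmstate
EOF

# 3. Both nmstate-handler DaemonSets now run on every node (see the pod listing below).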
Anything else we need to know?:
NAMESPACE NAME READY STATUS RESTARTS AGE
cluster-network-addons cluster-network-addons-operator-966677658-hshtz 1/1 Running 0 17m
cluster-network-addons nmstate-cert-manager-d6fb64b4f-gj2qx 1/1 Running 2 17m
cluster-network-addons nmstate-handler-nlv6m 1/1 Running 0 17m
cluster-network-addons nmstate-handler-tcn7g 1/1 Running 0 17m
cluster-network-addons nmstate-webhook-5b779dd5c9-9mmtg 1/1 Running 2 17m
cluster-network-addons nmstate-webhook-5b779dd5c9-qljsb 1/1 Running 0 17m
...
nmstate nmstate-cert-manager-bff5b8695-jxwrq 1/1 Running 1 9m10s
nmstate nmstate-handler-9qxs8 1/1 Running 0 9m10s
nmstate nmstate-handler-srfgm 1/1 Running 0 9m10s
nmstate nmstate-operator-8646844d48-nddm4 1/1 Running 0 12m
nmstate nmstate-webhook-868d486d94-pwx4t 1/1 Running 1 9m10s
nmstate nmstate-webhook-868d486d94-w8j25 1/1 Running 0 9m10s
[bcrochet@bcrochet-test01 kubernetes-nmstate]$ oc logs -n cluster-network-addons nmstate-handler-nlv6m | head -2
{"level":"info","ts":1618951199.669442,"logger":"setup","msg":"Try to take exclusive lock on file: /var/k8s_nmstate/handler_lock"}
{"level":"info","ts":1618951199.6698868,"logger":"setup","msg":"Successfully took nmstate exclusive lock"}
[bcrochet@bcrochet-test01 kubernetes-nmstate]$ oc logs -n nmstate nmstate-handler-9qxs8 | head -2
{"level":"info","ts":1618951694.0622187,"logger":"setup","msg":"Try to take exclusive lock on file: /var/k8s_nmstate/handler_lock"}
{"level":"info","ts":1618951694.0627754,"logger":"setup","msg":"Successfully took nmstate exclusive lock"}
Environment:
N/A
- NodeNetworkState on affected nodes (use kubectl get nodenetworkstate <node_name> -o yaml):
- NodeNetworkConfigurationPolicy:
- kubernetes-nmstate image (use kubectl get pods --all-namespaces -l app=kubernetes-nmstate -o jsonpath='{.items[0].spec.containers[0].image}'):
- NetworkManager version (use nmcli --version):
- Kubernetes version (use kubectl version):