
mitigation for falco#1909 - fix(k8s-client): handle network related exceptions #610

Merged

Conversation

@alacuku alacuku (Member) commented Sep 15, 2022

Signed-off-by: Aldo Lacuku [email protected]

What type of PR is this?

Uncomment one (or more) /kind <> lines:

/kind bug

/kind cleanup

/kind design

/kind documentation

/kind failing-test

/kind feature

Any specific area of the project related to this PR?

Uncomment one (or more) /area <> lines:

/area API-version

/area build

/area CI

/area driver-kmod

/area driver-bpf

/area driver-modern-bpf

/area libscap-engine-bpf

/area libscap-engine-gvisor

/area libscap-engine-kmod

/area libscap-engine-modern-bpf

/area libscap-engine-nodriver

/area libscap-engine-noop

/area libscap-engine-source-plugin

/area libscap-engine-savefile

/area libscap-engine-udig

/area libscap

/area libpman

/area libsinsp

/area tests

/area proposals

Does this PR require a change in the driver versions?

/version driver-API-version-major

/version driver-API-version-minor

/version driver-API-version-patch

/version driver-SCHEMA-version-major

/version driver-SCHEMA-version-minor

/version driver-SCHEMA-version-patch

What this PR does / why we need it:

When Falco is configured to fetch metadata from a k8s cluster, it fetches all of the metadata (pods, deployments, nodes, namespaces, replicasets, daemonsets, replicationcontrollers, and services) at start-up time and then watches for new events from the api-server.

For each k8s object type, there is a different handler that opens a connection toward the api-server. Sometimes the api-server throttles these requests, causing Falco to crash (falcosecurity/falco#1909).

This PR is an initial effort to handle the case where the api-server throttles the initial metadata fetching: the error is handled and the fetch is retried the next time Falco collects the k8s metadata. The proposed solution is lightweight since it does not delete and recreate all the handlers (k8s_handler and socket_handler); it just resets their state so they can be reused (switching to watching mode) on the next collection cycle (with the default options, k8s metadata collection runs every 1 second).
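To make the retry-by-reset idea concrete, here is a minimal, self-contained C++ sketch. The names and structure below are illustrative only, not the actual libsinsp k8s_handler/socket_handler API: when the throttled connection is closed, the handler is not destroyed and the process does not exit; the connection state is simply reset so the next collection cycle can retry and then switch to watch mode.

```cpp
#include <iostream>
#include <stdexcept>

// Illustrative stand-in for the real handler: it keeps its connection
// state as flags instead of being destroyed and recreated on errors.
struct metadata_handler
{
    bool m_connecting = false;
    bool m_connected  = false;
    bool m_watching   = false;
    int  m_attempts   = 0;

    // Simulate the initial blocking fetch: the first attempt is
    // "throttled" by the api-server and the connection is closed.
    void collect_data()
    {
        m_connecting = true;
        if(++m_attempts == 1)
        {
            throw std::runtime_error("Connection closed.");
        }
        m_connected = true;
        m_watching  = true; // after a successful fetch, switch to watch mode
    }

    // Instead of exiting (or rebuilding the handler), just reset the
    // connection state so the next collection cycle can retry.
    void reset()
    {
        m_connecting = false;
        m_connected  = false;
    }
};

int main()
{
    metadata_handler pods;
    // The inspector collects k8s metadata periodically (every 1s by default);
    // two iterations are enough to show the error path and the retry.
    for(int tick = 0; tick < 2; ++tick)
    {
        try
        {
            pods.collect_data();
            std::cout << "tick " << tick << ": connected, watching=" << pods.m_watching << "\n";
        }
        catch(const std::exception& e)
        {
            std::cout << "tick " << tick << ": error (" << e.what()
                      << "), resetting handler state and retrying next cycle\n";
            pods.reset();
        }
    }
    return 0;
}
```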

Testing:
I was able to set up a k8s cluster that throttles Falco's requests when retrieving pod metadata:

Thu Sep 15 13:07:23 2022: [libs]: Socket handler (k8s_pod_handler_state) Retrieving all data in blocking mode ...
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_state)::collect_data() error [https://192.168.1.70:6443], receiving data from /api/v1/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false... m_blocking_socket=1, m_watching=0 SSL Socket handler (k8s_pod_handler_state): Connection closed.
...
Thu Sep 15 13:07:23 2022: [libs]: Socket handler (k8s_pod_handler_state) Retrieving all data in blocking mode ...
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler adding event, (k8s_pod_handler_state) has 0 events from https://192.168.1.70:6443
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler added event, (k8s_pod_handler_state) has 1 events from https://192.168.1.70:6443
...
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_state) switching to watch connection for https://192.168.1.70:6443/api/v1/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) k8s_handler::connect() adding handler to collector
Thu Sep 15 13:07:23 2022: [libs]: Socket collector: handler [k8s_pod_handler_event] added socket (25)
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event)::collect_data(), checking connection to https://192.168.1.70:6443
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) check_enabled() enabling socket in collector
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event)::collect_data() [https://192.168.1.70:6443], requesting data from /api/v1/watch/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false... m_blocking_socket=0, m_watching=1
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) sending request to https://192.168.1.70:6443/api/v1/watch/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false
Thu Sep 15 13:07:23 2022: [libs]: Socket handler (k8s_pod_handler_event) socket=25, m_ssl_connection=94519583057200
Thu Sep 15 13:07:23 2022: [libs]: GET /api/v1/watch/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false HTTP/1.1
...
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event)::collect_data(), 9 events from https://192.168.1.70:6443/api/v1/watch/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event)::collect_data(), data from https://192.168.1.70:6443/api/v1/watch/pods?fieldSelector=status.phase!=Failed,status.phase!=Unknown,status.phase!=Succeeded,spec.nodeName=control-plane&pretty=false, event count=9
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) dependency (k8s_namespace_handler_event) ready: 1
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) processing event data:
{"apiVersion":"v1","items":[{"containerStatuses":[{"containerID":"containerd://c1be00ed7bf4ef929a88028810419f0ec46368b9cb2e292326ddf1ec3c31607b","image":"k8s.gcr.io/kube-scheduler:v1.23.10","imageID":"k8s.gcr.io/kube-scheduler@sha256:07d72b53818163ad25b49693a0b9d35d5eb1d1aa2e6363f87fac8ab903164a0e","lastState":{},"name":"kube-scheduler","ready":true,"restartCount":0,"started":true,"state":{"running":{"startedAt":"2022-09-14T17:37:56Z"}}}],"containers":[{"command":["kube-scheduler","--authentication-kubeconfig=/etc/kubernetes/scheduler.conf","--authorization-kubeconfig=/etc/kubernetes/scheduler.conf","--bind-address=127.0.0.1","--kubeconfig=/etc/kubernetes/scheduler.conf","--leader-elect=true"],"image":"k8s.gcr.io/kube-scheduler:v1.23.10","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":8,"httpGet":{"host":"127.0.0.1","path":"/healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"name":"kube-scheduler","resources":{"requests":{"cpu":"100m"}},"startupProbe":{"failureThreshold":24,"httpGet":{"host":"127.0.0.1","path":"/healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/etc/kubernetes/scheduler.conf","name":"kubeconfig","readOnly":true}]}],"hostIP":"10.0.2.15","initContainerStatuses":null,"labels":{"component":"kube-scheduler","tier":"control-plane"},"name":"kube-scheduler-control-plane","namespace":"kube-system","nodeName":"control-plane","phase":"Running","podIP":"10.0.2.15","timestamp":"2022-09-14T17:38:06Z","uid":"d66747bb-77cc-4527-9ac2-1e55b8a50e04"}],"kind":"Pod","type":"ADDED"}
Thu Sep 15 13:07:23 2022: [libs]: K8s [ADDED, Pod, kube-scheduler-control-plane, d66747bb-77cc-4527-9ac2-1e55b8a50e04]
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) processing event data:
{"apiVersion":"v1","items":[{"containerStatuses":[{"containerID":"containerd://c156f8d943ace2c33e8ca4f7b3554003ce2cd0102e4786851dbc615d6b6709c1","image":"k8s.gcr.io/kube-apiserver:v1.23.10","imageID":"k8s.gcr.io/kube-apiserver@sha256:a3b6ba0b713cfba71e161e84cef0b2766b99c0afb0d96cd4f1e0f7d6ae0b0467","lastState":{},"name":"kube-apiserver","ready":true,"restartCount":0,"started":true,"state":{"running":{"startedAt":"2022-09-14T17:37:56Z"}}}],"containers":[{"command":["kube-apiserver","--advertise-address=192.168.1.70","--allow-privileged=true","--audit-log-path=/var/lib/k8s-audit/k8s-audit.log","--audit-policy-file=/var/lib/k8s-audit/audit-policy.yaml","--audit-webhook-config-file=/var/lib/k8s-audit/webhook-config.yaml","--authorization-mode=Node,RBAC","--client-ca-file=/etc/kubernetes/pki/ca.crt","--enable-admission-plugins=NodeRestriction","--enable-bootstrap-token-auth=true","--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt","--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt","--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key","--etcd-servers=https://127.0.0.1:2379","--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt","--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key","--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname","--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt","--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key","--requestheader-allowed-names=front-proxy-client","--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt","--requestheader-extra-headers-prefix=X-Remote-Extra-","--requestheader-group-headers=X-Remote-Group","--requestheader-username-headers=X-Remote-User","--secure-port=6443","--service-account-issuer=https://kubernetes.default.svc.cluster.local","--service-account-key-file=/etc/kubernetes/pki/sa.pub","--service-account-signing-key-file=/etc/kubernetes/pki/sa.key","--service-cluster-ip-range=10.16.0.0/12","--tls-cert-file=/etc/kubernetes/pki/apiserver.crt","--tls-private-key-file=/etc/kubernetes/pki/apiserver.key"],"image":"k8s.gcr.io/kube-apiserver:v1.23.10","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":8,"httpGet":{"host":"192.168.1.70","path":"/livez","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"name":"kube-apiserver","readinessProbe":{"failureThreshold":3,"httpGet":{"host":"192.168.1.70","path":"/readyz","port":6443,"scheme":"HTTPS"},"periodSeconds":1,"successThreshold":1,"timeoutSeconds":15},"resources":{"requests":{"cpu":"250m"}},"startupProbe":{"failureThreshold":24,"httpGet":{"host":"192.168.1.70","path":"/livez","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/etc/ssl/certs","name":"ca-certs","readOnly":true},{"mountPath":"/etc/ca-certificates","name":"etc-ca-certificates","readOnly":true},{"mountPath":"/etc/pki","name":"etc-pki","readOnly":true},{"mountPath":"/var/lib/k8s-audit/","name":"k8s-audit-data"},{"mountPath":"/etc/kubernetes/pki","name":"k8s-certs","readOnly":true},{"mountPath":"/usr/local/share/ca-certificates","name":"usr-local-share-ca-certificates","readOnly":true},{"mountPath":"/usr/share/ca-certificates","name":"usr-share-ca-certificates","readOnly":true}]}],"hostIP":"10.0.2.15","initContainerStatuses":null,"labels":{"component":"kube-apiserver",
"tier":"control-plane"},"name":"kube-apiserver-control-plane","namespace":"kube-system","nodeName":"control-plane","phase":"Running","podIP":"10.0.2.15","timestamp":"2022-09-14T17:38:06Z","uid":"0f830c92-6ccd-4bc8-b958-8e98408a61a8"}],"kind":"Pod","type":"ADDED"}
Thu Sep 15 13:07:23 2022: [libs]: K8s [ADDED, Pod, kube-apiserver-control-plane, 0f830c92-6ccd-4bc8-b958-8e98408a61a8]
Thu Sep 15 13:07:23 2022: [libs]: k8s_handler (k8s_pod_handler_event) processing event data:
{"apiVersion":"v1","items":[{"containerStatuses":[{"containerID":"containerd://4de54eed69d8811be9ea7a051e851ba885e3596d8d439d824050364e7de1371f","image":"k8s.gcr.io/kube-controller-manager:v1.23.10","imageID":"k8s.gcr.io/kube-controller-manager@sha256:91c9d5d25c193cd1a2edd5082a3af479e85699bb46aaa58652d17b0f3b442c0f","lastState":{},"name":"kube-controller-manager","ready":true,"restartCount":0,"started":true,"state":{"running":{"startedAt":"2022-09-14T17:37:56Z"}}}],"containers":[{"command":["kube-controller-manager","--allocate-node-cidrs=true","--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf","--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf","--bind-address=127.0.0.1","--client-ca-file=/etc/kubernetes/pki/ca.crt","--cluster-cidr=10.244.0.0/16","--cluster-name=clusterone","--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt","--cluster-signing-key-file=/etc/kubernetes/pki/ca.key","--controllers=*,bootstrapsigner,tokencleaner","--kubeconfig=/etc/kubernetes/controller-manager.conf","--leader-elect=true","--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt","--root-ca-file=/etc/kubernetes/pki/ca.crt","--service-account-private-key-file=/etc/kubernetes/pki/sa.key","--service-cluster-ip-range=10.16.0.0/12","--use-service-account-credentials=true"],"image":"k8s.gcr.io/kube-controller-manager:v1.23.10","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":8,"httpGet":{"host":"127.0.0.1","path":"/healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"name":"kube-controller-manager","resources":{"requests":{"cpu":"200m"}},"startupProbe":{"failureThreshold":24,"httpGet":{"host":"127.0.0.1","path":"/healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":15},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/etc/ssl/certs","name":"ca-certs","readOnly":true},{"mountPath":"/etc/ca-certificates","name":"etc-ca-certificates","readOnly":true},{"mountPath":"/etc/pki","name":"etc-pki","readOnly":true},{"mountPath":"/usr/libexec/kubernetes/kubelet-plugins/volume/exec","name":"flexvolume-dir"},{"mountPath":"/etc/kubernetes/pki","name":"k8s-certs","readOnly":true},{"mountPath":"/etc/kubernetes/controller-manager.conf","name":"kubeconfig","readOnly":true},{"mountPath":"/usr/local/share/ca-certificates","name":"usr-local-share-ca-certificates","readOnly":true},{"mountPath":"/usr/share/ca-certificates","name":"usr-share-ca-certificates","readOnly":true}]}],"hostIP":"10.0.2.15","initContainerStatuses":null,"labels":{"component":"kube-controller-manager","tier":"control-plane"},"name":"kube-controller-manager-control-plane","namespace":"kube-system","nodeName":"control-plane","phase":"Running","podIP":"10.0.2.15","timestamp":"2022-09-14T17:38:07Z","uid":"cbeeebcf-cd6f-4640-a7c7-d1c812f37411"}],"kind":"Pod","type":"ADDED"}
...

We can see that the error is logged but Falco does not exit. It retries later and, after the initial metadata fetch, switches to watching mode for the k8s_pod_handler and processes all the pods.

UPDATE — tests:
I have also tested these changes in a k8s cluster where the api-server throttles the initial requests made by Falco. The following snippet shows how Falco behaves in case of error and that it is still able to construct the k8s state. Furthermore, it shows that a rule using the k8s fields works as expected when triggered.

Defaulted container "falco" out of: falco, falco-driver-loader (init)
Fri Sep 16 09:51:38 2022: Falco version 0.32.2
Fri Sep 16 09:51:38 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Sep 16 09:51:38 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Fri Sep 16 09:51:38 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Fri Sep 16 09:51:38 2022: Starting internal webserver, listening on port 8765
k8s_handler (k8s_pod_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_pod_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_pod_handler_state): Connection closed.
k8s_handler (k8s_replicationcontroller_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_replicationcontroller_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_replicationcontroller_handler_state): Connection closed.
k8s_handler (k8s_service_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_service_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_service_handler_state): Connection closed.
k8s_handler (k8s_replicaset_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_replicaset_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_replicaset_handler_state): Connection closed.
k8s_handler (k8s_daemonset_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_daemonset_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_daemonset_handler_state): Connection closed.
k8s_handler (k8s_deployment_handler_state::collect_data()[https://10.16.0.1] an error occurred while receiving data from k8s_deployment_handler_state, m_blocking_socket=1, m_watching=0, SSL Socket handler (k8s_deployment_handler_state): Connection closed.
09:54:49.269656628: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s.ns=default k8s.pod=netshoot-7dd548498d-lwc9p container=ecb3a5aeb796 shell=bash parent=runc cmdline=bash terminal=34816 container_id=ecb3a5aeb796 image=docker.io/nicolaka/netshoot)
09:54:57.320291138: Error File below a known binary directory opened for writing (user=root user_loginuid=-1 command=touch /bin/testing file=/bin/testing parent=bash pcmdline=bash gparent=<NA> container_id=ecb3a5aeb796 image=docker.io/nicolaka/netshoot pod_name=netshoot-7dd548498d-lwc9p deployment_name=netshoot) k8s.ns=default k8s.pod=netshoot-7dd548498d-lwc9p container=ecb3a5aeb796

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

The branch is temporarily based on 0.7.0 to make it easy to test with Falco 0.32.2 (a released version affected by the bug).

Does this PR introduce a user-facing change?:

NONE

@alacuku alacuku (Member, Author) commented Sep 15, 2022

Updated the PR description with some info regarding the testing.

@FedeDP FedeDP (Contributor) commented Sep 15, 2022

/milestone 0.9.0

@poiana poiana added this to the 0.9.0 milestone Sep 15, 2022
@Andreagit97 Andreagit97 (Member) left a comment

Thank you for this, only a few people have the guts to stand up to our k8s client :)!

Resolved review comments on userspace/libsinsp/k8s_handler.cpp
Comment on lines 393 to 396
// When the connection is closed we reset the connection state of the handler.
// It could happen when the api-server throttles our requests.
m_connecting = false;
m_connected = false;
A reviewer (Member) asked:

Maybe a stupid question: do we have to do the same thing also in get_all_data_secure, or is it enough to do it in get_all_data_unsecure?

@alacuku alacuku (Member, Author) replied Sep 16, 2022

That's a leftover from previous experiments, nice catch!
We do not need to reset the state of the socket_handler: we know we are able to contact the api-server through that socket, so we keep it alive, and the next time Falco tries to collect the k8s data it switches to watch mode. It does not need to pre-fetch all the data; the api-server will send all the existing objects as ADDED events and our logic will be able to construct the k8s state.
Another solution could be to remove the socket_handler and have Falco recreate it (and the related handlers), but that would be more expensive if the api-server keeps throttling the requests.
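As a rough illustration of why the pre-fetch can be skipped (made-up types, not the real libsinsp code): once the handler goes straight to watch mode, the api-server replays the objects that already exist as ADDED events, and applying those events is enough to rebuild the same state.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct k8s_event
{
    std::string type; // ADDED, MODIFIED, DELETED
    std::string uid;
    std::string name;
};

int main()
{
    // State keyed by object UID, as a stand-in for the k8s metadata cache.
    std::map<std::string, std::string> pods;

    // A watch stream opened without a prior bulk fetch: the api-server
    // first sends the objects that already exist as ADDED events.
    std::vector<k8s_event> stream = {
        {"ADDED",    "d66747bb", "kube-scheduler-control-plane"},
        {"ADDED",    "0f830c92", "kube-apiserver-control-plane"},
        {"MODIFIED", "0f830c92", "kube-apiserver-control-plane"},
        {"DELETED",  "d66747bb", "kube-scheduler-control-plane"},
    };

    for(const auto& ev : stream)
    {
        if(ev.type == "ADDED" || ev.type == "MODIFIED")
            pods[ev.uid] = ev.name;   // replayed objects populate the state
        else if(ev.type == "DELETED")
            pods.erase(ev.uid);
    }

    std::cout << "pods known after the stream: " << pods.size() << "\n";
    for(const auto& [uid, name] : pods)
        std::cout << uid << " -> " << name << "\n";
    return 0;
}
```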

@leogr leogr (Member) commented Sep 16, 2022

As discussed with @Andreagit97 and @FedeDP, unfortunately we are too late to test this, so:

/milestone 0.10.0

@poiana poiana modified the milestones: 0.9.0, 0.10.0 Sep 16, 2022
It could happen that the k8s api-server throttles the initial requests made to retrieve the k8s metadata.
When this happens we do not exit, but catch the exception generated by the temporary error and handle it.
We reset the state of the socket_handler, meaning that the connection has to be initialized again before it is
used, and the same is done for the k8s_handler associated with the socket_handler. The involved k8s_handler is
put back in its initial state, where it needs to connect to the api-server before it can be used to retrieve data.
Another solution could be to destroy and recreate the involved handlers, but that would be too expensive since
the throttling could persist for relatively long periods.

Signed-off-by: Aldo Lacuku <[email protected]>
@alacuku alacuku force-pushed the kcl/do-not-exit-on-connection-closed branch from 28c5ea8 to ff33177 on September 16, 2022 10:40
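For illustration only, a tiny sketch of the reset described in the commit message above, using hypothetical stand-in types (the real logic lives in k8s_handler.cpp / the socket_handler and differs in detail): on a connection-closed error both objects stay alive and are only flagged as needing a fresh connection, which is cheaper than destroying and recreating them while the throttling persists.

```cpp
#include <iostream>

// Hypothetical, simplified stand-ins: the point is that on a throttling
// error nothing is destroyed; both objects are only flagged as needing a
// fresh connection before the next use.
struct socket_handler_stub
{
    bool m_connected = true;
    void reset_connection() { m_connected = false; } // must connect again before use
};

struct k8s_handler_stub
{
    socket_handler_stub m_sock;
    bool m_connecting = false;
    bool m_connected  = false;

    void handle_connection_closed()
    {
        // Cheap recovery path: reset state instead of delete/new, since the
        // throttling may persist and rebuilding handlers every second adds cost.
        m_sock.reset_connection();
        m_connecting = false;
        m_connected  = false;
    }
};

int main()
{
    k8s_handler_stub pods;
    pods.handle_connection_closed();
    std::cout << std::boolalpha
              << "socket connected: " << pods.m_sock.m_connected
              << ", handler connected: " << pods.m_connected << "\n";
    return 0;
}
```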
@Andreagit97 Andreagit97 (Member) commented

Same as #591

I would give it an attempt; let's see if during the release process we are confident enough to merge it.
/milestone 0.9.0

@poiana poiana modified the milestones: 0.10.0, 0.9.0 Sep 16, 2022
@leogr leogr (Member) left a comment

Great 👍

@poiana poiana (Contributor) commented Sep 16, 2022

LGTM label has been added.

Git tree hash: 4e7375b74cc9640b6475ddf6cd26c548c57d9654

@poiana poiana merged commit 40dd3b4 into falcosecurity:master Sep 16, 2022
@poiana poiana (Contributor) commented Sep 16, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alacuku, leogr, LucaGuerra

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
