fix(core): filter watched nodes and pods server-side #1676

Merged
merged 1 commit into from
Mar 12, 2024

Conversation

@dqsully dqsully (Contributor) commented Mar 11, 2024

Purpose of PR?:

Rather than each DaemonSet pod listing/watching every node and pod in the cluster, this tells Kubernetes which node and pods we're interested in for each DaemonSet pod, reducing overhead significantly in larger clusters.
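The server-side filtering described above is done with Kubernetes field selectors: instead of watching all pods and filtering client-side, the API server is asked to return only the pods scheduled onto one node (`spec.nodeName=<node>`) and only the node itself (`metadata.name=<node>`). A minimal sketch of the selector strings involved (hypothetical helpers, not KubeArmor's actual functions; the real code passes these via client-go's `ListOptions`):

```go
package main

import "fmt"

// buildPodSelector returns the field selector that asks the API server to
// stream only pods scheduled onto the given node.
func buildPodSelector(nodeName string) string {
	return fmt.Sprintf("spec.nodeName=%s", nodeName)
}

// buildNodeSelector restricts a node list/watch to a single node by name.
func buildNodeSelector(nodeName string) string {
	return fmt.Sprintf("metadata.name=%s", nodeName)
}

func main() {
	fmt.Println(buildPodSelector("worker-1"))  // spec.nodeName=worker-1
	fmt.Println(buildNodeSelector("worker-1")) // metadata.name=worker-1
}
```

With these selectors the API server does the filtering, so each DaemonSet pod receives watch events only for its own node and pods instead of the whole cluster.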

Does this PR introduce a breaking change? Yes

If the changes in this PR are manually verified, list down the scenarios covered: Tested changes in fork of KubeArmor, currently running in production.

Additional information for reviewer? :

This PR changes KubeArmor's hostname behavior slightly. KubeArmor used to watch all nodes in the cluster and filter them client-side by matching a hostname prefix, but that still required listing every node. The only way to filter nodes server-side is to tell Kubernetes exactly which node we're interested in.

KubeArmor's previous logic was to check the KUBEARMOR_NODENAME environment variable first (set by the Kubernetes configuration, pulling from a fieldRef), and if that wasn't set, fall back to the truncated OS hostname to determine which node to watch. In my experience a node's OS hostname has always mapped to the node's name in Kubernetes, so to avoid relying strictly on KUBEARMOR_NODENAME being set, I've stopped truncating the OS hostname in KubeArmor.
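The fallback described above can be sketched as follows (a sketch of the described behavior, not KubeArmor's exact function; `resolveNodeName` is a hypothetical name):

```go
package main

import (
	"fmt"
	"os"
)

// resolveNodeName prefers the KUBEARMOR_NODENAME value injected via a
// fieldRef in the DaemonSet spec; if it is unset, it falls back to the
// OS hostname, now used untruncated.
func resolveNodeName(envValue, osHostname string) string {
	if envValue != "" {
		return envValue
	}
	return osHostname
}

func main() {
	hostname, _ := os.Hostname()
	fmt.Println(resolveNodeName(os.Getenv("KUBEARMOR_NODENAME"), hostname))
}
```

The corresponding DaemonSet spec would set the env var from the downward API, e.g. `fieldRef: {fieldPath: spec.nodeName}`, so the value is the definitive Kubernetes node name.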

This has one other major effect on alerting/monitoring, where the node's hostname is logged. Rather than logging just the truncated hostname, KubeArmor will now log the full hostname. In cases where OS hostnames match Kubernetes node names exactly, this makes it much easier to correlate unexpected behavior with a specific node.

One thing I haven't done, but that might make sense, is to use KUBEARMOR_NODENAME as the global default hostname while still allowing the end-user to override it with a CLI argument if necessary. We'd keep the KUBEARMOR_NODENAME checks where they are in this PR in case an end-user did override the CLI argument, since KUBEARMOR_NODENAME is intended to be the definitive Kubernetes node name for the pod, whereas the cfg.GlobalCfg.Host variable is also used for lots of logging. This mostly depends on whether, in situations where the Kubernetes node name differs from the OS hostname, the end-user would prefer the Kubernetes node name or the OS hostname to be logged.
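The precedence suggested above would look roughly like this (hypothetical sketch of a possible follow-up, not code in this PR; `hostValue` and its parameters are illustrative names):

```go
package main

import "fmt"

// hostValue sketches the proposed precedence for cfg.GlobalCfg.Host:
// an explicit CLI flag wins, then KUBEARMOR_NODENAME, then the
// untruncated OS hostname.
func hostValue(cliFlag, envNodeName, osHostname string) string {
	switch {
	case cliFlag != "":
		return cliFlag // end-user override always wins
	case envNodeName != "":
		return envNodeName // definitive Kubernetes node name
	default:
		return osHostname // best-effort fallback
	}
}

func main() {
	fmt.Println(hostValue("", "k8s-node-1", "ip-10-0-0-1"))
}
```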

Checklist:

  • Bug fix. Fixes #
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • PR Title follows the convention of <type>(<scope>): <subject>
  • Commit has unit tests
  • Commit has integration tests

@daemon1024 daemon1024 (Member) left a comment

LGTM 👌🏽

Rather than each DaemonSet pod listing/watching
every node and pod in the cluster, this tells
Kubernetes which node and pods we're interested in
for each DaemonSet, reducing overhead
significantly in larger clusters.

Signed-off-by: Dakota Sullivan <[email protected]>
@daemon1024 daemon1024 force-pushed the filter-k8s-serverside branch from 45507ed to 4209d54 Compare March 12, 2024 13:59
@DelusionalOptimist DelusionalOptimist (Member) left a comment

LGTM.

@DelusionalOptimist DelusionalOptimist merged commit 7ff5c83 into kubearmor:main Mar 12, 2024
17 checks passed
@DelusionalOptimist DelusionalOptimist mentioned this pull request Mar 19, 2024
24 tasks
3 participants