Wazuh Configurability - Kubernetes #85

Open · pchristos opened this issue Apr 7, 2020 · 6 comments

@pchristos

| Wazuh version | Component | Install type | Install method | Platform |
|---|---|---|---|---|
| 3.12.0-1 | wazuh-manager | manager | docker | EKS AMI type AL2_x86_64 version 1.14.9-20200228 |

Hello,

I've been working on deploying Wazuh on EKS using Helm. At the moment, I have an end-to-end working HA setup with Wazuh (1 master and 2 worker nodes) + ELK running on top of Kubernetes.

However, I've come across a few issues regarding its configurability. For instance:

  1. I see how you use a LoadBalancer service to expose the Wazuh API to the world and have it perform TLS termination. However, a setup with a ClusterIP service + Ingress does not seem easy to configure (see the sketch after this list for the setup I'm after). How can I disable HTTPS for the Wazuh API, so that TLS termination can be handled by my cloud provider's external load balancer? I've played around with API_GENERATE_CERTS, but that doesn't seem to do the trick. It appears that HTTPS is enabled by default in /var/ossec/api/configuration/config.js, meaning that API_GENERATE_CERTS is effectively a no-op. Do I have to edit /var/ossec/api/configuration/preloaded_vars.conf and re-run either or both of install_api.sh and configure_api.sh?

  2. Due to the above, this if statement seems to always evaluate to false, since HTTPS is enabled by default and server.crt is always present. I believe it's taken from the configuration-template dir even for fresh installations.

  3. Regarding the workers' StatefulSet - is there any particular reason you've created two StatefulSet definitions, one per worker pod, instead of a single manifest with 2 replicas? Is there some limitation that I'm missing here?

  4. The configuration of the master and worker nodes looks very similar. The only actual difference I've noticed is in the <cluster> block, namely <node_name> and <node_type>. I'm just wondering whether these two configuration files are actually meant to be so similar. For instance, does it make sense for the <auth> block to be part of the worker configuration? Isn't that what dictates the behavior of the authd registration service? And isn't that service supposed to be exposed solely by the master node?
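
For reference, here's a minimal sketch of the ClusterIP + Ingress setup I have in mind, assuming an nginx ingress controller and the default API port (55000); names, hosts, and labels are placeholders, and it only makes sense once the API itself serves plain HTTP:

```yaml
# Hypothetical ClusterIP Service fronting the Wazuh API on the master pod.
apiVersion: v1
kind: Service
metadata:
  name: wazuh-api
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager        # placeholder labels; match your master pod
    node-type: master
  ports:
    - name: api
      port: 55000             # Wazuh API default port
      targetPort: 55000
---
# Ingress terminating TLS at the edge instead of inside the pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wazuh-api
spec:
  ingressClassName: nginx     # assumption: nginx ingress controller
  tls:
    - hosts:
        - wazuh-api.example.com
      secretName: wazuh-api-tls   # TLS handled here, so the API can speak HTTP
  rules:
    - host: wazuh-api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wazuh-api
                port:
                  number: 55000
```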

What I'd expect:

  1. To be able to tweak various settings via ConfigMaps and, especially, environment variables. At the moment, it's not crystal clear how to do that without hacking around (a sketch of what I have in mind follows this list).

  2. To be able to switch from HTTP to HTTPS and vice versa. TBH this looks more like a bug, no?

  3. A single StatefulSet manifest for worker nodes with configurable number of replicas.

  4. Different ossec.conf per node type, so that responsibilities per node type are clear, unless this is not the case.
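
As a sketch of expectation (1): tunables surfaced as environment variables via a ConfigMap, so they can be changed without rebuilding the image. Object names are illustrative; API_GENERATE_CERTS is the variable discussed above:

```yaml
# Illustrative ConfigMap holding manager tunables as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: wazuh-manager-env
data:
  API_GENERATE_CERTS: "False"
---
# In the wazuh-manager container spec, pull the whole map in via envFrom:
#   envFrom:
#     - configMapRef:
#         name: wazuh-manager-env
```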

Thanks in advance!

@pchristos (Author) commented Apr 7, 2020

I tried to use /var/ossec/api/configuration/preloaded_vars.conf and configure_api.sh to tackle (1) above.

A few remarks:

  • IMHO configure_api.sh is not very straightforward to use, especially due to the prompts for user input. I shouldn't have to define most of the settings in preloaded_vars.conf just to avoid the prompts. I'd expect the script to provide at least a -y flag to auto-answer "yes" to all prompts in a dynamic environment, where user intervention is nearly impossible.
  • The change_auth function of configure_api.sh expects USER and PASS to be defined. However, both variables are empty to begin with and there's no indication of when/where they should be set, which causes htpasswd to fail. Shouldn't the script (a) point out that preloaded_vars.conf needs editing, (b) use a sane default, e.g. foo:bar as in the Dockerfile, or (c) use set -u to catch undefined variables (see the sketch after this list)?
  • Is it at all possible for this line to conflict with the aforementioned change_auth function?
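
To illustrate remark (c), here is a minimal sketch of what set -u buys, assuming change_auth boils down to an htpasswd call with $USER and $PASS (the htpasswd file path below is a placeholder):

```sh
#!/bin/sh
# With `set -u`, referencing an unset variable aborts the script with a
# clear "unbound variable" error instead of silently handing htpasswd an
# empty string, which is roughly what happens in change_auth today.
set -eu

# USER and PASS would normally come from preloaded_vars.conf. If either is
# missing, the next line stops the script with e.g. "PASS: unbound variable".
htpasswd -b -c /var/ossec/api/configuration/auth/htpasswd "$USER" "$PASS"
```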

Sorry for the hammering, but we are really excited to see Wazuh in production soon!

Cheers!

@manuasir transferred this issue from wazuh/wazuh Apr 22, 2020
@xr09 (Contributor) commented Apr 22, 2020

Hello @pchristos,

We have moved this issue to the k8s repo, since it's more relevant here.

  1. You could build a predefined config.js file with HTTPS disabled and use a ConfigMap to mount it (see the sketch after this list).

  2. With (1) done, you could set the API_GENERATE_CERTS variable to False.

  3. Due to the way synchronization works in the Wazuh cluster, it is not possible to define the workers as replicas and let k8s manage them by itself. It's not like an app worker with shared state in a common DB: here each worker holds both its own data and its share of the cluster state, and losing it can affect the health of the cluster as a whole.

  4. This could be a new issue for the core repo (wazuh/wazuh), there are a lot of enhancements we could do to ossec.conf.
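
Here is a sketch of how (1) and (2) fit together, assuming the config.js path mentioned earlier in this thread and the stock config.https switch (verify both against your image before relying on them):

```yaml
# Hypothetical ConfigMap carrying a predefined config.js with HTTPS disabled.
apiVersion: v1
kind: ConfigMap
metadata:
  name: wazuh-api-config
data:
  config.js: |
    // ...rest of the stock config.js kept as-is...
    config.https = "no";   // assumption: the API config uses "yes"/"no" here
---
# In the wazuh-manager container spec, mount it over the default file and
# disable certificate generation:
#   env:
#     - name: API_GENERATE_CERTS
#       value: "False"
#   volumeMounts:
#     - name: api-config
#       mountPath: /var/ossec/api/configuration/config.js
#       subPath: config.js
# volumes:
#   - name: api-config
#     configMap:
#       name: wazuh-api-config
```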

We'll take note of your remarks about configure_api.sh; there's room for improvement there. By the way, if you feel like it, PRs are welcome!

Thanks for joining us, and please let us know how it goes with your deployment.

@pchristos (Author) commented

Hey,

Thanks for the response. So, here's my two cents:

1 & 2 - So a combination of these two is required? Basically (2) cannot work without (1), right? Don't you think that makes things a bit less intuitive? I'd expect setting API_GENERATE_CERTS to just do the trick.

Regarding (3): so the problem here is shared state? Is each worker supposed to keep its own state? I believe that's something you can accomplish with a single StatefulSet definition that includes a volumeClaimTemplates block (sketched below).
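
Something along these lines is what I have in mind: a single manifest with replicas: 2, where volumeClaimTemplates gives each worker pod its own PersistentVolumeClaim (wazuh-worker-0, wazuh-worker-1, ...), so per-node state survives restarts. Image tag, labels, paths, and sizes below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-worker
spec:
  serviceName: wazuh-workers   # headless Service giving pods stable DNS names
  replicas: 2                  # scale workers by changing a single number
  selector:
    matchLabels:
      app: wazuh-manager
      node-type: worker
  template:
    metadata:
      labels:
        app: wazuh-manager
        node-type: worker
    spec:
      containers:
        - name: wazuh-manager
          image: wazuh/wazuh:3.12.0_7.6.1   # illustrative tag
          volumeMounts:
            - name: wazuh-data
              mountPath: /var/ossec/data    # illustrative persistent path
  volumeClaimTemplates:
    - metadata:
        name: wazuh-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```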

@rjmoseley (Contributor) commented

Hey,

I'm very much new to Wazuh, but I've been taking a look at some of this myself with a view to deploying to k8s. Regarding (3), I have some local changes that switch to a single worker StatefulSet and ConfigMap, which, along with the sed in wazuh/wazuh-docker#261, cleans this up and uses replicas to manage and scale the worker nodes (sketched below). Happy to open a PR for this if that'd be useful for others? I've deployed it myself and Wazuh seems happy.
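
For context, the kind of entrypoint substitution involved looks roughly like this; a sketch, not the exact diff from wazuh/wazuh-docker#261, and the placeholder token is an assumption:

```sh
# Each StatefulSet replica gets a stable hostname (wazuh-worker-0,
# wazuh-worker-1, ...), so one shared ossec.conf template can be
# specialized per pod at startup by rewriting <node_name>.
sed -i "s|<node_name>NODE_NAME_PLACEHOLDER</node_name>|<node_name>$(hostname)</node_name>|" \
  /var/ossec/etc/ossec.conf
```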

@manuasir (Contributor) commented

Hello @rjmoseley, that sounds interesting. Feel free to open a PR so we can evaluate those changes.

@khasim4A2 commented

Hey @pchristos, I'm trying to do the same: deploying Wazuh on Kubernetes with a ClusterIP service + Ingress. Could you please share the details of how you configured it?

@xr09 removed their assignment Mar 31, 2022