Wazuh Configurability - Kubernetes #85
Comments
I tried to use […] A few remarks: […]
Sorry for the hammering, but we are really excited to see Wazuh in production soon! Cheers!
Hello @pchristos, We have moved this issue to the k8s repo, since it's more related.
We'll take note of your remarks about […]. Thanks for joining us, and please let us know how it goes with your deployment.
Hey, Thanks for the response. So, here's my two cents: Regarding (1) & (2) - so a combination of these two is required? Basically (2) cannot work without (1), right? Don't you think that makes things a bit less intuitive? I'd expect setting […] Regarding (3) - so the problem here is shared state? Is each worker supposed to keep its own state? I believe that's something you can accomplish with a single […]
Hey, Very much new to Wazuh, but I have been taking a look at some of this myself with a view to deploying to k8s. Regarding (3), I've got some local changes which switch to a single worker […]
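For reference, the single-worker-`StatefulSet` approach mentioned above could be sketched roughly as follows. This is a minimal, hypothetical manifest - the image tag, labels, service name, and storage size are assumptions for illustration, not taken from this repo:

```yaml
# Hypothetical sketch: one StatefulSet for all Wazuh workers,
# scaled via `replicas` instead of one manifest per worker pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-worker
spec:
  replicas: 2                  # scale workers here
  serviceName: wazuh-workers   # headless service gives each pod a stable DNS name
  selector:
    matchLabels:
      app: wazuh-worker
  template:
    metadata:
      labels:
        app: wazuh-worker
    spec:
      containers:
        - name: wazuh-worker
          image: wazuh/wazuh:latest   # assumed image, for illustration only
          ports:
            - containerPort: 1514     # agent events
  volumeClaimTemplates:
    - metadata:
        name: wazuh-worker-storage    # per-pod PVC keeps each worker's state separate
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Note that `volumeClaimTemplates` gives each replica its own `PersistentVolumeClaim`, which is what addresses the shared-state concern raised earlier in the thread.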
Hello @rjmoseley, that sounds interesting. Feel free to open a PR so we can evaluate those changes.
Hey @pchristos, I'm trying to do the same: deploying Wazuh on Kubernetes with a ClusterIP service + Ingress. Could you please share the details of how you configured this?
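For what it's worth, the `ClusterIP` + `Ingress` setup being discussed might look roughly like this. This is a hedged sketch, not a tested configuration: the service name, selector labels, hostname, and TLS secret are hypothetical and would need to match your actual deployment:

```yaml
# Hypothetical sketch: expose the Wazuh API via a ClusterIP service
# plus an Ingress, so TLS terminates at the ingress controller / cloud
# load balancer instead of inside the Wazuh API container.
apiVersion: v1
kind: Service
metadata:
  name: wazuh-api
spec:
  type: ClusterIP
  selector:
    app: wazuh-master
  ports:
    - port: 55000        # Wazuh API default port
      targetPort: 55000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wazuh-api
spec:
  tls:
    - hosts: ["wazuh.example.com"]
      secretName: wazuh-api-tls    # assumed secret, provisioned separately
  rules:
    - host: wazuh.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wazuh-api
                port:
                  number: 55000
```

This only works if the API itself serves plain HTTP behind the ingress, which is exactly the configurability problem raised in this issue.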
Hello,
I've been working on deploying Wazuh on EKS using Helm. At the moment, I have an end-to-end working HA setup with Wazuh (1 master and 2 worker nodes) + ELK running on top of Kubernetes.
However, I've come across a few issues regarding its configurability. For instance:
1. I see how you use a `LoadBalancer` service to expose the Wazuh API to the world and allow it to perform TLS termination. However, it seems like a setup with a `ClusterIP` service + `Ingress` is not easy to configure. How can I disable HTTPS for the Wazuh API, so that HTTPS termination can be handled by my cloud provider's external load balancer? I've played around with `API_GENERATED_CERTS`, but that doesn't seem to do the trick. It appears that HTTPS is enabled by default in `/var/ossec/api/configuration/config.js`, meaning that `API_GENERATED_CERTS` is effectively a no-op. Do I have to edit `/var/ossec/api/configuration/preloaded_vars.conf` and re-run either or both of `install_api.sh` and `configure_api.sh`?
2. Due to the above, this `if` statement seems to always evaluate to false, since HTTPS is enabled by default and `server.crt` is always present. I believe it's taken from the `configuration-template` dir even for fresh installations.
3. Regarding the workers' `StatefulSet` - is there any particular reason you've created two `StatefulSet` definitions, one per worker pod, instead of a single manifest with 2 replicas? Is there any sort of limitation that I'm missing here?
4. The configuration of the master and worker nodes looks very similar. The only actual difference I've noticed is in the `<cluster>` block, regarding `<node_name>` and `<node_type>`. I'm just wondering whether these two configuration files are actually meant to be so similar. For instance, does it actually make sense for the `<auth>` block to be part of the worker configuration? Isn't this what dictates the behavior of the `authd` registration service? Isn't this service supposed to be exposed solely by the master node?

What I'd expect:

- To be able to tweak various settings via `ConfigMap`s and, especially, environment variables. At the moment, it's not crystal clear how to do that without hacking around.
- To be able to switch from HTTP to HTTPS and vice versa. TBH this looks more like a bug, no?
- A single `StatefulSet` manifest for worker nodes with a configurable number of `replicas`.
- A different `ossec.conf` per node type, so that responsibilities per node type are clear, unless this is not the case.

Thanks in advance!
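As an illustration of the `ConfigMap`-based approach suggested above, shipping a per-node-type `ossec.conf` could look roughly like this. This is a sketch under assumptions: the mount path matches the standard Wazuh location, but the ConfigMap name and the (deliberately incomplete) `<cluster>` settings are made up for illustration - a real cluster block needs more fields:

```yaml
# Hypothetical sketch: a worker-specific ossec.conf shipped via a
# ConfigMap, so master and worker configurations can diverge.
apiVersion: v1
kind: ConfigMap
metadata:
  name: wazuh-worker-conf
data:
  ossec.conf: |
    <ossec_config>
      <cluster>
        <node_name>worker</node_name>
        <node_type>worker</node_type>
        <!-- no <auth> block here: agent registration stays on the master -->
      </cluster>
    </ossec_config>
---
# In the worker pod spec, the ConfigMap would then be mounted over
# the default configuration file, e.g.:
#
#   volumes:
#     - name: config
#       configMap:
#         name: wazuh-worker-conf
#   containers[0].volumeMounts:
#     - name: config
#       mountPath: /var/ossec/etc/ossec.conf
#       subPath: ossec.conf
```

Using `subPath` replaces only the single file rather than shadowing the whole `/var/ossec/etc` directory.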