feat: improve capi registration #197
Conversation
@he2ss: There is no 'kind' label on this PR. You need a 'kind' label to generate the release automatically.
Details: I am a bot created to help the crowdsecurity developers manage community feedback and contributions. You can check out my manifest file to understand my behavior and what I can do. If you want to use this for your project, you can check out the forked rr404/oss-governance-bot repository.
@he2ss: There are no area labels on this PR. You can add as many areas as you see fit.
If this is added to the configmap, will the LAPI pods pick up the same ID from the configmap, and will all the LAPI pods be registered as one security engine in the dashboard? Is that what this change is doing?
Hey @sigtriggr, yes, that's the idea. If you are running multiple LAPI pods for HA, we want them to appear as one in the console: since they use the same database on your end, they are functionally identical, and your log processors will use either of them. This also solves the issue of requiring ReadWriteMany volumes to persist the credentials when running multiple LAPI pods, as we will mount the credentials from the config map.
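As a rough illustration of the approach described above, the sketch below shows CAPI credentials stored in a ConfigMap and mounted into every LAPI replica. The ConfigMap name, data key, deployment spec, and mount path are assumptions for the example, not the chart's actual values.

```yaml
# Hypothetical ConfigMap holding the CAPI credentials produced by registration
# (names and keys are illustrative, not the chart's actual ones).
apiVersion: v1
kind: ConfigMap
metadata:
  name: crowdsec-capi-credentials
data:
  online_api_credentials.yaml: |
    url: https://api.crowdsec.net/
    login: <machine-id>
    password: <machine-password>
---
# Each LAPI replica mounts the same file, so all pods present the same
# identity to the console instead of registering separately.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crowdsec-lapi
spec:
  replicas: 2
  selector:
    matchLabels: {app: crowdsec-lapi}
  template:
    metadata:
      labels: {app: crowdsec-lapi}
    spec:
      containers:
        - name: lapi
          image: crowdsecurity/crowdsec
          volumeMounts:
            - name: capi-credentials
              mountPath: /etc/crowdsec/online_api_credentials.yaml
              subPath: online_api_credentials.yaml
      volumes:
        - name: capi-credentials
          configMap:
            name: crowdsec-capi-credentials
```

Because the credentials come from the ConfigMap rather than a per-pod volume, no ReadWriteMany PersistentVolume is needed to keep them stable across replicas.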
/kind enhancement
…lsInSecret mutually exclusive
Because PersistentVolume ReadWriteMany is not allowed in a lot of k8s clusters, persistence is not possible for the LAPI, and so the CAPI credentials that were generated keep changing.
The first solution is to have a job that registers, gets the new credentials, and patches a ConfigMap using the k8s API.
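A minimal sketch of what such a registration Job could look like, assuming a service account with RBAC permissions to create/patch ConfigMaps and an image that ships both cscli and kubectl; the resource names and image are illustrative, not the PR's actual implementation.

```yaml
# Hypothetical one-shot Job: register with CAPI, then publish the generated
# credentials in a ConfigMap so every LAPI pod can mount the same file.
apiVersion: batch/v1
kind: Job
metadata:
  name: crowdsec-capi-register
spec:
  template:
    spec:
      # Service account assumed to have RBAC permissions on ConfigMaps.
      serviceAccountName: crowdsec-capi-register
      restartPolicy: OnFailure
      containers:
        - name: register
          # Assumed image providing both cscli and kubectl.
          image: example/crowdsec-with-kubectl
          command:
            - /bin/sh
            - -c
            - |
              # Register against the Central API; cscli writes the credentials
              # to /etc/crowdsec/online_api_credentials.yaml by default.
              cscli capi register
              # Store the credentials in a ConfigMap via the Kubernetes API so
              # the LAPI pods can mount them.
              kubectl create configmap crowdsec-capi-credentials \
                --from-file=online_api_credentials.yaml=/etc/crowdsec/online_api_credentials.yaml \
                --dry-run=client -o yaml | kubectl apply -f -
```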