Installing node exporter and filebeat as daemonsets in custom namespaces #1839
Conversation
@@ -1,3 +1,5 @@
---
filebeat_helm_chart_file_name: filebeat-7.9.2.tgz
filebeat_version: "7.9.2"
# Use custom namespace for logging charts such as filebeat in case of k8s as cloud service.
logging_chart_namespace: epi-logging
Shouldn't it be configurable?
We've decided to keep it in defaults for now. Once refactor #1756 is done, it will be easier to properly implement upgrades for this case.
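For reference, a minimal sketch of how the namespace can still be overridden even while it lives in role defaults, thanks to Ansible variable precedence (the override value and the inventory/playbook file names are illustrative assumptions, not taken from the repository):

# group_vars/all.yml (or -e on the command line) takes precedence over the
# role's defaults/main.yml, so the epi-logging default is only a fallback.
logging_chart_namespace: custom-logging

# Equivalent one-off override (hypothetical playbook name):
# ansible-playbook upgrade.yml -e "logging_chart_namespace=custom-logging"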
become: false
run_once: true

environment: { KUBECONFIG: "{{ vault_location }}/../kubeconfig" }
Recently we reduced the number of tasks where we use the KUBECONFIG variable. Can't the value be taken from group_vars/all.yml and be specified at the playbook level? The same question applies to the other part of the PR.
I think for just the helm feature we have to keep this construction, but for filebeat and node_exporter we have properly defined non-localhost Ansible groups. @atsikham I think you're right. 🤔 Actually, in both cases the line
environment: { KUBECONFIG: "{{ vault_location }}/../kubeconfig" }
can be removed because it's already defined at the playbook level. 👍
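For illustration, a minimal sketch of setting KUBECONFIG once at the play level so individual roles and tasks no longer need their own environment: entry (the hosts group and role name are assumptions, not taken from the repository):

---
# Hypothetical playbook: the environment is inherited by every task in the play.
- hosts: kubernetes_master
  become: true
  environment:
    KUBECONFIG: "{{ vault_location }}/../kubeconfig"
  roles:
    - filebeat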
@rpudlowski93 any thoughts? 🤗
I've just added the kubeconfig env at the upgrade playbook level and removed the env from the role. Tested, works fine. I also added some small changes to optimize the code.
Resolved review threads on:
core/src/epicli/data/common/ansible/playbooks/roles/upgrade/tasks/filebeat.yml (outdated)
.../epicli/data/common/ansible/playbooks/roles/filebeat/tasks/install-filebeat-as-daemonset.yml
core/src/epicli/data/common/ansible/playbooks/roles/upgrade/tasks/node-exporter.yml (outdated)
What about something like this? In this case it's not a big performance improvement, but it's a better way to do these things in general: it's faster and we get rid of useless log entries. 🤔
---
- hosts: localhost
  tasks:
    - set_fact:
        specification:
          helm_chart_name: asd2
        helm_ls:
          - {"name":"asd1","namespace":"default","revision":"20","updated":"2020-11-20 03:14:29.269115248 +0100 CET","status":"failed","chart":"asd-0.0.1","app_version":"0.0.1"}
          - {"name":"asd2","namespace":"default","revision":"20","updated":"2020-11-20 03:14:29.269115248 +0100 CET","status":"failed","chart":"asd-0.0.1","app_version":"0.0.1"}
    - set_fact:
        helm_release_exists: >-
          {{ _names | ternary(true, false) }}
      vars:
        _names: >-
          {{ helm_ls | map(attribute='name')
             | select('eq', specification.helm_chart_name)
             | list }}
    - debug: msg=OK
      when: helm_release_exists
Also, using is defined to check boolean values is dangerous! 🙀
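To illustrate the point, a minimal example (variable names are made up for this sketch) of why is defined is a poor guard for booleans: the test is true even when the value is false, so the guarded task still runs.

---
- hosts: localhost
  gather_facts: false
  tasks:
    - set_fact:
        helm_release_exists: false

    # Dangerous: the variable exists, so "is defined" evaluates to true
    # and this task runs although the release does not exist.
    - debug:
        msg: "runs anyway"
      when: helm_release_exists is defined

    # Correct: evaluates the boolean itself, so this task is skipped.
    - debug:
        msg: "runs only for an existing release"
      when: helm_release_exists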
LGTM 👍😍
| map(attribute='name')
| select('==', specification.helm_chart_name)
| list }}
block:
❤️
environment: { KUBECONFIG: "{{ vault_location }}/../kubeconfig" }
command: helm list --output json
register: helm_list
block:
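Putting the pieces from the diff together, a rough standalone sketch (task names, the uninstall step, and the exact variable layout are assumptions; the real role may differ) of how the release check could work once the JSON output of helm list is fed into the filter chain. KUBECONFIG is assumed to be provided at the playbook level, as discussed above.

---
- hosts: localhost
  gather_facts: false
  vars:
    specification:
      helm_chart_name: filebeat
  tasks:
    - name: List installed Helm releases
      command: helm list --output json
      register: helm_list
      changed_when: false   # read-only call, never report a change

    - name: Check whether the chart is already installed
      set_fact:
        helm_release_exists: >-
          {{ _names | ternary(true, false) }}
      vars:
        _names: >-
          {{ helm_list.stdout | from_json
             | map(attribute='name')
             | select('==', specification.helm_chart_name)
             | list }}

    - name: Uninstall the existing release before reinstalling in the custom namespace
      command: "helm uninstall {{ specification.helm_chart_name }}"
      when: helm_release_exists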
❤️
Branch updated: 4524f33 → f6fcc58 → 4b22a13 → 2ce2b4e
🐒
🥳
Installing node exporter and filebeat as daemonsets in custom namespaces (hitachienergy#1839)
* Installing daemonsets in custom namespaces
* Helm Chart name added to uninstall role
* Upgrade process added
* Including vars moved to the top of the task
* Added a stricter condition
* Obtaining helm releases in a different way
* Format improved
* Kubeconfig env moved to playbook level
* Changelog updated
Co-authored-by: Robert Pudłowski <[email protected]>
Fix related to bug: #1833