
[FEATURE REQUEST] epicli preflight checks #2821

Closed · 3 of 13 tasks
romsok24 opened this issue Dec 28, 2021 · 2 comments
Comments

romsok24 (Contributor) commented Dec 28, 2021

Is your feature request related to a problem? Please describe.

When we run the epicli upgrade procedure on an existing Epiphany installation with a k8s app enabled (i.e. node-exporter in daemonset mode), the upgrade fails, stating that no clusters/build/test-node-expr/kubeconfig file was found.

Describe the solution you'd like
Perhaps we could implement an additional preflight check to ensure the kubeconfig file has been provided by the user before starting the upgrade. We could even copy the needed kubeconfig from the K8s master, since we have an SSH key to this machine (to be decided).
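A minimal sketch of what such a preflight check could look like, assuming a Python helper inside epicli; the path layout follows the error above, and all names are illustrative rather than actual epicli code:

```python
# Hypothetical preflight helper; illustrative only.
from pathlib import Path


def check_kubeconfig_exists(build_dir: str, cluster_name: str) -> None:
    """Fail fast when the kubeconfig required by k8s apps is missing."""
    # e.g. clusters/build/test-node-expr/kubeconfig
    kubeconfig = Path(build_dir) / cluster_name / "kubeconfig"
    if not kubeconfig.is_file():
        raise FileNotFoundError(
            f"Preflight check failed: {kubeconfig} not found. "
            "Provide the kubeconfig (or copy it from the K8s control plane) before upgrading."
        )
```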

Additional checks to implement (sketched below):

  • The node-exporter chart is not installed more than once. This would address the questions raised here.
  • k8s_as_cloud_service is set to true only for managed K8s, or the node-exporter feature mapping is disabled when it is set to true. Otherwise there is an error:
    level=error ts=2021-12-28T10:51:54.086Z caller=node_exporter.go:194 err="listen tcp 0.0.0.0:9100: bind: address already in use"
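A rough sketch of how these two checks might be implemented, assuming helm is available on the machine running epicli; release names and config keys are assumptions, not the actual epicli schema:

```python
# Hypothetical preflight checks; illustrative only.
import json
import subprocess


def node_exporter_releases() -> list:
    """List Helm releases whose chart name contains 'node-exporter'."""
    out = subprocess.run(
        ["helm", "list", "--all-namespaces", "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [r for r in json.loads(out) if "node-exporter" in r.get("chart", "")]


def run_preflight_checks(config: dict) -> None:
    releases = node_exporter_releases()
    if len(releases) > 1:
        raise RuntimeError(
            f"node-exporter chart is installed {len(releases)} times; expected at most one release"
        )

    # Assumed config keys: flag the combination that leads to
    # "listen tcp 0.0.0.0:9100: bind: address already in use".
    if config.get("k8s_as_cloud_service") and config.get("node_exporter_feature_enabled"):
        raise RuntimeError(
            "k8s_as_cloud_service is true while the node-exporter feature mapping is still enabled; "
            "disable one of them to avoid a port 9100 conflict"
        )
```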

Describe alternatives you've considered
No

Additional context
Workarounds:

  • for the first case it is possible to copy the existing kubeconfig file from the K8s control plane node to the machine that runs epicli (see the sketch after this list)
  • for the third it is possible to manually stop the conflicting node-exporter installed as a service:
    systemctl stop prometheus-node-exporter.service
    This needs to be done on all nodes; for a persistent change one should use systemctl disable ...
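For the first workaround, a small sketch of the copy step; the SSH user, key path, and remote kubeconfig location are placeholders to adapt to the actual cluster:

```python
# Illustrative only: copy the kubeconfig from the control plane node to the epicli machine.
import os
import subprocess

subprocess.run(
    [
        "scp",
        "-i", os.path.expanduser("~/.ssh/epiphany_key"),  # assumed SSH key used for cluster access
        "operations@control-plane-host:.kube/config",     # assumed remote kubeconfig location
        "clusters/build/test-node-expr/kubeconfig",       # destination path from the error message
    ],
    check=True,
)
```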

DoD checklist

  • Changelog updated
  • COMPONENTS.md updated / doesn't need to be updated
  • Schema updated / doesn't need to be updated
  • Feature has automated tests
  • Automated tests passed (QA pipelines)
    • apply
    • upgrade
  • Idempotency tested
  • Documentation added / updated / doesn't need to be updated
  • All conversations in PR resolved
  • Solution meets requirements and is done according to design doc
  • Usage compliant with license
  • Backport tasks created / doesn't need to be backported
atsikham changed the title from "[FEATURE REQUEST] Preflight check for kubeconfig existance" to "[FEATURE REQUEST] Node-exporter preflight checks" on Dec 28, 2021
rafzei (Contributor) commented Jan 10, 2022

Please double-check the rest of the k8s apps as well.

cicharka assigned cicharka and unassigned cicharka Jan 13, 2022
romsok24 changed the title from "[FEATURE REQUEST] Node-exporter preflight checks" to "[FEATURE REQUEST] epicli preflight checks" on Jan 13, 2022
atsikham assigned atsikham and unassigned atsikham Mar 4, 2022
atsikham (Contributor) commented Mar 16, 2022

  1. Main statement notes: taking that into account, I decided to add a validation that the kubeconfig (kubeconfig.local) file exists when k8s_as_cloud_service: true in upgrade mode (#3021).
  2. A preflight check was added (Preflight check of single node-exporter installation #3011) to verify there is a single Helm release for Node Exporter.
  3. To avoid executing different Node Exporter installation types (system service, daemonset) at the same time, task conditions were modified; no preflight checks are necessary in this case (Change conditions of node-exporter tasks execution #3021, illustrated below).
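As a hedged illustration of point 3 (not the actual Ansible conditions from #3021), the idea is that at most one installation type is executed for a given configuration, so the daemonset and the system service never both bind port 9100; the config keys below are assumptions:

```python
# Hypothetical decision helper; illustrative only.
from typing import Optional


def node_exporter_install_type(config: dict) -> Optional[str]:
    """Pick at most one node-exporter installation type."""
    if not config.get("node_exporter_feature_enabled", False):
        return None
    # With managed K8s (k8s_as_cloud_service: true) only the daemonset is deployed;
    # otherwise node-exporter is installed as a classic system service on the hosts.
    return "daemonset" if config.get("k8s_as_cloud_service", False) else "service"
```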

przemyslavic self-assigned this Mar 24, 2022
seriva closed this as completed Mar 31, 2022