
Kubespray should avoid maintaining apps definitions and should use helm charts #3181

Closed
desaintmartin opened this issue Aug 24, 2018 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@desaintmartin
Contributor

Most of the applications defined in kubespray are maintained within the official helm chart repository.

Helm charts look like they are quickly becoming the de-facto canonical way of representing apps in Kubernetes.

In order not to maintain the same things twice, it would be wise to either:

  • Use helm to deploy what we need (efk, netchecker, cert-manager, etc.)
  • Use helm to generate definition files and apply them manually
  • Tell users to use helm themselves, with a recommended list of charts
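The second option above (render with helm, apply manually) keeps the apiserver as the only write path, with no tiller in the cluster. A minimal sketch, assuming Helm 3's `helm template` syntax and an illustrative cert-manager chart from the jetstack repository; the release name, namespace, and file name here are examples, not something the thread specifies:

```shell
# Render the chart's manifests entirely client-side (no tiller involved).
# Chart, release, and namespace names are illustrative.
helm repo add jetstack https://charts.jetstack.io
helm template cert-manager jetstack/cert-manager \
  --namespace cert-manager > cert-manager-rendered.yaml

# The rendered file is ordinary YAML: reviewable, diffable, and applied
# through the apiserver like any other manifest.
kubectl apply -f cert-manager-rendered.yaml
```

Because the output is plain YAML, it can also be committed to version control and reviewed like any other manifest change.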

It should be easy to improve the weaker charts or create the missing ones (such as netchecker).

What do you think? Where is the limit between "cluster management" and "application management" (see #2658)?

I've heard some of you are against using helm but would consider using it only as a templating engine; can you tell me why?

@ant31
Contributor

ant31 commented Aug 24, 2018

Helm is not the de-facto way of representing apps in Kubernetes.
The tiller server adds another gateway component for managing resources, whereas the apiserver is the 'canonical' way to interact with Kubernetes resources.

The limit for cluster management is to deliver a fully functional cluster that lets users install any application manager they like. After the Kubespray playbook has run, a user can easily chain their own application playbook.
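Chaining your own playbook after Kubespray amounts to two consecutive `ansible-playbook` runs against the same inventory. A sketch, assuming Kubespray's standard `cluster.yml` entry point and a hypothetical user-owned `my-apps.yml` playbook (the inventory path is also illustrative):

```shell
# First converge the cluster itself with Kubespray's playbook...
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml

# ...then run your own application playbook against the same inventory.
# my-apps.yml is a hypothetical user-owned playbook, not part of Kubespray.
ansible-playbook -i inventory/mycluster/hosts.yaml my-apps.yml
```

This keeps application management entirely outside Kubespray: the second playbook can install helm charts, apply raw manifests, or do anything else, without Kubespray having to maintain those definitions.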

I'd personally remove 'efk' from the applications maintained in Kubespray. The others are ~core components of an operational, production-ready cluster.

@desaintmartin
Contributor Author

I've seen some improvements in the different ways of deploying an EFK stack within k8s lately; once they become reliable enough, it may be a good idea to remove the efk definition from Kubespray, as we've seen it is not that easy to maintain.

I don't want to start an "app manager" war, but helm is by far the most popular tool for managing application lifecycles within Kubernetes (which, I know, adds a layer of complexity, but tiller is on its way out anyway). That is not my point, though. My point is: the same work is being done several times, so it may be wise to factor it out and/or delegate it (i.e. remove it from Kubespray to focus on the core, and point users to all the options available to them).

In any case, this is only a suggestion, not an attack.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 11, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 11, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
