
Support managing Deployment resource #3

Closed
hashibot opened this issue Jun 13, 2017 · 55 comments

Comments

@hashibot

This issue was originally opened by @dasch as hashicorp/terraform#13420. It was migrated here as part of the provider split. The original body of the issue is below.


Currently, I have to use somewhat of a hack in order to have Terraform create my Kubernetes deployments and services:

# A module that can create Kubernetes resources from YAML file descriptions.

variable "username" {
  description = "The Kubernetes username to use"
}

variable "password" {
  description = "The Kubernetes password to use"
}

variable "server" {
  description = "The address and port of the Kubernetes API server"
}

variable "configuration" {
  description = "The configuration that should be applied"
}

variable "cluster_ca_certificate" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.configuration}"
  }

  provisioner "local-exec" {
    command = "touch ${path.module}/kubeconfig"
  }

  provisioner "local-exec" {
    command = "echo '${var.cluster_ca_certificate}' > ${path.module}/ca.pem"
  }

  provisioner "local-exec" {
    command = "kubectl apply --kubeconfig=${path.module}/kubeconfig --server=${var.server} --certificate-authority=${path.module}/ca.pem --username=${var.username} --password=${var.password} -f - <<EOF\n${var.configuration}\nEOF"
  }
}

I use the above module when I need to create resources, e.g.:

module "kubernetes_nginx_deployment" {
  source        = "./kubernetes"
  server        = "${module.kubernetes_cluster.host}"
  username      = "${module.kubernetes_cluster.username}"
  password      = "${module.kubernetes_cluster.password}"
  cluster_ca_certificate      = "${module.kubernetes_cluster.cluster_ca_certificate}"
  configuration = "${file("kubernetes/nginx-deployment.yaml")}"
}

This is of course far from perfect: it doesn't support modifying or destroying the resources and is generally brittle.

It would be great if there were either first-class support for Deployment and Service resources or generic support for arbitrary Kubernetes resources through YAML or JSON definitions.

@holoGDM

holoGDM commented Jun 28, 2017

There is Service support (link), but there is no Deployment support. It would be nice to configure my whole environment from Terraform, not only part of it. Can you please add it?

@radeksimko
Member

As mentioned in the original linked issue and elsewhere, there are no plans for supporting alpha or beta resources, which is the case for Deployment.

I'm happy to revisit this issue once the resource reaches v1 (stable).

Thanks for understanding.

@radeksimko changed the title from "provider/kubernetes: Support managing Deployment & Service resources" to "Support managing Deployment resource" Jun 28, 2017
@roidelapluie

I think it is time to change that policy. Could we have a beta version of this provider that contains Deployments?

@roidelapluie

(I mean, now that providers are split out in 0.10.)

@radeksimko
Member

@roidelapluie The reasons for not supporting alpha/beta resources remain the same even after the provider split. The problem wasn't/isn't the codebase or its versioning. It's the API versioning and the promises (or lack thereof) attached to those versions.

TL;DR these reasons are IMO still valid: #1 (comment)

Unfortunately we do not have a good mechanism to deal with versioned APIs in Terraform's core yet. We have discussed it briefly in the team and it is something we want to support eventually, but it's unlikely we'll get to it any time soon.

If you're willing to deal with the problems mentioned in my comment and keen on supporting (potentially) unstable APIs, feel free to fork this provider.

Concrete suggestions on how to deal with versioned APIs in the schema across providers and resources are welcome over in https://github.com/hashicorp/terraform/issues/new

Thanks.

@roidelapluie

Every single tutorial and training uses Deployments.

@mingfang

mingfang commented Jul 26, 2017

I added Deployment support in my fork.
https://github.com/mingfang/terraform-provider-kubernetes/commit/50a308612f71ef5ca042cdf901c0d2d4154dd369

Update: I created a completely new Kubernetes provider that uses dynamic discovery to support all the latest features. More info here: https://github.com/mingfang/terraform-provider-k8s

@roidelapluie

@mingfang You are awesome.

@owenthereal

Any updates on this?

@frosenberg

frosenberg commented Sep 6, 2017

@mingfang could you open a PR for this so there is a chance it gets into master?

@mingfang

mingfang commented Sep 6, 2017

@frosenberg The problem is that they won't accept any PRs that implement beta features.

@roidelapluie

beta features that everyone needs/uses

@frosenberg

frosenberg commented Sep 6, 2017 via email

@luispabon

I agree with the above. I understand the reasons not to support these, but Deployments, CronJobs, etc. are features of Kubernetes that absolutely everyone uses on a daily basis. There's little incentive to use a provider that we have to constantly work around.

BC breaks are what semver is for.

@podollb

podollb commented Sep 20, 2017

I also agree, since the majority of people using k8s are using Deployment (and many using CronJob), it would be extremely helpful if TF had support.

@henning

henning commented Oct 28, 2017

I also came here because I wanted to create a deployment using Terraform...
Following the discussion, I can somewhat understand that the Terraform team doesn't want to go to great lengths to support something the K8s team declares beta.

I propose, since they are so useful for all of us who use and rely on them so heavily, that we check why the K8s team still considers them beta and what we can do to help get them declared stable.

@jonmoter

jonmoter commented Nov 8, 2017

I encourage you to revisit this policy. Beta objects like Deployments and DaemonSets are used in every production-grade Kubernetes cluster that I've come across. If they're not supported in Terraform, it means I can't use Terraform to manage my Kubernetes resources.

I encourage you to think of terms like Alpha or Beta in the context of the particular software project. Terraform itself hasn't reached a 1.0 release, but that's because of the bar HashiCorp sets for what 1.0 means. I think the Kubernetes project has a pretty rigorous level of quality for beta features.

I understand there is risk in supporting features that could have breaking changes. But for me, Deployment support is MVP functionality of this provider, given the current reality of how Kubernetes works.

@zimbatm
Contributor

zimbatm commented Nov 9, 2017

Just to insist a little bit more: I think the policy of only maintaining stable APIs made sense while all the plugins were released along with the Terraform source code. In that case, hot-fixing a broken API meant cutting a whole new Terraform release and impacting people who might not even use that particular provider.

Now that the plugins have been extracted from the Terraform codebase, it might make sense to revisit that policy and make it more flexible per provider.

@VJftw

VJftw commented Dec 16, 2017

http://blog.kubernetes.io/2017/12/kubernetes-19-workloads-expanded-ecosystem.html

Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback. SIG Apps has applied the lessons from this process to all four resource kinds over the last several release cycles, enabling DaemonSet and StatefulSet to join this graduation. The v1 (GA) designation indicates production hardening and readiness, and comes with the guarantee of long-term backwards compatibility.

@debovema

debovema commented Jan 5, 2018

Hi @radeksimko,

Does HashiCorp have a roadmap to integrate these new v1 objects?

Best regards

@synhershko

Thanks @sl1pm4t, I will try this out this week as well!

@radeksimko it would be nice to hear HashiCorp's plan for keeping this official provider alive and up to speed with the Kubernetes API.

@rcrogers

For anyone else who's looking, @sl1pm4t's fork also has an example of how to use the new resources:
https://github.com/sl1pm4t/terraform-provider-kubernetes/blob/e8fc10cd13c6bae1dfe1ecd87d785973b242985d/_examples/ingress/main.tf

@trthomps

trthomps commented Apr 23, 2018

At this point I can only assume the reason beta/alpha features are not being added is that HashiCorp doesn't like that Kubernetes competes with Consul/Nomad and is purposely gimping the product. The Google provider adds beta features within days of release and has no such rule, since having a rule like this with Google products would be absurd: they are notorious for leaving things in "beta" long after people are using said product/feature in production (Gmail, anyone?).

@borsboom

Deployments aren't even beta anymore. They're now in the apps/v1 API version.

@eversC

eversC commented May 1, 2018

@radeksimko will the official provider be updated soon, given that the k8s Deployment resource is now out of beta?

@iorlas

iorlas commented May 3, 2018

We are moving our whole infrastructure to K8s now, using Terraform. It is a shame we have to stop or use workarounds. Are Deployments and Services on the roadmap at least? We will probably end up managing K8s with different software, but we don't really want to.

@stigok

stigok commented May 3, 2018

This ship isn't moving. Thankfully I'm having success with sl1pm4t's fork: https://github.com/sl1pm4t/terraform-provider-kubernetes

@stefanthorpe

@radeksimko Just under a year ago you mentioned that you would revisit this once Deployment is out of beta. Well, it is, and there are many people waiting for it.
Could we get some kind of official response on this topic?

@ktham

ktham commented Jul 2, 2018

@radeksimko our team is looking to leverage Terraform for Kubernetes; are you/the team planning to maintain the Kubernetes provider?

@hafizullah

I badly need this feature; otherwise I will have to look for alternative solutions. :(

@tsadoklevi

tsadoklevi commented Jul 20, 2018

(EDIT by HashiCorp: we've edited some of the wording below which we felt was not in accordance with our community code of conduct. While the words have been edited, the meaning of the response we intend to keep unchanged.)

HashiCorp, you are probably well aware of this issue. It seems to me that you don't care. The k8s community is already using Deployments, Ingress, etc., and it seems that despite a lot of talk there is no progress on this issue.

Terraform is great but you are making people mistrust you and hence mistrust the "back" of Terraform.

Please announce your policy regarding k8s provider: are you going to fully support it or just let it die slowly?

@zimbatm
Contributor

zimbatm commented Jul 20, 2018

@tsadoklevi there is no need to be rude

That being said, why not accept more maintainers to the repo? There are active contributors like @sl1pm4t who could help. My impression was that splitting the providers out of the terraform repo was exactly to allow delegating control more easily. Maybe it's time to take advantage of this.

@zimbatm
Contributor

zimbatm commented Jul 20, 2018

Might be relevant: https://twitter.com/zimbatm/status/1020365345004105729

@paultyng
Contributor

@tsadoklevi Your frustrations are well warranted and understood. I promise we're working hard to improve the Kubernetes provider and will outline exactly what we're doing in this post.

Before responding to your concerns: while your point is fair, your tone is not. Whether it is directed at us as a company or any other member of the community, we expect kind discourse. We accept criticism and are happy to respond, but criticism can be delivered constructively without expletives and what may feel like attacks. Because you do raise a fair point, we've filtered your comment and noted that we filtered it (we would never do so secretly). Thank you for raising your concerns.

The Kubernetes provider has probably been the single biggest point of focus/discussion (non-technically) over the past month. There is a lot of pressure both internally at HashiCorp and externally to improve this quickly. We've already created an improvement plan and roadmap to do so, and are currently looking for developers to work with us to enable it: #178

I want to be absolutely clear that we are disappointed and sorry to the community for the state of this provider. It is important to us, and if we could break down the hours spent over the past couple months, you'd see it's been something we've spent a disproportionately high amount of time working on. We aren't lying or being deceptive: we care about K8S, we care about this provider, and we want to improve it as quickly as possible.

We're always open to bringing on open source core committers (and those committers outnumber full time provider engineers at HashiCorp by more than 10x). There is a challenge here in that OSS committers are usually working in their free time, and it'd be unfair of us to expect any more. So for a healthy committer environment, they must be supported by full time staff. We're more than happy to merge pull requests, but please understand that hitting the merge button is the easiest thing we can do; the multi-year maintenance that comes with it (bugs, paying customer support, feature requests, etc.) is the real cost of hitting "merge," and the original PR submitter usually doesn't stick around. Still, we're happy to do that, as long as we have the confidence we can support it. And currently, we need to hire a full time engineer to help us here.

There are a number of forks of this provider and we'd love to work with those owners to bring them in. A lot of the fork owners want this, too. We've reached out to a few of the maintainers (as well as contributors) and asked if they'd be interested in working with HashiCorp on this full time. We got good responses, but due to a number of legal difficulties (see: https://news.ycombinator.com/item?id=17022563) we're blocked. We're at the point though where we're looking to contract these individuals in the interim.

I think we were more optimistic going into this (started a few months ago) that we'd find an FTE quite quickly. That hasn't turned out to be the case and we probably should've engaged community efforts first and pushed some of our own team to substitute for a bit. The latter is easier said than done, since they're all working full time on equally important providers with deep roadmaps.

Note that the Terraform community has been through this pain before. Take the Azure provider as an example. It languished and barely worked 18 months ago. We had a similarly upset community and customers, and our reasoning was much the same as the above. We simultaneously engaged Microsoft, who have officially partnered with us, brought on core committers, and hired an FTE, and very quickly it has become one of our best providers: updated frequently, with broad feature coverage, etc. We're filling the same holes now with this provider, but it's not easy.

That is the full picture of what's going on. I hope you can understand the situation that we're in.

That was a lot of talk, so what's the action?

  • We're hiring an FTE to help us with this provider: Interested in helping HashiCorp maintain this provider full time? #178
  • We're talking to downstream fork maintainers and contributors about helping us (paid).
  • We've interviewed a number of types of users and formed a clear draft of what we'd like to achieve with this provider in the short term. Note: "short term" really starts when we have the help to enable it.
  • We are actively looking for community help. This is more recent; we'll support these members in the short term by stretching a bit of our internal team that isn't focused on K8S.
  • We will review PRs that come in naturally, but understand that there isn't a dedicated person looking at these currently. Still, in the interim we are looking for ways to allocate time for our other engineers with K8S experience to help out.

We'll try to do better to keep this community up to date via issues and so on going forward.

@Miyurz

Miyurz commented Jul 25, 2018

@paultyng Thank you for making the community aware of the progress. Yes, we love Terraform and hence want to see Terraform providers for Deployment and other k8s resources. I understand the delay, as it's hard to catch up with the aggressive k8s release cadence.

Is there any workaround that you or anyone else could suggest (local-exec, etc.) so that I can continue to use TF and swap in the provider once it's available?

@Phylu

Phylu commented Jul 25, 2018

@Miyurz
My current workaround for running deployments looks like this:

provisioner "local-exec" {
    command = "echo '${data.template_file.deployment.rendered}' > /tmp/deployment.yaml && kubectl apply --kubeconfig=$HOME/.kube/config -f /tmp/deployment.yaml"
  }

Here I have a template YAML file that contains the deployment description and is filled in with variables from the Terraform code:

data "template_file" "deployment" {
  template = "${file("${path.module}/deployment.yaml")}"

  vars {
    NAMESPACE                     = "${var.namespace}"
    DB_HOST                       = "${var.db_host}"
    DB_PORT                       = "${var.db_port}"
  }
}
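
For illustration, the deployment.yaml template that this data source renders might look roughly like the sketch below. Only the NAMESPACE, DB_HOST, and DB_PORT placeholders come from the snippet above; every other name in the manifest is hypothetical.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
  namespace: ${NAMESPACE}      # substituted by template_file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest # hypothetical image
          env:
            - name: DB_HOST
              value: ${DB_HOST}
            - name: DB_PORT
              value: "${DB_PORT}"   # quoted so the numeric port stays a string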

@paultyng
Contributor

paultyng commented Jul 26, 2018

I have done something similar, essentially having a kubectl provisioner that ran my templated YAML files; the only difference being that it was remote-exec in the cluster, to deal with authentication.
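
A rough sketch of that remote-exec variant, assuming Terraform 0.11-style syntax, a null_resource, and an SSH-reachable host that already holds cluster credentials (the connection details and paths here are hypothetical):

resource "null_resource" "kubectl_apply" {
  # Re-run whenever the rendered manifest changes.
  triggers {
    manifest = "${data.template_file.deployment.rendered}"
  }

  # Hypothetical SSH connection to a host inside the cluster's network.
  connection {
    type = "ssh"
    host = "${var.bastion_host}"
    user = "ubuntu"
  }

  # Ship the rendered manifest to the host, then apply it there,
  # so authentication is handled by the host's own kubeconfig.
  provisioner "file" {
    content     = "${data.template_file.deployment.rendered}"
    destination = "/tmp/deployment.yaml"
  }

  provisioner "remote-exec" {
    inline = [
      "kubectl apply -f /tmp/deployment.yaml",
    ]
  }
}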

@borsboom

@paultyng If you're still looking for resources, is this something you'd consider hiring an outside contractor to work on and maintain? The company I work for uses both Terraform and Kubernetes heavily, and we've considered jumping into this implementation but have been reluctant due to the amount of future maintenance likely required (we have to choose our battles, and we don't like to just throw new code over the fence and then expect others to maintain it).

We'd certainly much rather be using TF than Helm, but Helm is "good enough" that the itch hasn't been quite strong enough to decide to take on scratching it "for free." But we'd sure be open to some kind of partnership to help this get done and maintained in the future.

@paultyng
Contributor

@borsboom our long term goal is to have a full time employee (or more) supporting this, but in the near term we would consider contracting to help out the community and keep it moving. If you are still interested, feel free to email me ([email protected]).

@NickLarsenNZ

Any updates on this? It seems to be dragging along too slowly and as a result the provider is way behind.
The local-exec fallback of course works, but then the state is not maintained.

@bitbrain

@NickLarsenNZ it seems so. #101 was merged 2 days ago. In a comment, @Starefossen states:

Now we only need to get #73 merged and then we can have a party 🎉 Keep up the good work everyone ❤️

@alexsomesan
Member

@bitbrain #73 was being refreshed for the new client libraries. I will merge it soon.

@alexsomesan
Member

Deployments are in.

@zikphil

zikphil commented Oct 19, 2018

Is there an up-to-date example on how to use this resource somewhere? The one provided at https://github.com/sl1pm4t/terraform-provider-kubernetes/blob/e8fc10cd13c6bae1dfe1ecd87d785973b242985d/_examples/ingress/main.tf does not seem to work.

@smatyas

smatyas commented Nov 6, 2018

For the record, the example is now in the official docs here: https://www.terraform.io/docs/providers/kubernetes/r/deployment.html

It was added in #194
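
For anyone landing here later, a minimal configuration in the spirit of that docs page looks roughly like the sketch below. Attribute shapes (notably selector) changed across provider and Terraform versions, so treat this as a sketch and check the documentation for the version you run.

resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      app = "MyExampleApp"
    }
  }

  spec {
    replicas = 2

    # The selector must match the pod template's labels.
    selector {
      match_labels = {
        app = "MyExampleApp"
      }
    }

    template {
      metadata {
        labels = {
          app = "MyExampleApp"
        }
      }

      spec {
        container {
          name  = "example"
          image = "nginx:1.7.8"
        }
      }
    }
  }
}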

@ghost locked and limited conversation to collaborators Apr 21, 2020