
kubernetes_manifest - Error: Failed to determine GroupVersionResource for manifest - cannot select exact GV from REST mapper #1894

Closed
MrCoffey opened this issue Nov 10, 2022 · 6 comments

MrCoffey commented Nov 10, 2022

terraform --version
Terraform v1.2.0
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.45.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.3.2
+ provider registry.terraform.io/hashicorp/kubernetes-alpha v0.5.0
+ provider registry.terraform.io/hashicorp/null v3.1.0

Affected Resource(s)

kubernetes_manifest

Terraform Configuration Files

resource "kubernetes_namespace" "mwaa" {
  metadata {
    annotations = {
      name = "mwaa"
    }
    name = "mwaa"
  }
}

resource "kubernetes_manifest" "mwaa_role" {

  manifest = {
    "apiVersion" = "rbac.authorization.k8s.io/v1"
    "kind"       = "Role"
    "metadata" = {
      "name"      = "mwaa-role"
      "namespace" = "mwaa"
    }
    "rules" = [
      {
        "apiGroups" = [
          "",
          "apps",
          "batch",
          "extensions",
        ]
        "resources" = [
          "jobs",
          "pods",
          "pods/attach",
          "pods/exec",
          "pods/log",
          "pods/portforward",
          "secrets",
          "services",
        ]
        "verbs" = [
          "create",
          "delete",
          "describe",
          "get",
          "list",
          "patch",
          "update",
        ]
      },
    ]
  }

  depends_on = [
    kubernetes_namespace.mwaa
  ]
}

Debug Output

TF_LOG=TRACE

[ERROR] vertex "kubernetes_manifest.mwaa_role_binding" error: Failed to determine GroupVersionResource for manifest
[TRACE] vertex "kubernetes_manifest.mwaa_role_binding": visit complete, with errors
[DEBUG] provider.terraform-provider-kubernetes-alpha_v0.5.0_x5:     "get"
[ERROR] vertex "kubernetes_manifest.mwaa_role (expand)" error: Failed to determine GroupVersionResource for manifest
[TRACE] vertex "kubernetes_manifest.mwaa_role_binding (expand)": dynamic subgraph encountered errors: Failed to determine GroupVersionResource for manifest
[TRACE] vertex "kubernetes_manifest.mwaa_role (expand)": visit complete, with errors
[ERROR] vertex "kubernetes_manifest.mwaa_role_binding (expand)" error: Failed to determine GroupVersionResource for manifest
[DEBUG] provider.terraform-provider-kubernetes-alpha_v0.5.0_x5:    ]
[TRACE] vertex "kubernetes_manifest.mwaa_role_binding (expand)": visit complete, with errors

Panic Output

Steps to Reproduce

  1. terraform apply

Expected Behavior

Kubernetes object should be created.

Actual Behavior

Provider fails with the following error:

│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with kubernetes_manifest.mwaa_role_binding,
│   on main.tf line 77, in resource "kubernetes_manifest" "mwaa_role_binding":
│   77: resource "kubernetes_manifest" "mwaa_role_binding" {
│ 
│ cannot select exact GV from REST mapper

The API versions are available on the cluster:

kubectl get apiservices  | grep authorization

v1.authorization.k8s.io                Local     True        2d2h
v1.rbac.authorization.k8s.io           Local     True        2d2h

Important Factoids

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
MrCoffey added the bug label Nov 10, 2022
github-actions bot removed the bug label Nov 10, 2022
BBBmau (Contributor) commented Nov 16, 2022

Hello! Thank you for opening this issue. Could you explain why the kubernetes-alpha provider is present in your provider list? Could you also try this again with the latest version of the Kubernetes provider?

MrCoffey (Author) commented

Hi, thanks for taking a look.

At the time we were using both Kubernetes providers. It looked like duplication, so we removed kubernetes-alpha and kept only the kubernetes provider v2.3.2.

Neither of them worked in the end, even after deleting the cache. We ended up replacing the kubernetes_manifest resource with kubectl_manifest to resolve our issue.
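
For reference, a minimal sketch of the same Role expressed with the kubectl_manifest resource from the gavinbunney/kubectl provider; the provider version constraint and the inlined YAML are reconstructed from the manifest above as an illustration, not configuration taken from this issue:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}

resource "kubectl_manifest" "mwaa_role" {
  # yaml_body is sent to the cluster at apply time, so the
  # rbac.authorization.k8s.io group does not need to be resolvable
  # while Terraform is planning.
  yaml_body = <<-YAML
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: mwaa-role
      namespace: mwaa
    rules:
      - apiGroups: ["", "apps", "batch", "extensions"]
        resources: ["jobs", "pods", "pods/attach", "pods/exec", "pods/log", "pods/portforward", "secrets", "services"]
        verbs: ["create", "delete", "describe", "get", "list", "patch", "update"]
  YAML

  depends_on = [kubernetes_namespace.mwaa]
}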

arybolovlev (Contributor) commented

Hi @MrCoffey,

v2.3.2 is quite old. Please try the latest version v2.16.0.

Thanks.
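
A minimal sketch of pinning the provider to that release in required_providers; the exact constraint shown is illustrative, not taken from this issue:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.16.0"
    }
  }
}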

samiamoura commented

Hi @arybolovlev, I am facing the same issue with v2.16.0 and the Traefik application.
The problem is that when the CRD does not yet exist in the Kubernetes cluster, the Terraform Kubernetes provider fails because it tries to read the relevant CRD before creating the CR, despite the depends_on field.
The correct behavior would be to first create the Traefik Helm release (and its CRDs) and then read and create the CR.

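For context, kubernetes_manifest resolves each manifest's GroupVersionResource against the cluster's API during planning, so a CR whose CRD is only installed in the same run cannot be planned. A minimal sketch of one common workaround, splitting the apply into two steps; the Traefik chart values, the IngressRoute contents, and the traefik.io/v1alpha1 apiVersion are illustrative assumptions, not taken from this issue:

resource "helm_release" "traefik" {
  name             = "traefik"
  repository       = "https://traefik.github.io/charts"
  chart            = "traefik"
  namespace        = "traefik"
  create_namespace = true
}

# The IngressRoute CRD must already exist before this resource can be planned,
# so apply the release first and the rest of the configuration afterwards, e.g.:
#   terraform apply -target=helm_release.traefik
#   terraform apply
resource "kubernetes_manifest" "dashboard_route" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "IngressRoute"
    metadata = {
      name      = "dashboard"
      namespace = "traefik"
    }
    spec = {
      entryPoints = ["web"]
      routes = [
        {
          match = "PathPrefix(`/dashboard`)"
          kind  = "Rule"
          services = [
            { name = "traefik", port = 9000 }
          ]
        }
      ]
    }
  }

  depends_on = [helm_release.traefik]
}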


github-actions bot commented Dec 2, 2023

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

github-actions bot added the stale label Dec 2, 2023
github-actions bot closed this as not planned (stale) Jan 2, 2024
Monachawla1712 commented

(Quoting @MrCoffey's earlier reply about switching from kubernetes_manifest to kubectl_manifest.)

I want to know: when you used the kubectl_manifest resource, did it work properly or not?

github-actions bot removed the stale label May 30, 2024