image: auto in istio-ingress/templates/injected-deployment.yaml #35789

Closed · tokiwong opened this issue Oct 28, 2021 · 15 comments

Labels
area/environments kind/docs lifecycle/automatically-closed lifecycle/stale

Comments

@tokiwong commented Oct 28, 2021

Bug Description

The injected istio-ingressgateway image attempts to pull from docker.io/library/auto:latest, which does not exist.

This is in contrast to the image defined in istio-ingress/templates/deployment.yaml.

Version

➜ istioctl version 
client version: 1.10.1
istiod version: 1.10.2-fips
istiod version: 1.11.4-fips
data plane version: 1.10.2-fips (35 proxies), 1.11.4-fips (1 proxies)

Additional Information

➜ docker pull docker.io/library/auto:latest
Error response from daemon: pull access denied for auto, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
@Monkeyanator (Contributor) commented Oct 28, 2021

The image: auto is a placeholder that gets replaced with the actual image when the gateway deployment is injected, so possibly the gateway is deployed in a namespace without injection enabled?
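For reference, injection for the gateway's namespace is enabled with a namespace label; a minimal sketch, where the namespace name istio-ingress is an assumption:

apiVersion: v1
kind: Namespace
metadata:
  name: istio-ingress           # hypothetical namespace name
  labels:
    istio-injection: enabled    # or istio.io/rev: <revision> for revision-based injection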

@tokiwong (Author) commented Oct 28, 2021

Thanks for the clarification! Is this documented anywhere? The mechanisms for gateway injection aren't as clear as they are for sidecars.

I'm working with a pipeline that doesn't allow pulling images from public sources (in this case, what it thinks is docker.io in the manifest, pre-injection). So if I were to replace image: auto in the template with

{{- if contains "/" .Values.global.proxy.image }}
          image: "{{ .Values.global.proxy.image }}"
{{- else }}
          image: "{{ .Values.global.hub }}/{{ .Values.global.proxy.image | default "proxyv2" }}:{{ .Values.global.tag }}"
{{- end }}
{{- if .Values.global.imagePullPolicy }}
          imagePullPolicy: {{ .Values.global.imagePullPolicy }}
{{- end }}

similar to the non-injected ingressgateway deployment, will injection still work as intended?

Followup question: where is the "actual image" configured in the case of gateway injection? I'm assuming it pulls the configuration from {{ .Values.global.proxy.image }} as well

To answer your question, this deployment was attempted in a namespace labeled istio.io/rev: 1-11-4.

@howardjohn (Member) commented

No, you should not make that change. You should configure global.proxy.image in the control plane installation.

https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#customizing-injection describes this a bit but I think we can make the doc more explicit
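To illustrate, the override lives in the control plane's values rather than the gateway chart; a minimal sketch assuming a private registry (registry name and tag are illustrative):

# values for the istiod (control plane) chart
global:
  hub: registry.example.com/istio   # private registry, an assumption
  tag: 1.11.4                       # control plane / proxy tag
  proxy:
    image: proxyv2                  # image name, appended to hub

With these set, the injector renders image: registry.example.com/istio/proxyv2:1.11.4 in place of image: auto, mirroring the hub/image:tag composition in the template quoted above.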

@tokiwong (Author) commented Nov 1, 2021

Thanks for the docs 👍 this clarifies things.

is there a recommended way to set our own placeholder that isn't auto? (for internal compliance reasons)

@howardjohn (Member) commented

Currently hardcoded:

pkg/kube/inject/webhook.go
652:    AutoImage = "auto"

@istio-policy-bot added the lifecycle/stale label Jan 31, 2022
@aishwaryaa021296 commented

What is the solution to this issue?

@istio-policy-bot commented

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-11-01. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot added the lifecycle/automatically-closed label Feb 15, 2022
@chrissng commented

@tokiwong a little late here, but if you haven't figured it out, it is defined in .Values.global.hub

@DPS0340 commented Sep 1, 2022

In my case (Istio Helm chart with ArgoCD), the istio-proxy container (with image auto) in the Istio gateway Deployment was not injected because the istio-sidecar-injector ConfigMap from istiod had not been synced properly.

It was not an istio-injection=enabled labeling problem; my istio-system namespace was labeled correctly.

I think the istio-base CRDs were not yet loaded at the moment the charts were synced.

So I had to sync the charts in order, istio-gateway after istiod. Using Self Healing with ArgoCD would make this smoother.

@blue-hope commented

In my case, the istio-proxy container was not injected even though istio-injection=enabled was set.
So I cleaned up istiod in my istio-system namespace and re-installed it, and got this error:

Error: INSTALLATION FAILED: Internal error occurred: failed calling webhook "validation.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/validate?timeout=10s": no endpoints available for service "istiod"

It seems to be a version issue with the validating webhook, so I removed it:

kubectl delete validatingwebhookconfiguration istiod-default-validator

Then I re-installed istiod with Helm, and injection works!
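For anyone hitting the same state: before deleting anything, you can list the registered validating webhooks first (a read-only check; exact names vary by install):

kubectl get validatingwebhookconfigurations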

@s7an-it commented May 11, 2023

@DPS0340, great, this one saved me. But how can it be approached in an automated manner? I guess sync waves are the answer, but I need to dig through the Helm examples for an idea.
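Edit: a sync wave is just an annotation on each resource, so presumably the ordering would look something like this (wave numbers are illustrative):

# istiod resources: sync first
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# istio-gateway resources: sync afterwards
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"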

@danielsiwiec commented

My team has had the same issue, which forced us to use sync waves (effectively multiple charts, installed in a particular order). After some deliberation, we went down a different path: creating a Job that deletes the istio-proxy pod, which immediately gets reprovisioned with the right image injected into it. Below is what it looks like. It would have been fantastic to have this somehow handled by Istio...

apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-proxy-restarter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-proxy-restarter
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete", "get", "list", "watch"] # watch is needed by the kubectl wait init container
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-proxy-restarter
subjects:
  - kind: ServiceAccount
    name: istio-proxy-restarter
roleRef:
  kind: Role
  name: istio-proxy-restarter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: restart-istio-proxy
spec:
  template:
    spec:
      serviceAccountName: istio-proxy-restarter
      containers:
        - name: kill-pod
          image: bitnami/kubectl:latest
          command:
            - kubectl
          args:
            - delete
            - pod
            - -l # pass the selector flag and its value as separate args
            - app={{ .Values.gateway.name }}
      initContainers:
        - name: wait-for-istio
          image: bitnami/kubectl:latest
          command: ["sh", "-c", "kubectl wait pods -l app=istiod --for condition=Ready"]
      restartPolicy: OnFailure

@julian-perge commented Jun 27, 2023

For anyone else having this issue but using Terraform to deploy the charts: you HAVE to make the gateway chart for istio-ingress depend on the base and istiod charts finishing their deployment before the gateway chart starts, or else you will have to find a really goofy workaround to get it to work.

@phuntsberger commented

Not sure whether this belongs in this discussion, but the comment at #35789 (comment) led me in the right direction for using GCP's Config Sync and its annotations for dependencies. Adding this:

config.kubernetes.io/depends-on: istio-system/deployments/[istiod-deployment-name]

To:

podAnnotations:
  prometheus.io/port: '15020'
  prometheus.io/scrape: 'true'
  prometheus.io/path: '/stats/prometheus'
  inject.istio.io/templates: 'gateway'
  sidecar.istio.io/inject: 'true'
  # @REMARK - Do not remove this annotation, istio discovery needs to be running before this starts.
  # https://github.com/istio/istio/issues/35789#issuecomment-1608520667
  config.kubernetes.io/depends-on: istio-system/deployments/istiod-1-20-0

in the latest Helm chart allows us to use Config Sync without errors. Not sure whether folks can use the same in their deployment methodology, but it is noteworthy to add here. Please feel free to direct me to a discussion!

@thequailman commented

@danielsiwiec your config helped a lot. I modified it with Helm hooks to clean up the resources after install.

Here is an "all-in-one" simple/starter Istio Helm chart that deploys base, istio-cni, istiod, and gateway:

Chart.yaml

apiVersion: v2 # dependencies in Chart.yaml require apiVersion v2
appVersion: 1.20.0
description: Helm chart for deploying Istio
name: istio
sources:
- https://github.com/istio/istio
version: 1.0.0
dependencies:
  - name: base
    repository: https://istio-release.storage.googleapis.com/charts
    version: 1.20.0
  - name: cni
    repository: https://istio-release.storage.googleapis.com/charts
    version: 1.20.0
  - name: istiod
    repository: https://istio-release.storage.googleapis.com/charts
    version: 1.20.0
  - name: gateway
    repository: https://istio-release.storage.googleapis.com/charts
    version: 1.20.0

values.yaml

gateway:
  name: istio
istiod:
  istio_cni:
    enabled: true

templates/job.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
  name: istio-proxy-restarter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
  name: istio-proxy-restarter
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete", "get", "list", "watch"] # watch is needed by the kubectl wait init container
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
  name: istio-proxy-restarter
subjects:
  - kind: ServiceAccount
    name: istio-proxy-restarter
roleRef:
  kind: Role
  name: istio-proxy-restarter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
  name: restart-istio-proxy
spec:
  template:
    spec:
      serviceAccountName: istio-proxy-restarter
      containers:
        - name: kill-pod
          image: bitnami/kubectl:latest
          command:
            - kubectl
          args:
            - delete
            - pod
            - -l # pass the selector flag and its value as separate args
            - app={{ .Values.gateway.name }}
      initContainers:
        - name: wait-for-istio
          image: bitnami/kubectl:latest
          command: ["sh", "-c", "kubectl wait pods -l app=istiod --for condition=Ready"]
      restartPolicy: OnFailure
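For completeness, installing this umbrella chart would look roughly like the following; the chart path and namespace are assumptions:

# fetch the four dependency charts declared in Chart.yaml
helm dependency update ./istio
# install everything in one release
helm install istio ./istio --namespace istio-system --create-namespace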
