
OpenShift route has serviceport and tls hardcoded, and tls is incorrect if endpoint is not secure #490

Open
rgordill opened this issue Apr 4, 2021 · 2 comments
Labels
bug Something isn't working openshift vault-server Area: operation and usage of vault server in k8s

Comments


rgordill commented Apr 4, 2021

Describe the bug
service-route.yaml hardcodes servicePort to 8200; it should derive the port from the service values, the same way service-ingress.yaml does. Additionally, passthrough is selected for TLS termination, but by default 8200 is not a secure port, so the same TLS handling as the ingress should apply.

To Reproduce
Steps to reproduce the behavior:

  1. Install chart in OpenShift with server.route.enabled=true and ui.enabled=true
  2. Try to access the url

When the route's tls stanza is deleted, the UI is accessible without any issues.
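The reproduction steps correspond to a minimal values override along these lines (field names taken from the full chart values quoted below in this issue; a sketch, not a tested configuration):

```yaml
# Minimal overrides to reproduce, per the steps above
global:
  openshift: true
  tlsDisable: true   # the Vault listener has tls_disable = 1, so 8200 serves plain HTTP
server:
  route:
    enabled: true
ui:
  enabled: true
```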

Expected behavior
Route configuration consistent with ingress.

Environment

  • Kubernetes version:
    • Distribution or cloud vendor (OpenShift, EKS, GKE, AKS, etc.): OpenShift 4.7
    • Other configuration options or runtime services (istio, etc.): N/A
  • vault-helm version: 0.10.0

Chart values:

csi:
  daemonSet:
    annotations: {}
    updateStrategy:
      maxUnavailable: ""
      type: RollingUpdate
  debug: false
  enabled: false
  image:
    pullPolicy: IfNotPresent
    repository: hashicorp/vault-csi-provider
    tag: 0.1.0
  livenessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  pod:
    annotations: {}
  readinessProbe:
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  resources: {}
  serviceAccount:
    annotations: {}
  volumeMounts: null
  volumes: null
global:
  enabled: true
  imagePullSecrets: []
  openshift: true
  psp:
    annotations: |
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName:  runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName:  runtime/default
    enable: false
  tlsDisable: true
injector:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}-agent-injector
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: webhook
          topologyKey: kubernetes.io/hostname
  agentImage:
    repository: vault
    tag: 1.7.0
  annotations: {}
  authPath: auth/kubernetes
  certs:
    caBundle: ""
    certName: tls.crt
    keyName: tls.key
    secretName: null
  enabled: true
  externalVaultAddr: ""
  extraEnvironmentVars: {}
  extraLabels: {}
  failurePolicy: Ignore
  image:
    pullPolicy: IfNotPresent
    repository: hashicorp/vault-k8s
    tag: 0.9.0
  leaderElector:
    enabled: true
    image:
      repository: gcr.io/google_containers/leader-elector
      tag: "0.4"
    ttl: 60s
  logFormat: standard
  logLevel: info
  metrics:
    enabled: false
  namespaceSelector: {}
  nodeSelector: null
  objectSelector: {}
  priorityClassName: ""
  replicas: 1
  resources: {}
  revokeOnShutdown: false
  service:
    annotations: {}
  tolerations: null
server:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname
  annotations: {}
  auditStorage:
    accessMode: ReadWriteOnce
    annotations: {}
    enabled: false
    mountPath: /vault/audit
    size: 10Gi
    storageClass: null
  authDelegator:
    enabled: true
  dataStorage:
    accessMode: ReadWriteOnce
    annotations: {}
    enabled: true
    mountPath: /vault/data
    size: 10Gi
    storageClass: null
  dev:
    devRootToken: root
    enabled: false
  extraArgs: ""
  extraContainers: null
  extraEnvironmentVars: {}
  extraInitContainers: null
  extraLabels: {}
  extraSecretEnvironmentVars: []
  extraVolumes: []
  ha:
    apiAddr: null
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      service_registration "kubernetes" {}

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev-246514"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}
    disruptionBudget:
      enabled: true
      maxUnavailable: null
    enabled: false
    raft:
      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}
      enabled: false
      setNodeId: false
    replicas: 3
  image:
    pullPolicy: IfNotPresent
    repository: vault
    tag: 1.7.0
  ingress:
    annotations: {}
    enabled: false
    hosts:
    - host: chart-example.local
      paths: []
    labels: {}
    tls: []
  livenessProbe:
    enabled: false
    failureThreshold: 2
    initialDelaySeconds: 60
    path: /v1/sys/health?standbyok=true
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  networkPolicy:
    egress: []
    enabled: false
  nodeSelector: null
  postStart: []
  preStopSleepSeconds: 5
  priorityClassName: ""
  readinessProbe:
    enabled: true
    failureThreshold: 2
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 3
  resources: {}
  route:
    annotations: {}
    enabled: true
    host: vault-vault.apps-crc.testing
    labels: {}
  service:
    annotations: {}
    enabled: true
    port: 8200
    targetPort: 8200
  serviceAccount:
    annotations: {}
    create: true
    name: ""
  shareProcessNamespace: false
  standalone:
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}
    enabled: '-'
  statefulSet:
    annotations: {}
  tolerations: null
  updateStrategyType: OnDelete
  volumeMounts: null
  volumes: null
ui:
  activeVaultPodOnly: false
  annotations: {}
  enabled: true
  externalPort: 8200
  publishNotReadyAddresses: true
  serviceNodePort: null
  serviceType: ClusterIP


@rgordill rgordill added the bug Something isn't working label Apr 4, 2021

slauger commented May 28, 2021

In general the Helm chart lacks the ability to customize the ingress/route for OpenShift; for example, it is currently not possible to switch to a reencrypt route.

I would also prefer to use an Ingress object, because it has some advantages over the Route object (e.g. an external TLS secret).

OpenShift 4 automatically creates Route objects when you create an Ingress object, but the chart currently ignores the Ingress configuration when global.openshift is true.

https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html#nw-ingress-creating-a-route-via-an-ingress_route-configuration

This would allow a configuration like the following:

global:
  openshift: true

server:
  ingress:
    enabled: true
    annotations:
      route.openshift.io/termination: "reencrypt" 
    hosts:
      - host: chart-example.local
        paths: []


slauger commented May 28, 2021

So maybe an override to enable the Ingress object would be a solution?

slauger added a commit to slauger/vault-helm that referenced this issue May 28, 2021
mbaldessari added a commit to mbaldessari/vault-helm that referenced this issue Jan 7, 2022
This covers, in part, issue hashicorp#490. Namely, the route currently hardcodes
the termination mode to "passthrough". Let's parametrize this so a user
can customize it. The use case is that we can deploy Vault in HTTP mode
and expose it via HTTPS externally; by choosing 'edge' as the termination
type, we let TLS be handled by the Route object.

The default remains 'passthrough' in order to not disrupt any existing
setup.
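Under that commit's approach, the termination type would be driven from values rather than hardcoded, e.g. (the tls field name under server.route is an assumption for illustration):

```yaml
server:
  route:
    enabled: true
    host: vault.example.com
    tls:
      termination: edge   # 'passthrough' remains the default; 'edge' terminates TLS at the router
```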
@tvoran tvoran added the vault-server Area: operation and usage of vault server in k8s label Jan 7, 2022