[cetic/nifi] oidc not working using v0.6.4 #215

Closed
luismarqu3s opened this issue Dec 30, 2021 · 1 comment
Labels
Need more info (This issue needs more information), stale (No recent activity, will be closed unless label removed)

Comments

luismarqu3s commented Dec 30, 2021

Describe the bug
The UI is not responding.

Version of Helm, Kubernetes and the Nifi chart:
Helm: v0.6.4
Kubernetes: v1.20.2
Nifi: 1.12.1

What happened:
The UI is not responding.

What you expected to happen:
The UI to respond.

How to reproduce it (as minimally and precisely as possible):
Configuration used in the values.yaml file:

```yaml
# Number of nifi nodes
replicaCount: 1

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.12.1"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistryKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"      
  serviceAccount:
    create: false
    #name: nifi

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config


properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true # set to false if ldap is enabled
  httpPort: null #8080 # set to null if ldap is enabled
  httpsPort: 9443 #null # set to 9443 if ldap is enabled
  webProxyHost:
  clusterPort: 6007
  clusterSecure: true #false # set to true if ldap is enabled
  needClientAuth: true
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include additional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: false
    host: ldap://<hostname>:<port>
    searchBase: CN=Users,DC=example,DC=com
    admin: cn=admin,dc=example,dc=be
    pass: password
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: true
    discoveryUrl: https://****.net/auth/realms/master/.well-known/openid-configuration
    clientId: nifi
    clientSecret: ****
    claimIdentifyingUser: [email protected]
    ## Request additional scopes, for example profile
    additionalScopes:

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 9443
  nodePort: 30236
  annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
    #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: 
    - ***.net
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be passed to the deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following variables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this to true creates a nifi-toolkit based CA server
# The CA server is used to generate the self-signed certificates required to set up a secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: true
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: true
  ## If the Zookeeper chart is disabled, a URL and port are required to connect
  url: ""
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # Additional labels for the ServiceMonitor
      labels: {}
```
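
As a sanity check for the "UI is not responding" symptom: the safetyValve comment above notes that the loopback interface is configured so `kubectl port-forward` works, and the liveness probe targets port 9443, so the UI can be probed directly, bypassing the LoadBalancer service and the ingress. A minimal sketch, assuming the pod name `helm-nifi-0` from the output further down (NiFi may reject the Host header unless it is whitelisted via webProxyHost, but any HTTP response at all confirms the listener is up):

```bash
# Forward the UI port of the first NiFi pod straight to localhost,
# bypassing the LoadBalancer service and the ingress
kubectl port-forward helm-nifi-0 9443:9443

# In a second terminal: -k because the chart's nifi-toolkit CA
# issues self-signed certificates
curl -vk https://localhost:9443/nifi/
```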

**Anything else we need to know**:
Trying to integrate NiFi with Keycloak.
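
One thing worth ruling out (a hedged suggestion, not something the chart checks for you): NiFi fetches the OIDC discovery document at startup, so confirm that the Keycloak URL configured in `auth.oidc.discoveryUrl` is reachable from inside the cluster. The URL below is only a placeholder for the masked value above:

```bash
# Placeholder for the masked auth.oidc.discoveryUrl value
DISCOVERY_URL="https://<keycloak-host>/auth/realms/master/.well-known/openid-configuration"

# Run from inside the NiFi pod; assumes curl is available in the
# apache/nifi image (otherwise run it from any pod that has curl)
kubectl exec helm-nifi-0 -c server -- curl -sk "$DISCOVERY_URL"
```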

Here is some information to help with troubleshooting:

* if relevant, provide your `values.yaml` or the changes made to the default one (after removing sensitive information)
* the output of the following commands:

Check if a pod is in error: 
```bash
kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
helm-nifi-0                     4/4     Running   1          14m
helm-nifi-ca-67475d9677-46fmq   1/1     Running   0          15m
helm-nifi-registry-0            1/1     Running   0          15m
helm-nifi-zookeeper-0           1/1     Running   0          15m
helm-nifi-zookeeper-1           1/1     Running   0          15m
helm-nifi-zookeeper-2           1/1     Running   0          15m
```

Inspect the pod, check the "Events" section at the end for anything suspicious.
nothing to report
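
Worth noting: the `kubectl get pod` output above shows `RESTARTS 1` for `helm-nifi-0`, so the server container has already crashed once. If that keeps happening, the logs of the previous instance can be retrieved with the standard `--previous` flag:

```bash
# Logs from the previous (crashed) run of the server container
kubectl logs helm-nifi-0 -c server --previous
```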

```bash
kubectl describe pod myrelease-nifi-0
Name:         helm-nifi-0
Namespace:    ncsp
Priority:     0
Node:         worker-1-0/172.16.0.43
Start Time:   Thu, 30 Dec 2021 16:55:04 +0000
Labels:       app=nifi
              chart=nifi-0.6.4
              controller-revision-hash=helm-nifi-67d78cf887
              heritage=Helm
              release=helm-nifi
              statefulset.kubernetes.io/pod-name=helm-nifi-0
Annotations:  cni.projectcalico.org/containerID: 5f16185ea2a923c2bd10c17d57fb3cd68fcca85a67e1ad21917eb285c626eff9
              cni.projectcalico.org/podIP: 10.254.14.25/32
              cni.projectcalico.org/podIPs: 10.254.14.25/32
              security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
Status:       Running
IP:           10.254.14.25
IPs:
  IP:           10.254.14.25
Controlled By:  StatefulSet/helm-nifi
Init Containers:
  zookeeper:
    Container ID:  docker://9c7f7c3a8946519d6a3c89f773eb4659b54d70180d75be52854872d06039b909
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo trying to contact helm-nifi-zookeeper 2181
      until nc -vzw 1 helm-nifi-zookeeper 2181; do
        echo "waiting for zookeeper..."
        sleep 2
      done
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 30 Dec 2021 16:55:06 +0000
      Finished:     Thu, 30 Dec 2021 16:55:18 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
  cert-request:
    Container ID:  docker://c17a9bb811a60bbd98e12de1df7e58ee64ca8f08e620fd641e9ac83ce5036d46
    Image:         apache/nifi-toolkit:1.12.1
    Image ID:      docker-pullable://apache/nifi-toolkit@sha256:755df01e8f154a0772a9f7489fd94eb937efa47292ff22c72a269146752616b1
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      CA_ADDRESS="helm-nifi-ca:9090"
      until echo "" | timeout -t 2 openssl s_client -connect "${CA_ADDRESS}"; do
        # Checking if ca server using nifi-toolkit is up
        echo "Waiting for CA to be available at ${CA_ADDRESS}"
        sleep 2
      done;
      cd /data/config-data
      rm -rf certs
      mkdir certs
      cd certs
      
      # Generate certificate for server with webProxyHost or service name as alternate names to access nifi web ui
      ${NIFI_TOOLKIT_HOME}/bin/tls-toolkit.sh client \
        -c "helm-nifi-ca" \
        -t sixteenCharacters \
        --subjectAlternativeNames helm-nifi.ncsp.svc \
        -D "CN=$(hostname -f), OU=NIFI" \
        -p 9090
      
      # Generate client certificate for browser with webProxyHost or service name as alternate names to access nifi web ui
      mkdir -p /data/config-data/certs/admin
      cd /data/config-data/certs/admin
      
      ${NIFI_TOOLKIT_HOME}/bin/tls-toolkit.sh client \
        -c "helm-nifi-ca" \
        -t sixteenCharacters \
        --subjectAlternativeNames helm-nifi.ncsp.svc \
        -p 9090 \
        -D "CN=admin, OU=NIFI" \
        -T PKCS12
      
      export PASS=$(jq -r .keyStorePassword config.json)
      
      openssl pkcs12 -in "keystore.pkcs12" -out "key.pem" -nocerts -nodes -password "env:PASS"
      openssl pkcs12 -in "keystore.pkcs12" -out "crt.pem" -clcerts -nokeys -password "env:PASS"
      openssl pkcs12 -in "keystore.pkcs12" -out "keystore.jks" -clcerts -nokeys -password "env:PASS"
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 30 Dec 2021 16:55:19 +0000
      Finished:     Thu, 30 Dec 2021 16:55:24 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/config-data from config-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
Containers:
  server:
    Container ID:  docker://1925cbb5ca3626def4bbfb11c8148995f12ec27218acbfad4687683fd7f27691
    Image:         apache/nifi:1.12.1
    Image ID:      docker-pullable://apache/nifi@sha256:bf7576ab7ad0bfe38c86be5baa47229d1644287984034dc9d5ff4801c5827115
    Ports:         9443/TCP, 6007/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      bash
      -ce
      prop_replace () {
        target_file=${NIFI_HOME}/conf/${3:-nifi.properties}
        echo "updating ${1} in ${target_file}"
        if egrep "^${1}=" ${target_file} &> /dev/null; then
          sed -i -e "s|^$1=.*$|$1=$2|"  ${target_file}
        else
          echo ${1}=${2} >> ${target_file}
        fi
      }
      
      mkdir -p ${NIFI_HOME}/config-data/conf
      FQDN=$(hostname -f)
      
      cat "${NIFI_HOME}/conf/nifi.temp" > "${NIFI_HOME}/conf/nifi.properties"
      
      if [[ $(grep $(hostname) conf/authorizers.temp) ]]; then
        cat "${NIFI_HOME}/conf/authorizers.temp" > "${NIFI_HOME}/conf/authorizers.xml"
      else
        cat "${NIFI_HOME}/conf/authorizers.empty" > "${NIFI_HOME}/conf/authorizers.xml"
      fi
      
      if ! test -f /opt/nifi/data/flow.xml.gz && test -f /opt/nifi/data/flow.xml; then
        gzip /opt/nifi/data/flow.xml
      fi
      
      prop_replace nifi.remote.input.host ${FQDN}
      prop_replace nifi.cluster.node.address ${FQDN}
      prop_replace nifi.zookeeper.connect.string ${NIFI_ZOOKEEPER_CONNECT_STRING}
      prop_replace nifi.web.http.host ${FQDN}
      # Update nifi.properties for security properties
      prop_replace nifi.web.https.host ${FQDN}
      prop_replace nifi.security.keystoreType jks
      prop_replace nifi.security.keystore   ${NIFI_HOME}/config-data/certs/keystore.jks
      prop_replace nifi.security.keystorePasswd     $(jq -r .keyStorePassword ${NIFI_HOME}/config-data/certs/config.json)
      prop_replace nifi.security.keyPasswd          $(jq -r .keyPassword ${NIFI_HOME}/config-data/certs/config.json)
      prop_replace nifi.security.truststoreType jks
      prop_replace nifi.security.truststore   ${NIFI_HOME}/config-data/certs/truststore.jks
      prop_replace nifi.security.truststorePasswd   $(jq -r .trustStorePassword ${NIFI_HOME}/config-data/certs/config.json)
      prop_replace nifi.web.proxy.host helm-nifi.ncsp.svc
      prop_replace nifi.web.http.network.interface.default "eth0" nifi.properties
      prop_replace nifi.web.http.network.interface.lo "lo" nifi.properties
      
      
      exec bin/nifi.sh run & nifi_pid="$!"
      
      function offloadNode() {
          FQDN=$(hostname -f)
          echo "disconnecting node '$FQDN'"
          baseUrl=https://${FQDN}:9443
      
          keystore=${NIFI_HOME}/config-data/certs/keystore.jks
          keystorePasswd=$(jq -r .keyStorePassword ${NIFI_HOME}/config-data/certs/config.json)
          keyPasswd=$(jq -r .keyPassword ${NIFI_HOME}/config-data/certs/config.json)
          truststore=${NIFI_HOME}/config-data/certs/truststore.jks
          truststorePasswd=$(jq -r .trustStorePassword ${NIFI_HOME}/config-data/certs/config.json)
      
          secureArgs=" --truststore ${truststore} --truststoreType JKS --truststorePasswd ${truststorePasswd} --keystore ${keystore} --keystoreType JKS --keystorePasswd ${keystorePasswd} --proxiedEntity "CN=admin, OU=NIFI""
      
          echo baseUrl ${baseUrl}
          echo "gracefully disconnecting node '$FQDN' from cluster"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
          nnid=$(jq --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .nodeId' nodes.json)
          echo "disconnecting node ${nnid}"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi disconnect-node -nnid $nnid -u ${baseUrl} ${secureArgs}
          echo ""
          echo "wait until node has state 'DISCONNECTED'"
          while [[ "${node_state}" != "DISCONNECTED" ]]; do
              sleep 1
              ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
              node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
              echo "state is '${node_state}'"
          done
          echo ""
          echo "node '${nnid}' was disconnected"
          echo "offloading node"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi offload-node -nnid $nnid -u ${baseUrl} ${secureArgs}
          echo ""
          echo "wait until node has state 'OFFLOADED'"
          while [[ "${node_state}" != "OFFLOADED" ]]; do
              sleep 1
              ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
              node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
              echo "state is '${node_state}'"
          done
      }
      
      deleteNode() {
          echo "deleting node"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi delete-node -nnid ${nnid} -u ${baseUrl} ${secureArgs}
          echo "node deleted"
      }
      
      trap 'echo Received trapped signal, beginning shutdown...;offloadNode;./bin/nifi.sh stop;deleteNode;exit 0;' TERM HUP INT;
      trap ":" EXIT
      
      echo NiFi running with PID ${nifi_pid}.
      wait ${nifi_pid}
      
      /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n 'helm-nifi-nifi-0.helm-nifi-nifi-headless.ncsp.svc.cluster.local' -C 'cn=admin,dc=example,dc=be' -o '/opt/nifi/nifi-current/conf/' -P env:PASS  -S env:PASS  --nifiPropertiesFile /opt/nifi/nifi-current/conf/nifi.properties
      exec bin/nifi.sh run
      
    State:          Running
      Started:      Thu, 30 Dec 2021 16:55:27 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 30 Dec 2021 16:55:25 +0000
      Finished:     Thu, 30 Dec 2021 16:55:25 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       tcp-socket :9443 delay=90s timeout=1s period=60s #success=1 #failure=3
    Readiness:      tcp-socket :9443 delay=60s timeout=1s period=20s #success=1 #failure=3
    Environment:
      NIFI_ZOOKEEPER_CONNECT_STRING:  helm-nifi-zookeeper:2181
    Mounts:
      /opt/nifi/content_repository from content-repository (rw)
      /opt/nifi/data from data (rw)
      /opt/nifi/data/flow.xml from flow-content (rw,path="flow.xml")
      /opt/nifi/flowfile_repository from flowfile-repository (rw)
      /opt/nifi/nifi-current/auth-conf/ from auth-conf (rw)
      /opt/nifi/nifi-current/conf/authorizers.empty from authorizers-empty (rw,path="authorizers.empty")
      /opt/nifi/nifi-current/conf/authorizers.temp from authorizers-temp (rw,path="authorizers.temp")
      /opt/nifi/nifi-current/conf/bootstrap-notification-services.xml from bootstrap-notification-services-xml (rw,path="bootstrap-notification-services.xml")
      /opt/nifi/nifi-current/conf/bootstrap.conf from bootstrap-conf (rw,path="bootstrap.conf")
      /opt/nifi/nifi-current/conf/login-identity-providers.xml from login-identity-providers-xml (rw,path="login-identity-providers.xml")
      /opt/nifi/nifi-current/conf/nifi.temp from nifi-properties (rw,path="nifi.temp")
      /opt/nifi/nifi-current/conf/state-management.xml from state-management-xml (rw,path="state-management.xml")
      /opt/nifi/nifi-current/conf/zookeeper.properties from zookeeper-properties (rw,path="zookeeper.properties")
      /opt/nifi/nifi-current/config-data from config-data (rw)
      /opt/nifi/nifi-current/logs from logs (rw)
      /opt/nifi/provenance_repository from provenance-repository (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
  app-log:
    Container ID:  docker://75fa02b91376b957b86f15d4f6ce5a8bcda59f2b01a69a3e7e2d5afb09fab0d7
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-app.log
    State:          Running
      Started:      Thu, 30 Dec 2021 16:55:25 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
  bootstrap-log:
    Container ID:  docker://b65d6e8ff38549689ebbe34713ca673342ae66cc6f92148c7fbea33b73fdd150
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-bootstrap.log
    State:          Running
      Started:      Thu, 30 Dec 2021 16:55:26 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
  user-log:
    Container ID:  docker://9f59eeb59239f9c380058c4dede17251c32ed8e2d33b93d8453860a711011172
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-user.log
    State:          Running
      Started:      Thu, 30 Dec 2021 16:55:26 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w9st (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  bootstrap-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  nifi-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  authorizers-temp:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  authorizers-empty:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  bootstrap-notification-services-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  login-identity-providers-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  state-management-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  zookeeper-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  flow-content:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      helm-nifi-config
    Optional:  false
  config-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  auth-conf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  flowfile-repository:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  content-repository:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  provenance-repository:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  logs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-7w9st:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                From               Message
  ----    ------     ----               ----               -------
  Normal  Scheduled  15m                default-scheduler  Successfully assigned ncsp/helm-nifi-0 to worker-1-0
  Normal  Pulled     15m                kubelet            Container image "busybox:1.32.0" already present on machine
  Normal  Created    15m                kubelet            Created container zookeeper
  Normal  Started    15m                kubelet            Started container zookeeper
  Normal  Started    15m                kubelet            Started container cert-request
  Normal  Pulled     15m                kubelet            Container image "apache/nifi-toolkit:1.12.1" already present on machine
  Normal  Created    15m                kubelet            Created container cert-request
  Normal  Created    15m                kubelet            Created container app-log
  Normal  Pulled     15m                kubelet            Container image "busybox:1.32.0" already present on machine
  Normal  Started    15m                kubelet            Started container app-log
  Normal  Pulled     15m                kubelet            Container image "busybox:1.32.0" already present on machine
  Normal  Started    15m                kubelet            Started container user-log
  Normal  Pulled     15m                kubelet            Container image "busybox:1.32.0" already present on machine
  Normal  Created    15m                kubelet            Created container user-log
  Normal  Created    15m                kubelet            Created container bootstrap-log
  Normal  Started    15m                kubelet            Started container bootstrap-log
  Normal  Created    15m (x2 over 15m)  kubelet            Created container server
  Normal  Pulled     15m (x2 over 15m)  kubelet            Container image "apache/nifi:1.12.1" already present on machine
  Normal  Started    15m (x2 over 15m)  kubelet            Started container server
```

Get logs on a failed container inside the pod (here the server one):

```bash
kubectl logs myrelease-nifi-0 server
updating nifi.remote.input.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.cluster.node.address in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.zookeeper.connect.string in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.https.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.keystoreType in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.keystore in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.keystorePasswd in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.keyPasswd in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.truststoreType in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.truststore in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.security.truststorePasswd in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.proxy.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.default in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.lo in /opt/nifi/nifi-current/conf/nifi.properties
NiFi running with PID 44.

Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current

Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf

2021-12-30 16:55:28,322 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2021-12-30 16:55:28,322 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2021-12-30 16:55:28,322 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/nifi-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-3.1.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.3.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-properties-1.12.1.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.30.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.3.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx2g -Xms2g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=43079 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi 
2021-12-30 16:55:28,336 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 66
```
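
The server log above ends right after NiFi is launched, so it is not conclusive on its own. OIDC and login failures usually show up in nifi-user.log rather than nifi-app.log; since the chart runs a `user-log` tail sidecar (visible in the describe output above), a quick way to look for them is:

```bash
# The user-log sidecar tails /var/log/nifi-user.log
kubectl logs helm-nifi-0 -c user-log | grep -iE "oidc|oauth|login"
```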

banzo commented Jan 10, 2022

@luismarqu3s does the problem still occur with the newest chart version?

banzo added the bug (Something isn't working) and Need more info (This issue needs more information) labels on Jan 10, 2022
banzo added the stale (No recent activity, will be closed unless label removed) label and removed the bug (Something isn't working) label on Mar 15, 2022