This repository has been archived by the owner on Feb 20, 2024. It is now read-only.

The Confluent cp-zookeeper and cp-kafka pods keep going into CrashLoopBackOff #601

Open
geekdk opened this issue Apr 27, 2022 · 1 comment
geekdk commented Apr 27, 2022

NAME                       READY   STATUS             RESTARTS        AGE
confluent-cp-kafka-0       0/1     CrashLoopBackOff   32 (118s ago)   154m
confluent-cp-zookeeper-0   0/2     CrashLoopBackOff   64 (100s ago)   154m

geekdk commented Apr 27, 2022

Output of "minikube kubectl -- describe pod":
Name: confluent-cp-kafka-0
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 27 Apr 2022 20:38:13 +0530
Labels: app=cp-kafka
controller-revision-hash=confluent-cp-kafka-798c8d4597
release=confluent
statefulset.kubernetes.io/pod-name=confluent-cp-kafka-0
Annotations:
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: StatefulSet/confluent-cp-kafka
Containers:
cp-kafka-broker:
Container ID: docker://de1f83411f8a8216b618e1edb67f7f520e3502fc9d49044464d43b4f01ca0f1f
Image: confluentinc/cp-server:6.1.0
Image ID: docker-pullable://confluentinc/cp-server@sha256:7020a2e0e805cf593db7bf7c39c52b836e6c582ff7dafa35649362e56514e51b
Port: 9092/TCP
Host Port: 0/TCP
Command:
sh
-exc
export KAFKA_BROKER_ID=${HOSTNAME##*-} &&
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_NAME}.confluent-cp-kafka-headless.${POD_NAMESPACE}:9092,EXTERNAL://${HOST_IP}:$((31090 + ${KAFKA_BROKER_ID})) &&
exec /etc/confluent/docker/run
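
For reference, the broker id and external listener port in the command above are derived purely from the pod's hostname. A standalone sketch of that logic (the hostname here is a hypothetical ordinal-2 pod; no cluster required):

```shell
# How the cp-kafka entrypoint derives its ids (plain bash, no cluster needed).
HOSTNAME=confluent-cp-kafka-2   # hypothetical StatefulSet pod name

# ${HOSTNAME##*-} strips everything through the last '-' -> the pod ordinal.
KAFKA_BROKER_ID=${HOSTNAME##*-}

# External listener port = 31090 + broker id, as in the command above.
EXTERNAL_PORT=$((31090 + KAFKA_BROKER_ID))

echo "broker.id=$KAFKA_BROKER_ID external.port=$EXTERNAL_PORT"
# -> broker.id=2 external.port=31092
```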

State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Error
  Exit Code:    1
  Started:      Wed, 27 Apr 2022 23:15:47 +0530
  Finished:     Wed, 27 Apr 2022 23:15:47 +0530
Ready:          False
Restart Count:  33
Environment:
  POD_IP:                                         (v1:status.podIP)
  HOST_IP:                                        (v1:status.hostIP)
  POD_NAME:                                      confluent-cp-kafka-0 (v1:metadata.name)
  POD_NAMESPACE:                                 default (v1:metadata.namespace)
  KAFKA_HEAP_OPTS:                               -Xms512M -Xmx512M
  KAFKA_ZOOKEEPER_CONNECT:                       confluent-cp-zookeeper-headless:2181
  KAFKA_LOG_DIRS:                                /opt/kafka/data-0/logs
  KAFKA_METRIC_REPORTERS:                        io.confluent.metrics.reporter.ConfluentMetricsReporter
  CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS:  PLAINTEXT://confluent-cp-kafka-headless:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:          PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR:        3
  KAFKA_JMX_PORT:                                5555
Mounts:
  /opt/kafka/data-0 from datadir-0 (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rs65 (ro)

Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir-0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-0-confluent-cp-kafka-0
ReadOnly: false
kube-api-access-5rs65:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason   Age                     From     Message
----     ------   ----                    ----     -------
Warning  BackOff  3m56s (x658 over 145m)  kubelet  Back-off restarting failed container

Name: confluent-cp-zookeeper-0
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 27 Apr 2022 20:38:13 +0530
Labels: app=cp-zookeeper
controller-revision-hash=confluent-cp-zookeeper-769f96bdb6
release=confluent
statefulset.kubernetes.io/pod-name=confluent-cp-zookeeper-0
Annotations: prometheus.io/port: 5556
prometheus.io/scrape: true
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: StatefulSet/confluent-cp-zookeeper
Containers:
prometheus-jmx-exporter:
Container ID: docker://e09836ddc4bf05db3da65aa4dfd415b3f6c400e04477a926eadb4bb82a6aa595
Image: solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
Image ID: docker-pullable://solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
Port: 5556/TCP
Host Port: 0/TCP
Command:
java
-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
-XX:MaxRAMFraction=1
-XshowSettings:vm
-jar
jmx_prometheus_httpserver.jar
5556
/etc/jmx-zookeeper/jmx-zookeeper-prometheus.yml
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 27 Apr 2022 23:15:59 +0530
Finished: Wed, 27 Apr 2022 23:15:59 +0530
Ready: False
Restart Count: 33
Environment:
Mounts:
/etc/jmx-zookeeper from jmx-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245ss (ro)
cp-zookeeper-server:
Container ID: docker://30a8b4c191bb8c749b7862fe41bec6dd15f8bf366c9c9c378ec3950f3322e004
Image: confluentinc/cp-zookeeper:6.1.0
Image ID: docker-pullable://confluentinc/cp-zookeeper@sha256:78c190f4472cd091ba5a046ae399ef8abc3cd2ad33472f7af3aebd9d48d85d19
Ports: 2181/TCP, 2888/TCP, 3888/TCP, 5555/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Command:
bash
-c
ZK_FIX_HOST_REGEX="s/${HOSTNAME}\.[^:]*:/0.0.0.0:/g"
ZOOKEEPER_SERVER_ID=$((${HOSTNAME##*-}+1)) \
ZOOKEEPER_SERVERS=`echo $ZOOKEEPER_SERVERS | sed -e "$ZK_FIX_HOST_REGEX"` \
/etc/confluent/docker/run
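
For reference, the entrypoint above computes the ZooKeeper server id from the pod ordinal and rewrites this pod's own entry in ZOOKEEPER_SERVERS to 0.0.0.0 so it binds on all interfaces. The same logic can be exercised locally (hostname and server list copied from this pod's Environment section; no cluster required):

```shell
# Local sketch of the cp-zookeeper entrypoint's id/listener fix-up.
HOSTNAME=confluent-cp-zookeeper-0
ZOOKEEPER_SERVERS="confluent-cp-zookeeper-0.confluent-cp-zookeeper-headless.default:2888:3888;confluent-cp-zookeeper-1.confluent-cp-zookeeper-headless.default:2888:3888;confluent-cp-zookeeper-2.confluent-cp-zookeeper-headless.default:2888:3888"

# Replace this pod's own FQDN with 0.0.0.0.
ZK_FIX_HOST_REGEX="s/${HOSTNAME}\.[^:]*:/0.0.0.0:/g"

# myid is 1-based; StatefulSet pod ordinals are 0-based, hence the +1.
ZOOKEEPER_SERVER_ID=$((${HOSTNAME##*-}+1))
ZOOKEEPER_SERVERS=$(echo "$ZOOKEEPER_SERVERS" | sed -e "$ZK_FIX_HOST_REGEX")

echo "$ZOOKEEPER_SERVER_ID"
echo "$ZOOKEEPER_SERVERS"
```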

State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Error
  Exit Code:    1
  Started:      Wed, 27 Apr 2022 23:16:00 +0530
  Finished:     Wed, 27 Apr 2022 23:16:00 +0530
Ready:          False
Restart Count:  33
Environment:
  KAFKA_HEAP_OPTS:                        -Xms512M -Xmx512M
  KAFKA_JMX_PORT:                         5555
  ZOOKEEPER_TICK_TIME:                    2000
  ZOOKEEPER_SYNC_LIMIT:                   5
  ZOOKEEPER_INIT_LIMIT:                   10
  ZOOKEEPER_MAX_CLIENT_CNXNS:             60
  ZOOKEEPER_AUTOPURGE_SNAP_RETAIN_COUNT:  3
  ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL:     24
  ZOOKEEPER_CLIENT_PORT:                  2181
  ZOOKEEPER_SERVERS:                      confluent-cp-zookeeper-0.confluent-cp-zookeeper-headless.default:2888:3888;confluent-cp-zookeeper-1.confluent-cp-zookeeper-headless.default:2888:3888;confluent-cp-zookeeper-2.confluent-cp-zookeeper-headless.default:2888:3888
  ZOOKEEPER_SERVER_ID:                    confluent-cp-zookeeper-0 (v1:metadata.name)
Mounts:
  /var/lib/zookeeper/data from datadir (rw)
  /var/lib/zookeeper/log from datalogdir (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245ss (ro)

Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-confluent-cp-zookeeper-0
ReadOnly: false
datalogdir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datalogdir-confluent-cp-zookeeper-0
ReadOnly: false
jmx-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: confluent-cp-zookeeper-jmx-configmap
Optional: false
kube-api-access-245ss:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason   Age                     From     Message
----     ------   ----                    ----     -------
Warning  BackOff  3m53s (x685 over 145m)  kubelet  Back-off restarting failed container
