ZooKeeper cluster can't start when Istio is used #102
Seeing the same issue: Readiness probe failed: HTTP probe failed with statuscode: 503

kubectl describe pod zookeeper-operator-8449df9744-lkx9j
Normal Scheduled 2d11h default-scheduler Successfully assigned default/zookeeper-operator-8449df9744-lkx9j to k8stian-n3

[root@k8stian-m2:/usr/local/src/deploy/zookeeper-operator/deploy/crds]# kubectl logs -f zookeeper-operator-8449df9744-lkx9j
@seecsea @sylvainOL Thanks for reporting this. I'll take a look ASAP.
Any updates here?
To fix this we perhaps need to apply the workaround described in the Istio FAQ for ZooKeeper: https://github.com/istio/istio.io/blob/master/content/en/faq/applications/zookeeper.md
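If I'm reading that FAQ right, the workaround is to make ZooKeeper bind its quorum and leader-election ports on all interfaces instead of the pod IP, since the Istio sidecar redirects inbound traffic through localhost. A minimal sketch of that setting (the property is standard ZooKeeper; how it would be injected through this operator is an assumption on my part):

```
# zoo.cfg sketch: bind the quorum and leader-election listeners to
# 0.0.0.0 rather than the pod IP, so connections redirected by the
# Envoy sidecar are still accepted by the follower/leader ports
quorumListenOnAllIPs=true
```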
Yes, it's still not working for me either.
When you turn on
Hello,
I've deployed a Kubernetes cluster with Istio.
When trying to deploy a 3-node ZooKeeper cluster, the second node can't start because of immediately closed connections.
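For what it's worth, one workaround (not a fix) would be to take the ZooKeeper pods out of the mesh entirely via Istio's standard sidecar-injection opt-out annotation; the pod name, labels, and image below are hypothetical:

```yaml
# Sketch: disable automatic Envoy sidecar injection for a ZooKeeper pod,
# so quorum traffic between members bypasses the proxy entirely.
# Assumes automatic injection is enabled on the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: zookeeper-0                    # hypothetical name
  labels:
    app: zookeeper                     # hypothetical label
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  containers:
    - name: zookeeper
      image: zookeeper:3.5             # hypothetical image
```

This of course also removes the pods from mesh features like telemetry and mTLS, so it mainly confirms whether the sidecar is the culprit.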
Banzai Cloud has published a blog post on running Kafka (and ZooKeeper?) on Istio (https://banzaicloud.com/blog/kafka-on-istio-performance/), and they propose using your operator (https://github.com/banzaicloud/kafka-operator), so I assumed it's possible, but I don't see how :-/
I've tried disabling mTLS, but that doesn't seem to be the issue.
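In case it helps others reproduce the mTLS experiment, here is a sketch of how mTLS can be explicitly disabled for just the ZooKeeper workloads (assumes Istio >= 1.5 and a hypothetical app: zookeeper label; older Istio versions use the Policy/MeshPolicy API instead):

```yaml
# Sketch: turn off mTLS for inbound traffic to the ZooKeeper pods only.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: zookeeper-disable-mtls
  namespace: default        # adjust to the ZooKeeper namespace
spec:
  selector:
    matchLabels:
      app: zookeeper        # hypothetical label
  mtls:
    mode: DISABLE
```

A matching DestinationRule with `tls.mode: DISABLE` may also be needed so in-mesh clients stop originating mTLS toward these pods.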
Here's how I deployed:
Here are the logs: