# ClickHouse Helm Chart

Run the ClickHouse column-oriented database on Kubernetes.

This chart bootstraps a ClickHouse deployment on a Kubernetes cluster using the Helm package manager.
## Quickstart

```shell
# start minikube and install tiller
minikube start
helm init
kubectl patch deployment tiller-deploy -n kube-system -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'

# install chart
helm upgrade --install clickhouse .
```
To run the bootstrap query against the server, forward the ClickHouse TCP port and run the client in a second terminal.

Terminal #1:

```shell
kubectl port-forward clickhouse-0 9000:9000
```

Terminal #2:

```shell
QUERY=$(cat conf/initdb.sql)
docker run -ti --network=host \
    --rm yandex/clickhouse-client \
    --host=docker.for.mac.localhost \
    --multiquery \
    --query="${QUERY}"
```

Note that `docker.for.mac.localhost` resolves to the host machine only on Docker for Mac; adjust the host on other platforms.
## Installing the Chart

To install the chart with the release name `my-release`:

```shell
git clone https://github.com/tekn0ir/clickhouse_chart.git
cd clickhouse_chart
helm install --name my-release .
```

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```shell
helm delete my-release
```
## Configuration

The following table lists the configurable parameters of the clickhouse chart and their default values.

| Parameter | Description | Default |
|---|---|---|
| `replicas` | Number of nodes | `1` |
| `image` | clickhouse image repository | `yandex/clickhouse-server` |
| `imageTag` | clickhouse image tag | `latest` |
| `imagePullPolicy` | Image pull policy | `Always` |
| `imagePullSecret` | Image pull secrets | `nil` |
| `persistence.enabled` | Use a PVC to persist data | `false` |
| `persistence.existingClaim` | Provide an existing PersistentVolumeClaim | `nil` |
| `persistence.storageClass` | Storage class of backing PVC | `default` |
| `persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
| `persistence.annotations` | Persistent Volume annotations | `{}` |
| `persistence.size` | Size of data volume | `10Gi` |
| `resources.limits.cpu` | CPU resource limit | `1` |
| `resources.limits.memory` | Memory resource limit | `1Gi` |
| `resources.requests.cpu` | CPU resource request | `1` |
| `resources.requests.memory` | Memory resource request | `1Gi` |
| `service.externalIPs` | External IPs to listen on | `[]` |
| `service.port` | TCP port | `8123` |
| `service.type` | k8s service type exposing ports, e.g. `NodePort` | `ClusterIP` |
| `service.nodePort` | NodePort value if `service.type` is `NodePort` | `nil` |
| `service.annotations` | Service annotations | `{}` |
| `service.labels` | Service labels | `{}` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Ingress labels | `{}` |
| `ingress.hosts` | Ingress accepted hostnames | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `{}` |
| `podAnnotations` | Annotations for the clickhouse pod | `{}` |
| `annotations` | Annotations for the clickhouse statefulset | `{}` |
| `initdb_args` | Arguments for the clickhouse-client init query | `['--user=default', '--database=default', '--multiquery']` |
| `initdb_sql` | Path to bootstrap SQL query file | `conf/initdb.sql` |
| `config_xml` | Path to server config | `conf/config.xml` |
| `users_xml` | Path to users config | `conf/users.xml` |
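As an illustration, several of the parameters above can be combined in a values override file; the values shown here are hypothetical examples, not recommendations:

```yaml
# Hypothetical override file; parameter names come from the table above.
replicas: 3
persistence:
  enabled: true
  size: 50Gi
resources:
  limits:
    cpu: 2
    memory: 4Gi
```

Such a file would be passed to `helm install` with the `-f` flag, as shown in the custom configuration section below.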
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```shell
helm install --name my-release --set persistence.enabled=true .
```
## Custom configuration

Make a copy of `values.yaml` and the `conf` dir:

```shell
cp values.yaml myconfig.yaml
cp -rf conf myconfig
```

Edit `myconfig.yaml` to point at the files in the `myconfig` dir:

```yaml
initdb_sql: myconfig/initdb.sql
config_xml: myconfig/config.xml
users_xml: myconfig/users.xml
```

To use the edited `myconfig.yaml`:

```shell
helm install --name my-release -f myconfig.yaml .
```
Edit `myconfig/config.xml` and `myconfig/users.xml` according to the server settings docs.

User passwords can be generated like this:

```shell
echo -n "password" | sha256sum | tr -d '-' | tr -d ' '
# result: 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```
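As a sketch, the hashing step can be wrapped in a small helper that emits the `<password_sha256_hex>` element that ClickHouse's `users.xml` uses for pre-hashed passwords (the helper name `pw_hash` is invented for this example):

```shell
# Hypothetical helper: SHA-256-hash a password and wrap it in the
# users.xml element ClickHouse reads pre-hashed passwords from.
pw_hash() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

echo "<password_sha256_hex>$(pw_hash "password")</password_sha256_hex>"
# prints: <password_sha256_hex>5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8</password_sha256_hex>
```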
Edit the `myconfig/initdb.sql` bootstrap script to create the tables and views you need, according to the query docs.
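For illustration, a minimal bootstrap script might create a database and a MergeTree table; the database, table, and column names below are invented examples, not part of the chart:

```sql
-- Hypothetical bootstrap DDL; names are examples only.
CREATE DATABASE IF NOT EXISTS demo;

CREATE TABLE IF NOT EXISTS demo.events (
    ts      DateTime,
    id      UInt64,
    payload String
) ENGINE = MergeTree()
ORDER BY (id, ts);
```

Because the chart runs the script with `--multiquery` (see `initdb_args` above), multiple semicolon-separated statements like these are executed in one pass.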