Technical Requirements
Please follow the guidelines below when choosing hardware for your th2 solution:
- The right balance of CPUs, memory, disks, and number of nodes depends on your particular use case, the number of services you are planning to deploy, and the expected data load.
- Approximate configuration options for a few use cases are available in the Configuration Options section. The suggested hardware specifications are the minimum required; you may need to increase CPU capacity, memory, and disk space beyond the recommended minimums.
- General recommendations for Hardware and Software are available in the corresponding sections.
- The recommended working disk capacity, CPU, and memory required for a th2 installation can be calculated via the following formula (see the reference tables below):
th2 env = Infra + Core + Monitoring + Building blocks + Custom + Data Storage (Cassandra), where:
| Infra & Core Components | Memory (MB) | CPU (millicores) | Comment |
|---|---|---|---|
| th2 infra | 1000 MB | 800 m | Required for all solutions: helm, infra-mgr, infra-editor, infra-operator |
| th2 core | 2500 MB | 2000 m | Required for all solutions: mstore, estore, rpt-provider, rpt-viewer |
| th2 monitoring | 1500 MB | 2000 m | Recommended. Plus Loki log storage: 150 GB disk space |
| RabbitMQ (replica 1) in th2 infra | 2000 MB | 1000 m | Required for all solutions |
| Other supporting components in th2 infra | 500 MB | 250 m | Depends on the deployment configuration, e.g. an in-cluster CD system, ingress, etc. |
| Custom & Building blocks components | Memory (MB) | CPU (millicores) | Comment |
|---|---|---|---|
| th2 in-cluster connectivity services | 200 MB * n | 200 m * n | Depends on the number of connectivity instances, e.g. for 10 instances: 200 MB * 10 = 2000 MB |
| th2 codec, act | 200 MB * n | 200 m * n | |
| th2 check1 | | | |
| th2 Java read | 200 MB * n | 200 m * n | |
| th2 recon | 200 MB * n | 200 m * n | cacheSize = (podMemoryLimit - 70 MB) / (AvrRawMsgSize * 10 * SUM(number of groups in rule)) |
| th2 check2 | 800 MB * n | 200 m * n | |
| th2 hand | 300 MB * n | 400 m * n | |
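For example, a hypothetical deployment with 10 connectivity instances and 5 codec instances, plus the infra, core, monitoring, RabbitMQ, and supporting components from the tables above, would need approximately:
Memory = 1000 + 2500 + 1500 + 2000 + 500 + (200 * 10) + (200 * 5) = 10500 MB
CPU = 800 + 2000 + 2000 + 1000 + 250 + (200 * 10) + (200 * 5) = 9050 m
plus the Cassandra data storage described below.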
Though it is possible to use a single-node Cassandra installation, it is generally recommended to set up a cluster of at least 3 nodes. The requirements for each node are the same.
| Apache Cassandra node | Memory (MB) | CPU (Cores) | Disk space (GB) |
|---|---|---|---|
| Cassandra node_n | 8000 MB | 4 | / 15 GB, /var 200 GB |
Note: You may want to mount the /var filesystem to a disk partition or LVM volume of the required size during node creation. This approach is very convenient because the considerable amount of disk space demanded by Cassandra, Docker, or another container runtime is allocated inside the /var filesystem by default.
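A minimal sketch of this approach during node provisioning, assuming a dedicated disk /dev/sdb (the device, volume group name, and size are hypothetical; adjust to your environment):

```shell
# Create an LVM volume for /var on a dedicated disk
sudo pvcreate /dev/sdb
sudo vgcreate vg_system /dev/sdb
sudo lvcreate -L 200G -n var vg_system
sudo mkfs.ext4 /dev/vg_system/var
# Mount it persistently; do this before /var accumulates data
echo '/dev/vg_system/var /var ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /var
```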
Use case #1. Single-machine cluster for PoC or development
Kubernetes node:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 6-8 CPU cores | 16-32 GB RAM | /var 150 GB, /opt 150 GB |
Use case #2. Single-machine cluster with a moderate amount of workloads (fewer than 100 pods), without NFT testing
Kubernetes cluster resources:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 8-12 CPU cores | 32 GB RAM | /var 80 GB, /opt 150 GB (for logs and metrics) |
Cassandra cluster, 3 nodes, each:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 4 CPU cores | 8 GB RAM | /var 500 GB |
Use case #3. Cluster with a significant amount of workloads (more than 100 pods), with NFT testing
Kubernetes master node:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 2-4 CPU cores | 2-4 GB RAM | / 20 GB |
Kubernetes worker nodes, 3 or more, each:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 8-12 CPU cores | 32 GB RAM | / 80 GB |
Logs and metrics storage: 150 GB disk space
Cassandra cluster, 3 nodes, each:
| CPU (Cores) | Memory (GB) | Disk space (GB) |
|---|---|---|
| 4 CPU cores | 8 GB RAM | / 1 TB |
- Container runtime: containerd is recommended. NOTE: see the Kubernetes documentation for other possible container runtimes.
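One way to verify which runtime each node is using, assuming kubectl access to the cluster:

```shell
# The CONTAINER-RUNTIME column shows the runtime on each node,
# e.g. containerd://1.6.0
kubectl get nodes -o wide
```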
- The supported Kubernetes version depends on the th2-infra compatibility table. A Kubernetes cluster must be installed (a single master node for development mode; a master and 2+ workers for production mode) with the Flannel CNI plugin. See Creating a cluster with kubeadm.
Flannel CNI installation:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
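A minimal sketch of bootstrapping a single-master development cluster with kubeadm and Flannel; 10.244.0.0/16 is Flannel's default pod network, adjust to your environment:

```shell
# Initialize the control plane with Flannel's default pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Development mode only: allow workloads to run on the master node
kubectl taint nodes --all node-role.kubernetes.io/master-
```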
- git
- kubectl (Kubernetes command-line tool)
- Helm 3+ utility for deploying th2 components into Kubernetes
- Chrome 75 or newer
- Access to Maven/PyPI repositories (public and private)
- git 2+
- IDE (IntelliJ IDEA CE)
- OpenJDK 11
- Gradle distribution (installed or available via an HTTP URL for the Gradle Wrapper)
- Python 3.8+
- Chrome 75 or newer
- Cassandra 3.11.6
- Python 3.7+ (for cqlsh)
- Java 8
- Nexus Repository OSS or similar
- Gitlab CE or similar
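A quick way to verify the tooling above on a workstation (a sketch; flags and output formats vary slightly between versions):

```shell
git --version            # expect 2+
kubectl version --client
helm version             # expect v3+
java -version            # expect OpenJDK 11 (Java 8 for the Cassandra side)
python3 --version        # expect 3.8+ (3.7+ is enough for cqlsh)
```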
- Machines that meet kubeadm's minimum requirements for the workers
- One or more machines running one of:
  - Ubuntu 16.04+
  - Debian 9+
  - CentOS 7
  - Red Hat Enterprise Linux (RHEL) 7
  - Fedora 25+
- Unique hostname, MAC address, and product_uuid for every node.
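The MAC address and product_uuid can be checked on each node as follows (commands from the kubeadm installation guide):

```shell
# List network interfaces and their MAC addresses
ip link
# Show the machine's product_uuid
sudo cat /sys/class/dmi/id/product_uuid
```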
- Certain ports are open on your machines: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
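A port can be checked with a tool such as netcat, for example the Kubernetes API server port (a sketch; the port to test depends on the node's role):

```shell
# Succeeds if something is listening on 6443 (kube-apiserver's default port)
nc -v 127.0.0.1 6443
```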
- Cassandra ports: By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9042 for native protocol clients, and 7199 for JMX. The internode communication and native protocol ports are configurable in the Cassandra Configuration File. The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
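The corresponding default settings can be inspected like this (assuming a package installation with configs under /etc/cassandra; paths differ for tarball installs):

```shell
# Internode and client ports in cassandra.yaml
grep -E 'storage_port|native_transport_port' /etc/cassandra/cassandra.yaml
# storage_port: 7000
# ssl_storage_port: 7001
# native_transport_port: 9042

# JMX port in cassandra-env.sh
grep 'JMX_PORT=' /etc/cassandra/cassandra-env.sh
# JMX_PORT="7199"
```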
- Swap disabled. You MUST disable swap in order for the kubelet to work properly.
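For example, on a typical Linux node:

```shell
# Turn swap off immediately
sudo swapoff -a
# Keep it off across reboots by commenting out swap entries in /etc/fstab
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```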
- Full network connectivity between all machines in the cluster (public or private network)
- sudo privileges on all machines
- SSH access from one device to all nodes in the system
- Connectivity to the following repositories and registries (the Helm repositories can be registered as sketched after this list):
- kubernetes-dashboard: https://kubernetes.github.io/dashboard/
- flux: https://charts.fluxcd.io
- ingress-nginx: https://kubernetes.github.io/ingress-nginx
- loki: https://grafana.github.io/loki/charts
- stable: https://charts.helm.sh/stable
- th2: https://th2-net.github.io
- ghcr.io
- quay.io
- docker.io
- k8s.gcr.io
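A sketch of registering the Helm chart repositories listed above (the aliases are the names from the list):

```shell
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo add flux https://charts.fluxcd.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add loki https://grafana.github.io/loki/charts
helm repo add stable https://charts.helm.sh/stable
helm repo add th2 https://th2-net.github.io
helm repo update
```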
Get in touch with us to learn more about th2, mail to: [email protected]