diff --git a/offline-search-index.2aa3bb2ee2bd078ed9b621868f9ec254.json b/offline-search-index.2aa3bb2ee2bd078ed9b621868f9ec254.json
new file mode 100644
index 0000000..94004b9
--- /dev/null
+++ b/offline-search-index.2aa3bb2ee2bd078ed9b621868f9ec254.json
@@ -0,0 +1 @@
+[{"body":"This helm chart deploys a scalable containerized logging stack with the main purpose of enabling log observability for kubernetes applications. The design supports both local development use cases such as minikube deployments up to a scaled production scenarios. The log shippers are FluentBits deployed on each kubernetes node mounting the host filesystem. The latter scenarios leverage Kafka message broker, completely decoupling in this way, the log generation and log indexing functions.\nThe helm chart supports OpenSearch in various configurations starting from a single node setup usable for local development, to a scaled multi nodes OpenSearch deployment suitable for production environment. In the latter case there are 3 types of nodes (coordination, data and master) where each of those can be both horizontally and vertically scaled depending on the load and shards replication demands.\nFinally this helm chart provides index templates management in OpenSearch and index pattern management in OpenSearchDashboards. An initial predefined set of dashboards is also provided for illustration purposes.\nAdding the helm chart repository: helm repo add logging https://nickytd.github.io/kubernetes-logging-helm helm repo update Note Any authenticated user should have read access to the helm repository. Prepare a release configuration The recommended approach is the get the default helm chart values and adjust accordingly. At minimum the ingress annotations for the OpenSearch rest endpoint and OpenSearchDashboards UI app have to be adjusted. Here is an example for a minimal single OpenSearch node setup.\nInstall a release helm install ofd logging/kubernetes-logging ","categories":"","description":"","excerpt":"This helm chart deploys a scalable containerized logging stack with …","ref":"/kubernetes-logging-helm/docs/","tags":"","title":"Helm chart overview"},{"body":"The kubernetes logging helm chart supports a number of deployment layouts of OpenSearch and other components depending on the concrete purpose and size of the cluster.\nSingle node OpenSearch opensearch: single_node: true fluentbit: containersLogsHostPath: /var/log/pods journalsLogsHostPath: /var/log containersRuntime: docker kafka: false: This layout is the simplest possible requiring the least compute and memory resources. It comprises of the log shippers, a single OpenSearch node and a single OpenSearchDashabord UI. The log shippers are FluentBits deployed on each kubernetes node mounting the host filesystem. Because the locations of the containers logs or the host journals can vary, those locations have to be adapted accordingly in the FluentBit configuration. The logs are directly send to the Opensearch node for indexing without the need of a message broker in between.\n Recommendation: Although the single node can be scaled by simply increasing the replicas in the “data” configuration, this setup is most suitable for development environments like minikube or kind clusters.\n Multi node OpenSearch OpenSearch supports dedicated node types based on specific functions in the cluster. A coordination node, data node and cluster manager node forming an OpenSearch cluster can be deployed when single_node option is disabled.\nopensearch: single_node: false Scaled multi node OpenSearch in production When the setup is deployed in a production environment both aspects for reliably and throughout of the logs streams are addressed by the helm chart with the introduction of a message broker. 
A running message broker (Kafka) effectively absorbs spikes in log volumes as well as downtimes of the backend OpenSearch cluster.\nNote: Kafka and Logstash need to be enabled as well. The delivery chain is: Kafka -\u003e Logstash -\u003e OpenSearch\n Even more importantly, each component can be scaled horizontally, ensuring better reliability.\nopensearch: single_node: false data: replicas: 3 clusterManager: replicas: 3 client: replicas: 3 kafka: enabled: true replicas: 3 logstash: enabled: true replicas: 3 #and so on Additionally, the scheduling strategy of each workload type can be further optimized by defining node and pod (anti)affinity rules.\nFor example, for stateful sets like Kafka or the data nodes, the following affinity strategy guarantees that pods will be scheduled on different kubernetes nodes.\naffinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: type operator: In values: # with the corresponding label - kafka topologyKey: kubernetes.io/hostname Or, in the case of deployments like the OpenSearch coordination nodes or Logstash, a spread of pods over nodes can be achieved with:\ntopologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: # with the corresponding label type: client ","categories":"","description":"Overview on available installation layouts\n","excerpt":"Overview on available installation layouts\n","ref":"/kubernetes-logging-helm/docs/deployments/","tags":"","title":"Deployment Layouts"},{"body":"The helm chart maintains retention days for indices in OpenSearch using an ISM policy defined in the file index-retention_policy.json. The value is taken from the opensearch.retentionDays key.\n Note: The retention period configured in the helm chart (7 days by default) shall reflect the size of the persistent volumes mounted by the OpenSearch data nodes. If the log volume in the cluster is high, the data node PV sizes shall correspond.\nIt is a good practice to have a resizable storage class in the cluster supporting updates of the persistent volumes. When the persistent volumes fill up, the OpenSearch data nodes switch to read-only mode and new logs are prevented from being indexed.\n ","categories":"","description":"Overview on supported OpenSearch index management scenarios\n","excerpt":"Overview on supported OpenSearch index management scenarios\n","ref":"/kubernetes-logging-helm/docs/components/opensearch/indexmanagement/","tags":"","title":"Index Management"},{"body":"tbd\n","categories":"","description":"Opensearch Dashboards authentication \u0026 authorization configurations\n","excerpt":"Opensearch Dashboards authentication \u0026 authorization configurations\n","ref":"/kubernetes-logging-helm/docs/components/opensearch-dashboards/security/","tags":"","title":"Opensearch-Dashboards Authentication \u0026 Authorization"},{"body":"","categories":"","description":"","excerpt":"","ref":"/kubernetes-logging-helm/docs/components/","tags":"","title":"Logging Stack Components"},{"body":"FluentBit is installed as a daemon set on each of the k8s nodes by the helm chart. It follows a FluentBit data pipeline setup designed for kubernetes environments.\nThe helm chart itself supports different deployment layouts depending on whether a simple or standard model is required. The standard model is recommended in production, where the various components run in HA mode. In this case the FluentBit instances send the collected logs to the Kafka brokers. 
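For illustration only, in this layout the FluentBit instances use a Kafka output section; the following is a sketch rather than the chart's actual generated configuration, the broker address ofd-kafka:9092 is an assumed in-cluster service name, and the containers topic mirrors the chart's default kafka.topics entry: [OUTPUT] Name kafka Match kube.* Brokers ofd-kafka:9092 Topics containers 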
The Kafka brokers are used for buffering and greatly increase the overall reliability and stability of the entire stack.\nIn the simple case the FluentBit instances communicate directly with the OpenSearch nodes. In both cases there is a set of FluentBit configurations which is responsible for proper log collection from the containers and for enriching the records with the respective kubernetes metadata such as the namespace of the originating workload, its labels and so on. The metadata is later used in indexing, search and visualisation scenarios. This shared configuration is shown on the diagrams here as the “kubernetes data pipeline”.\nThe “kubernetes data pipeline” uses the standard “Tail” input plugin to read the logs from the mounted node filesystem and the “Kube-Tag” parser plugin to generate the FluentBit tag of the events. It is followed by the “Kubernetes” filter, which adds the kubernetes metadata to the events, and finally by a “de_dot” filter, which replaces dots “.” with underscores “_” in event field names.\nThe “kubernetes data pipeline” is the foundation of any application specific configuration. For example, the nginx ingress controller produces unstructured access logs. To parse those logs and transform the lines into structured json formatted messages, we enrich the pipeline with corresponding filters and parsers.\nThe nginx access log parsing example is located in the fluentbit-configs folder. Any additional application specific configs need to be saved in the same location following the file naming conventions, i.e. filters need to have a “filter” prefix, parsers a “parser” prefix and so on.\nIn the nginx access log example the rewrite_tag filter is used to tag messages originating from containers that share the app_kubernetes_io/name: ingress-nginx label.\n[FILTER] Name rewrite_tag Match kube.* Rule $kubernetes['labels']['app_kubernetes_io/name'] \"^(ingress-nginx)$\" nginx false [FILTER] Name parser Match nginx Key_Name log Parser k8s-nginx-ingress Reserve_Data True The messages are tagged and re-emitted into the FluentBit data pipeline. They are later matched by the nginx parser, which uses a regex to construct a json formatted structured message.\n[PARSER] Name k8s-nginx-ingress Format regex Regex ^(?\u003chost\u003e[^ ]*) - (?\u003cuser\u003e[^ ]*) \\[(?\u003ctime\u003e[^\\]]*)\\] \"(?\u003cmethod\u003e\\S+)(?: +(?\u003cpath\u003e[^\\\"]*?)(?: +\\S*)?)?\" (?\u003ccode\u003e[^ ]*) (?\u003csize\u003e[^ ]*) \"(?\u003creferrer\u003e[^\\\"]*)\" \"(?\u003cagent\u003e[^\\\"]*)\" (?\u003crequest_length\u003e[^ ]*) (?\u003crequest_time\u003e[^ ]*) \\[(?\u003cproxy_upstream_name\u003e[^ ]*)\\] (\\[(?\u003cproxy_alternative_upstream_name\u003e[^ ]*)\\] )?(?\u003cupstream_addr\u003e[^ ]*) (?\u003cupstream_response_length\u003e[^ ]*) (?\u003cupstream_response_time\u003e[^ ]*) (?\u003cupstream_status\u003e[^ ]*) (?\u003creg_id\u003e[^ ]*).*$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z Additional parsers are supported, such as multiline parsers allowing Java stacktraces to be reconstructed into a single message. 
Here is an example of such a configuration.\nfilter-zookeeper.conf:\n [FILTER] Name rewrite_tag Match kube.*.logging.*.* Rule $kubernetes['labels']['type'] \"^(zk)$\" zookeeper false Emitter_Storage.type filesystem [FILTER] Name multiline Match zookeeper multiline.parser zookeeper_multiline parser-zookeeper.conf\n [MULTILINE_PARSER] name zookeeper_multiline type regex flush_timeout 1000 key_content log # Regex rules for multiline parsing # --------------------------------- # - first state always has the name: start_state # - every field in the rule must be inside double quotes # # rules | state name | regex pattern | next state name # ------|--------------|--------------------------------------|---------------- rule \"start_state\" \"/^(?\u003cexception\u003e[^ ]+:)(?\u003crest\u003e.*)$/\" \"cont\" rule \"cont\" \"/\\s+at\\s.*/\" \"cont\" Hint: For high volume log producers consider adding the Emitter_Storage.type filesystem property. It allows additional buffering during re-emitting of the events; for details see FluentBit rewrite-tag.\n ","categories":"","description":"Configuration settings for FluentBit log shipper\n","excerpt":"Configuration settings for FluentBit log shipper\n","ref":"/kubernetes-logging-helm/docs/components/fluentbit/","tags":"","title":"FluentBit"},{"body":"Prerequisites In this guide I expect that you have access to the Kubernetes cluster and have a port-forward to some OpenSearch node (= pod from the Kubernetes perspective). I recommend choosing a node of the client type.\nAll API calls to the OpenSearch cluster begin with: curl -ks https://\u003cName\u003e:\u003cPassword\u003e@localhost:9200/\u003cSource\u003e, where:\n \u003cName\u003e = user name in the OpenSearch cluster with admin privileges \u003cPassword\u003e = corresponding password for the admin account \u003cSource\u003e = API path in the OpenSearch cluster Check current status: $ curl -ks https://\u003cName\u003e:\u003cPassword\u003e@localhost:9200/_cat/health 1654179496 14:18:16 logging yellow 5 1 true 30 30 0 0 27 0 - 52.6% our cluster is in yellow state\n List all available nodes from the OpenSearch perspective: $ curl -ks https://\u003cName\u003e:\u003cPassword\u003e@localhost:9200/_cat/nodes 100.96.7.5 72 97 1 0.30 0.36 0.35 mr - ofd-manager-0 100.96.7.11 51 76 1 0.30 0.36 0.35 r - ofd-client-56dd9c66fb-bs7hp 100.96.7.7 53 76 4 0.30 0.36 0.35 dir - ofd-data-1 100.96.1.8 21 100 1 1.73 0.82 0.41 mr * ofd-manager-1 100.96.7.12 19 76 1 0.30 0.36 0.35 r - ofd-client-56dd9c66fb-q9tv5 here you can see a multi-node setup, where two nodes of each type (client, data and manager) must exist (6 OpenSearch nodes total)\none data node “ofd-data-0” is missing\n Check suspicious pods: $ kubectl -n logging get pods | grep data ofd-data-0 1/1 Running 0 130m ofd-data-1 1/1 Running 0 129m from the Kubernetes perspective these pods (OpenSearch nodes) are working fine\n Check logs from the suspicious pod: $ kubectl -n logging logs ofd-data-0 ... \"message\": \"failed to join ... ... Caused by: org.opensearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid 2UlST0WBQIKEV05cDpuWwQ than local cluster uuid v1vi49Q_RRaaC83iMthBnQ, rejecting ... it seems that this node holds a previously used OpenSearch cluster ID and attempts to connect to that old OpenSearch cluster instance\nthis is the reason why this node is missing\n Reset the failed data node: Warning Double check that you have at least half of the data nodes healthy! In our case we must have 2 data nodes in total and 1 is missing. 
Double check that the OpenSearch cluster is in yellow state. Proceeding with a smaller number of data nodes leads to data loss! log in to this pod: $ kubectl -n logging exec -i -t ofd-data-0 -- /bin/bash delete the data dir in this pod: $ rm -rf /data/nodes log out from this pod: $ exit delete this pod so it gets restarted: $ kubectl -n logging delete pod ofd-data-0 pod \"ofd-data-0\" deleted Check the OpenSearch cluster health again: $ curl -ks https://\u003cName\u003e:\u003cPassword\u003e@localhost:9200/_cat/health 1654180648 14:37:28 logging yellow 6 2 true 45 30 0 4 11 0 - 75.0% ... 1654180664 14:37:44 logging yellow 6 2 true 53 30 0 2 5 0 - 88.3% ... 1654180852 14:40:52 logging green 6 2 true 60 30 0 0 0 0 - 100.0% our cluster is still in yellow state\nrunning the curl command over time shows that the cluster is recovering\nwait some time and, once the problem is solved, you will see the cluster healthy again in green state\n ","categories":"","description":"","excerpt":"Prerequisites In this guide I expect that you have access to …","ref":"/kubernetes-logging-helm/docs/components/opensearch/problems/missing-data-node/","tags":"","title":"Missing data node"},{"body":"The Observability plugin allows you to visualize tracing data.\nExample: ","categories":"","description":"OpenSearch Dashboards observability plugin\n","excerpt":"OpenSearch Dashboards observability plugin\n","ref":"/kubernetes-logging-helm/docs/components/opensearch-dashboards/opentelemetry/","tags":"","title":"OpenSearch-Dashboards-Observability"},{"body":"The Kubernetes logging helm chart supports multiple deployment layouts of OpenSearch, satisfying both local development needs, where minimal resource usage is required, and production layouts with additional Kafka brokers and an HA setup of the various components.\nBy default the helm chart configures two indices with corresponding index templates. One index is containers-{YYYY.MM.dd}, indexing by default all workload logs, and the other is systemd-{YYYY.MM.dd}, storing journal system logs of the “kubelet” or “containerd” services running on the respective cluster nodes. Both indices are created according to index templates, allowing later on dedicated visualizations in the OpenSearch Dashboards UI.\nThe “Containers” index template uses the composable pattern and leverages a predefined component template named “kubernetes-metadata”.\ncontainers [containers-*] 0 [kubernetes-metadata] systemd [systemd-*] 0 [] The latter uses the kubernetes metadata attached by the FluentBit log shippers to unify its structure among workloads. It shall also be used by any container specific index with the purpose of sharing the same kubernetes field mappings.\nThe helm chart deploys all template extensions found in the index-templates folder. 
An example of such an index template is nginx, which inherits the mappings from the “kubernetes-metadata” component template and adds the access log field mappings.\n{ \"index_patterns\":[ \"nginx-*\" ], \"composed_of\":[ \"kubernetes-metadata\" ], \"template\":{ \"settings\":{ \"index\":{ \"codec\":\"best_compression\", \"mapping\":{ \"total_fields\":{ \"limit\":1000 } }, \"number_of_shards\":\"{{ (.Values.data.replicas | int) }}\", \"number_of_replicas\":\"{{ (sub (.Values.data.replicas | int) 1) }}\", \"refresh_interval\":\"5s\" } }, \"mappings\":{ \"_source\":{ \"enabled\":true }, \"properties\":{ \"log\":{ \"type\":\"text\" }, \"agent\":{ \"type\":\"keyword\" }, \"code\":{ \"type\":\"keyword\" }, \"host\":{ \"type\":\"keyword\" }, \"method\":{ \"type\":\"keyword\" }, \"path\":{ \"type\":\"keyword\" }, \"proxy_upstream_name\":{ \"type\":\"keyword\" }, \"referrer\":{ \"type\":\"keyword\" }, \"reg_id\":{ \"type\":\"keyword\" }, \"request_length\":{ \"type\":\"long\" }, \"request_time\":{ \"type\":\"double\" }, \"size\":{ \"type\":\"long\" }, \"upstream_addr\":{ \"type\":\"keyword\" }, \"upstream_response_length\":{ \"type\":\"long\" }, \"upstream_response_time\":{ \"type\":\"double\" }, \"upstream_status\":{ \"type\":\"keyword\" }, \"user\":{ \"type\":\"keyword\" } } } } } ","categories":"","description":"Configuration settings for OpenSearch\n","excerpt":"Configuration settings for OpenSearch\n","ref":"/kubernetes-logging-helm/docs/components/opensearch/","tags":"","title":"OpenSearch"},{"body":"The Kubernetes logging helm chart deploys a single instance of OpenSearch Dashboards (or just Dashboards), providing the UI to the OpenSearch indices.\nThe helm chart enables authentication configurations based on SAML, OIDC or standalone users and leverages the Dashboards tenant concept. The latter allows teams to iterate on UIs such as searches, visualizations and dashboards in a shared tenant space, leaving a predefined set of read-only UIs in the global space. Once the UIs are ready to be promoted, they can become part of the helm chart saved-objects folder and thus a standard part of the chart deployment.\nIn addition the helm chart provisions an OpenSearch DataPrepper component, which allows OpenTelemetry traces to be indexed and later visualized in the Dashboards observability UI.\n","categories":"","description":"OpenSearch Dashboards configurations\n","excerpt":"OpenSearch Dashboards configurations\n","ref":"/kubernetes-logging-helm/docs/components/opensearch-dashboards/","tags":"","title":"OpenSearch-Dashboards"},{"body":"OpenSearch / ElasticSearch is a pretty nice piece of technology with many self-healing procedures, but sometimes manual intervention is required. In this chapter you can find solutions for some problems.\n","categories":"","description":"","excerpt":"OpenSearch / ElasticSearch is a pretty nice piece of technology with …","ref":"/kubernetes-logging-helm/docs/components/opensearch/problems/","tags":"","title":"Possible problems"},{"body":"2.x -\u003e 3.0.0 Since version 3.0.0 the chart values are renamed and follow the camel case recommendation. This is a backward incompatible change and the helm chart values of existing releases first need to be migrated to the recommended camel case format.\n4.5.4 -\u003e 4.6.0 In version 4.6.0 we omitted Apache ZooKeeper as a Kafka dependency. In KRaft mode the Kafka cluster needs a generated cluster ID. 
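For illustration only, the generated ID is supplied via the kafka.kraftId chart value; this is a sketch rather than a complete values file: kafka: enabled: true kraftId: \u003cgenerated 22 character ID\u003e 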
Please check how to generate a cluster ID and change your values.yaml file accordingly.\nIf you don't want to lose any of your log data during the upgrade, here is a safe procedure:\n delete the FluentBit daemonset (this stops feeding Kafka with new log data) wait until Logstash has processed all cached log records from Kafka to OpenSearch (checking is possible via the monitoring API or in the Grafana Dashboard) scale down the Kafka StatefulSet to zero (this stops the old Kafka instances) delete the Kafka StatefulSet scale down the ZooKeeper StatefulSet to zero delete the ZooKeeper StatefulSet delete the PersistentVolumeClaims for the Kafka and ZooKeeper instances as well do helm upgrade ... 4.6.3 -\u003e 4.6.4 In version 4.6.4 we secure the communication between Kafka instances. If CertManager isn't available, the helm chart will generate a CA certificate for that purpose with a 1 year lifetime. The self signed certificate must be managed by the user! This setup is good for development purposes; for production environments consider using CertManager.\n","categories":"","description":"","excerpt":"2.x -\u003e 3.0.0 Since version 3.0.0 the chart values are renamed and …","ref":"/kubernetes-logging-helm/docs/upgrade-notes/","tags":"","title":"Upgrade Notes"},{"body":"tbd\n","categories":"","description":"Opensearch Dashboards visualizations configurations\n","excerpt":"Opensearch Dashboards visualizations configurations\n","ref":"/kubernetes-logging-helm/docs/components/opensearch-dashboards/visualizations/","tags":"","title":"Opensearch-Dashboards Visualizations"},{"body":"Below you can find the helm chart values with their descriptions.\nValues Key Type Default Description additionalJobAnnotations object {} Additional annotations for jobs additionalJobPodAnnotations string nil Additional annotations for job pods additionalJobPodLabels object {} Additional labels for job pods client.affinity object {} client.heapSize string \"512M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. client.ingress.annotations object {} Annotations for the ingress object. client.ingress.className string \"\" OpenSearch ingress class name client.ingress.enabled bool false Switch to enable / disable ingress for the OpenSearch cluster. client.ingress.host list [] Array of OpenSearch ingress host names client.ingress.path string \"/\" client.ingress.tls list [] Certificate setting for the hostname. client.podLabels object {} Additional labels for the workload pods client.priorityClass object {} client.replicas int 1 Replica count for the OpenSearch client node role. client.resources.limits.memory string \"2000Mi\" Define maximum memory allocation. client.resources.requests.memory string \"1000Mi\" Define minimum memory allocation. client.tolerations list [] client.topologySpreadConstraints list [] clusterManager.affinity object {} clusterManager.heapSize string \"256M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. clusterManager.podLabels object {} Additional labels for the workload pods clusterManager.priorityClass object {} clusterManager.replicas int 1 Replica count for the OpenSearch cluster manager node role. The minimum is 2. clusterManager.resources.limits.memory string \"700Mi\" Define maximum memory allocation. clusterManager.resources.requests.memory string \"700Mi\" Define minimum memory allocation. clusterManager.storage string \"1Gi\" Persistent volume size. 
clusterManager.storageClass object {} clusterManager.tolerations list [] clusterName string \"logging\" Default cluster name. data.affinity object {} data.heapSize string \"512M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. data.podLabels object {} Additional labels for the workload pods data.priorityClass object {} data.replicas int 1 Replica count for the OpenSearch data node role. data.resources.limits.memory string \"2000Mi\" Define maximum memory allocation. data.resources.requests.memory string \"1000Mi\" Define minimum memory allocation. data.storage string \"1Gi\" Persistent volume size. data.storageClass object {} data.tolerations list [] data_prepper.affinity object {} data_prepper.enabled bool false Switch to enable / disable DataPrepper on the cluster. data_prepper.heapSize string \"256M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. data_prepper.image string \"opensearchproject/data-prepper\" Used image name. data_prepper.imageTag string \"2.3.2\" Used component version. data_prepper.podLabels object {} Additional labels for the workload pods data_prepper.priorityClass object {} data_prepper.replicas int 1 Replica count for DataPrepper pods. data_prepper.resources.limits.memory string \"600Mi\" Define maximum memory allocation. data_prepper.resources.requests.memory string \"600Mi\" Define minimum memory allocation. data_prepper.retention.purge int 3 Days to hold the oldest slot data_prepper.retention.slotSize int 10 Slot size in GB data_prepper.tolerations list [] data_prepper.topologySpreadConstraints list [] fluentbit.affinity object {} fluentbit.caCertificateSecret string \"\" fluentbit.containersLogsHostPath string \"/var/log/pods\" fluentbit.containersRuntime string \"containerd\" Define the container runtime engine: docker or containerd. fluentbit.disableTailInput bool false fluentbit.enabled bool true fluentbit.extraEnvs object {} fluentbit.image string \"fluent/fluent-bit\" Used image name. fluentbit.imagePullPolicy string \"IfNotPresent\" Image pull policy. fluentbit.imageTag string \"2.1.10\" Used component version. fluentbit.indexPrefix string \"\" fluentbit.journalsLogsHostPath string \"/var/log\" fluentbit.mergeLog string \"On\" fluentbit.metrics.enabled bool false Switch to enable / disable FluentBit metrics for Prometheus. fluentbit.metrics.interval string \"30s\" fluentbit.metrics.namespace string \"\" fluentbit.podLabels object {} Additional labels for the workload pods fluentbit.priorityClass string \"\" fluentbit.readFromHead bool false fluentbit.resources.limits.memory string \"100Mi\" Define maximum memory allocation. fluentbit.resources.requests.memory string \"50Mi\" Define minimum memory allocation. fluentbit.tolerations[0].operator string \"Exists\" imagePullSecrets list [] Secrets containing credentials for pulling images from private registries init_container.image string \"nickytd/init-container\" Used image name. init_container.imagePullPolicy string \"IfNotPresent\" Image pull policy. init_container.imageTag string \"1.1.0\" Used component version. kafka.SSLInterConnectExp int 60 Set the expiration in days for the SSL broker interconnect communication certificate kafka.affinity object {} kafka.certManager object {\"enabled\":false,\"issuerRef\":{}} Settings for CertManager. kafka.certManager.enabled bool false Enable / disable using a CertManager instance in the cluster. 
kafka.certManager.issuerRef object {} Define the CertManager Issuer object kafka.enabled bool true Switch to enable / disable the Kafka instance on the cluster. kafka.heapSize string \"256M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. kafka.image string \"bitnami/kafka\" Used image name. kafka.imagePullPolicy string \"IfNotPresent\" Image pull policy. kafka.imageTag string \"3.4.1\" Used component version. kafka.kraftId string \"M2M5NGQ3ZDA5NWI1NDkxYz\" Set the Kafka cluster ID in KRaft mode. If not set, the default is used. For details see here. kafka.podLabels object {} Additional labels for the workload pods kafka.priorityClass object {} kafka.replicas int 1 Replica count for Kafka pods. The minimal setup with redundancy is 3 pods. kafka.resources.limits.memory string \"600Mi\" Define maximum memory allocation. kafka.resources.requests.memory string \"600Mi\" Define minimum memory allocation. kafka.storage string \"1Gi\" Persistent volume size. kafka.storageClass object {} kafka.tolerations list [] kafka.topics list [{\"config\":\"max.message.bytes=10000000,retention.bytes=-1,retention.ms=3600000\",\"name\":\"containers\"}] Kafka topics definition logstash.affinity object {} logstash.enabled bool true Switch to enable / disable the Logstash instance on the cluster. logstash.heapSize string \"256M\" Set JVM parameters -Xms and -Xmx, basically the JVM memory size. Consider this value to be half of the total pod memory. logstash.image string \"opensearchproject/logstash-oss-with-opensearch-output-plugin\" Used image name. logstash.imageTag string \"8.9.0\" Used component version. logstash.monitoring.enabled bool false Set to true here if you want to expose metrics. logstash.monitoring.image string \"nickytd/logstash-exporter\" Used image name. logstash.monitoring.imageTag string \"0.3.0\" Used component version. logstash.monitoring.metricsPort int 9198 TCP port exposing the exporter processing metrics. logstash.monitoring.serviceMonitor.enabled bool false logstash.monitoring.serviceMonitor.namespace string \"\" logstash.podLabels object {} Additional labels for the workload pods logstash.priorityClass object {} logstash.replicas int 1 Replica count for Logstash pods. logstash.resources.limits.memory string \"700Mi\" Define maximum memory allocation. logstash.resources.requests.memory string \"700Mi\" Define minimum memory allocation. logstash.tolerations list [] logstash.topologySpreadConstraints list [] opensearch.additionalJvmParams string \"-Djava.net.preferIPv4Stack=true -XshowSettings:properties -XshowSettings:vm -XshowSettings:system\" Fine tune JVM parameters passed to the component. opensearch.certManager object {\"enabled\":false,\"issuerRef\":{},\"namespace\":\"\"} Settings for CertManager. opensearch.certManager.enabled bool false Enable / disable using a CertManager instance in the cluster. opensearch.certManager.issuerRef object {} Define the CertManager Issuer object opensearch.certManager.namespace string \"\" TODO opensearch.externalOpensearch.disabled bool true opensearch.externalOpensearch.url string \"\" opensearch.image string \"opensearchproject/opensearch\" Used image name. opensearch.imagePullPolicy string \"IfNotPresent\" Image pull policy. opensearch.imageTag string \"2.10.0\" Used component version. opensearch.oidc object (see example in values file) Place your settings here if you want to authenticate via the OIDC method. 
opensearch.password string \"osadmin\" Password for the account with admin rights opensearch.retentionDays int 7 Define how long indices are held in the OpenSearch cluster. opensearch.saml object (see example in values file) Place your settings here if you want to authenticate via the SAML method. opensearch.singleNode bool false Set the deployment layout for OpenSearch opensearch.snapshot.enabled bool false opensearch.snapshot.size string \"5Gi\" opensearch.snapshot.storageClass object {} opensearch.timeNanoSeconds bool false TODO opensearch.user string \"osadmin\" User name with admin rights opensearch_dashboards.affinity object {} opensearch_dashboards.branding object {} opensearch_dashboards.developer.password string \"develop\" opensearch_dashboards.developer.user string \"developer\" opensearch_dashboards.externalOpensearchDashboards.caCertificateSecret string \"\" opensearch_dashboards.externalOpensearchDashboards.disabled bool true opensearch_dashboards.externalOpensearchDashboards.runJob bool false opensearch_dashboards.externalOpensearchDashboards.url string \"\" opensearch_dashboards.extraEnvs object {} opensearch_dashboards.image string \"opensearchproject/opensearch-dashboards\" Used image name. opensearch_dashboards.imageTag string \"2.10.0\" Used component version. opensearch_dashboards.indexPatterns list [\"containers\",\"systemd\",\"nginx\"] Set the index names for injecting index patterns into OpenSearchDashboards opensearch_dashboards.ingress.annotations object {} Annotations for the ingress object. opensearch_dashboards.ingress.className string \"\" opensearch_dashboards.ingress.enabled bool false Switch to enable / disable ingress for OpenSearch Dashboards. opensearch_dashboards.ingress.host list [] opensearch_dashboards.ingress.hosts object {} opensearch_dashboards.ingress.path string \"/\" opensearch_dashboards.ingress.tls list [] Certificate setting for the hostname. opensearch_dashboards.password string \"opensearch\" Set the password for the user with admin privileges. opensearch_dashboards.podLabels object {} Additional labels for the workload pods opensearch_dashboards.priorityClass object {} opensearch_dashboards.readonly.password string \"view\" Set the password for the user with read only privileges. opensearch_dashboards.readonly.user string \"viewer\" Set the user name with read only privileges. opensearch_dashboards.replicas int 1 Replica count for OpenSearch Dashboards pods. opensearch_dashboards.resources.limits.memory string \"500Mi\" Define maximum memory allocation. opensearch_dashboards.resources.requests.memory string \"500Mi\" Define minimum memory allocation. opensearch_dashboards.tenants list [\"Global\",\"Developer\"] Set the tenant names for importing objects via the helm chart opensearch_dashboards.tolerations list [] opensearch_dashboards.user string \"opensearch\" Set the user name with admin privileges. priorityClass string \"\" TODO storageClass object {} Default storage class used by Persistent Volume Claims. Can be overridden by workloads withNetworkPolicy bool false Default network policy for ingress and egress traffic Maintainers Name Email Url Niki Dokovski nickytd@gmail.com https://github.com/nickytd ","categories":"","description":"Helm chart values description\n","excerpt":"Helm chart values description\n","ref":"/kubernetes-logging-helm/docs/chart-values/","tags":"","title":"Helm chart values"},{"body":"A cluster ID is required for each Kafka instance to know which instances it must connect to in order to form a cluster. It is also used for preparing the storage space. 
We need to set the same cluster ID for all Kafka instances. If you omit this setting, each Kafka instance generates its own ID and refuses to form a cluster.\nThere are many ways to generate the ID, but I recommend using this chain of Bash commands:\n$ cat /proc/sys/kernel/random/uuid | tr -d '-' | base64 | cut -b 1-22 You can also use the built-in script:\n$ bin/kafka-storage.sh random-uuid Or just start one Kafka instance without the cluster ID setting and a generated ID will be written to the logs.\nSources:\nBlog sleeplessbeastie.eu\nApache Kafka Documentation\n","categories":"","description":"How to generate a cluster ID\n","excerpt":"How to generate a cluster ID\n","ref":"/kubernetes-logging-helm/docs/components/kafka/howtos/clusterid/","tags":"","title":"How to generate a cluster ID"},{"body":"Here you can find guidelines on how to work with Kafka in the context of the logging helm chart.\n","categories":"","description":"How to achieve ...\n","excerpt":"How to achieve ...\n","ref":"/kubernetes-logging-helm/docs/components/kafka/howtos/","tags":"","title":"Howtos"},{"body":"The Kubernetes logging helm chart deploys Apache Kafka as a message broker between FluentBit and Logstash to improve stability and load balancing in big deployments.\nFrom helm chart version 4.6.0 we omitted Apache ZooKeeper as a Kafka dependency. Kafka version 2.8.0 introduced KRaft, aka ZooKeeper-less mode. From Kafka version 3.3.0 KRaft is marked as production ready, so we decided to adopt it in the logging helm chart to save some resources and deployment time. Kafka in KRaft mode needs a generated cluster ID. Please check how to generate a cluster ID.\n","categories":"","description":"Configuration settings for Kafka\n","excerpt":"Configuration settings for Kafka\n","ref":"/kubernetes-logging-helm/docs/components/kafka/","tags":"","title":"Kafka"},{"body":"#TODO\n","categories":"","description":"","excerpt":"#TODO\n","ref":"/kubernetes-logging-helm/docs/faq/","tags":"","title":"Frequently asked questions"},{"body":"","categories":"","description":"","excerpt":"","ref":"/kubernetes-logging-helm/categories/","tags":"","title":"Categories"},{"body":" Welcome to the OFD logging stack project! -= A scalable containerized logging stack featuring OpenSearch for kubernetes clusters =-\nDocumentation Source Repository ","categories":"","description":"","excerpt":" Welcome to the OFD logging stack project! -= A scalable containerized …","ref":"/kubernetes-logging-helm/","tags":"","title":"HomePage"},{"body":"","categories":"","description":"","excerpt":"","ref":"/kubernetes-logging-helm/tags/","tags":"","title":"Tags"}]
\ No newline at end of file
diff --git a/tags/index.html b/tags/index.html
index 3896a8c..3891e0d 100644
--- a/tags/index.html
+++ b/tags/index.html
@@ -43,7 +43,7 @@