diff --git a/installer/README.md b/installer/README.md
new file mode 100644
index 00000000..2bff305b
--- /dev/null
+++ b/installer/README.md
@@ -0,0 +1,293 @@
+# Installer
+
+The Sysdig Installer tool is a Go binary that automates the on-premises deployment of the Sysdig platform (Sysdig Monitor and Sysdig Secure) on Kubernetes or OpenShift. Use the Installer to install or upgrade your Sysdig platform. It is recommended as a replacement for the earlier manual installation and upgrade procedures.
+
+# Installation Overview
+
+To install you will:
+
+1. Log in to quay.io
+2. Download a sysdig-chart/values.yaml file
+3. Provide a few basic parameters
+4. Launch the Installer
+
+In a successful installation, the Installer automatically completes the configuration and deployment.
+
+If your environment has access to the internet, you can perform a quickstart install. If your environment is air-gapped, you can perform a partial or full installation, as needed. Each method is described below.
+
+## Prerequisites
+
+### Requirements for Environments with Internet Access
+
+- `kubectl` or `oc` binary at a version that matches the version on the target environment
+- Network access to quay.io
+- A domain name you control
+
+### Requirements for Airgapped Environments
+
+- Edited sysdig-chart/values.yaml, with air-gap registry details updated
+- Network and authenticated access to the private registry
+
+### Access Requirements
+
+- Sysdig license key (Monitor and/or Secure)
+- Quay pull secret
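+
+The airgapped workflows below refer to logging in to quay.io with Docker. A minimal sketch, assuming the pull secret from your confirmation mail decodes to Docker registry credentials (placeholder values shown):
+
+```bash
+# Inspect the pull secret from your confirmation mail (it is base64 encoded).
+echo "<quaypullsecret-from-email>" | base64 -d
+# Log in to quay.io with the username and password it contains.
+docker login -u "<quay-username>" -p "<quay-password>" quay.io
+```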
+
+# Quickstart Install
+
+Follow these steps if your cluster (Kubernetes or OpenShift) has Internet access to pull images directly from `quay.io`:
+
+1. Copy the current version of sysdig-chart/values.yaml to your working directory:
+
+ ```bash
+ wget https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/installer/installer/values.yaml
+ ```
+2. Edit the following values:
+
+ - [`size`](docs/02-configuration_parameters.md#size): Specifies the size of the cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are: `small`, `medium` and `large`.
+   - [`quaypullsecret`](docs/02-configuration_parameters.md#quaypullsecret): quay.io credentials provided with your Sysdig purchase confirmation mail.
+ - [`storageClassProvisioner`](docs/02-configuration_parameters.md#storageClassProvisioner): The name of the storage class provisioner to use when creating the configured storageClassName parameter. Valid options include: `aws`, `gke`, `hostPath`. If you do not use one of those dynamic storage provisioners, enter `hostPath` and refer to the Advanced examples for how to configure static storage provisioning with this option.
+ - [`sysdig.license`](docs/02-configuration_parameters.md#sysdiglicense): Sysdig license key provided with your Sysdig purchase confirmation mail.
+ - [`sysdig.platformAuditTrail.enabled`](docs/02-configuration_parameters.md#sysdigplatformAuditTrailenabled): To use Sysdig Platform Audit, set this parameter to `true`.
+ - [`sysdig.secure.events.audit.config.store.ip`](docs/02-configuration_parameters.md#sysdigsecureeventsauditconfigstoreip): To see the origin IP address in Sysdig Platform Audit, set this parameter to `true`.
+ - [`sysdig.dnsName`](docs/02-configuration_parameters.md#sysdigdnsName): The domain name the Sysdig APIs will be served on.
+ - [`sysdig.collector.dnsName`](docs/02-configuration_parameters.md#sysdigcollectordnsName): (OpenShift installs only) Domain name the Sysdig collector will be served on. When not configured it defaults to whatever is configured for sysdig.dnsName.
+ - [`sysdig.ingressNetworking`](docs/02-configuration_parameters.md#sysdigingressnetworking): The networking construct used to expose the Sysdig API and collector. The options are:
+
+ - `hostnetwork`: sets the hostnetworking in the ingress daemonset and opens
+ host ports for api and collector. This does not create a Kubernetes service.
+ - `loadbalancer`: creates a service of type loadbalancer and expects that
+ your Kubernetes cluster can provision a load balancer with your cloud provider.
+ - `nodeport`: creates a service of type nodeport. The node ports can be
+ customized with:
+
+ - sysdig.ingressNetworkingInsecureApiNodePort
+ - sysdig.ingressNetworkingApiNodePort
+ - sysdig.ingressNetworkingCollectorNodePort
+
+ When not configured `sysdig.ingressNetworking` defaults to `hostnetwork`.
+
+ **NOTE**: For an airgapped install (see Airgapped Installation Options), also edit the following values:
+
+ - [`airgapped_registry_name`](docs/02-configuration_parameters.md#airgapped_registry_name): The URL of the airgapped (internal) docker registry. This URL is used for installations where the Kubernetes cluster can not pull images directly from Quay.
+   - [`airgapped_repository_prefix`](docs/02-configuration_parameters.md#airgapped_repository_prefix): This defines a custom repository prefix for airgapped_registry_name. Images are tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag
+ - [`airgapped_registry_password`](docs/02-configuration_parameters.md#airgapped_registry_password): The password for the configured airgapped_registry_username. Ignore this parameter if the registry does not require authentication.
+ - [`airgapped_registry_username`](docs/02-configuration_parameters.md#airgapped_registry_username): The username for the configured airgapped_registry_name. Ignore this parameter if the registry does not require authentication.
+
+3. Download the installer binary that matches your OS from the [installer releases page](https://github.com/draios/installer/releases).
+4. Run the Installer.
+
+ ```bash
+ ./installer deploy
+ ```
+When the Installer runs successfully, you should see output like the following at the end of your terminal:
+
+ ```
+ Congratulations, your Sysdig installation was successful!
+ You can now login to the UI at "https://awesome-domain.com:443" with:
+
+ username: "configured-username@awesome-domain.com"
+ password: "awesome-password"
+
+ Collector endpoint for connecting agents is: awesome-domain.com
+ Collector port is: 6443
+ ```
+
+5. Save the values.yaml file in a secure location; it will be used for future upgrades.
+
+The Installer also generates a directory containing all of the Kubernetes YAML manifests the Installer applied against your cluster. It is not necessary to keep this directory: the Installer can regenerate it using the exact same binary, the exact same `values.yaml`, and the `--skip-import` option.
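+
+For example, a minimal sketch of regenerating that directory, assuming the `generate` command (which accepts `--skip-import`, see the command line arguments documentation) is the one used:
+
+```bash
+# Regenerate the manifests from values.yaml without importing state from the live cluster.
+./installer generate --skip-import
+```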
+
+# Airgapped Installation Options
+
+The Installer can be used in airgapped environments, either with a multi-homed installation machine that has internet access, or in an environment with no internet access.
+
+## Airgapped with Multi-Homed Installation Machine
+
+This method uses a private docker registry. The installation machine requires network access to pull from quay.io and push images to the private registry.
+
+The Prerequisites and workflow are the same as in the Quickstart Install, with the following exceptions:
+
+- In step 2, add the air-gap registry information.
+- Make the installer push sysdig images to the airgapped registry by running:
+```bash
+./installer airgap
+```
+  That pulls all the images into the `images_archive` directory as tar files
+  and pushes them to the airgapped registry.
+
+- Run the Installer.
+
+ ```bash
+ ./installer deploy
+ ```
+
+## Full Air-Gap Install
+
+Use this method when the installation machine does not have network access to pull from quay.io, but can push images to a private docker registry. A machine with network access, called the "jump machine", pulls an image containing a self-extracting tarball, which is then copied to the installation machine.
+
+### Requirements for Jump Machine
+
+- Network access to quay.io
+- Docker
+- jq
+
+### Requirements for Installation machine
+
+- Network access to Kubernetes cluster
+- Docker
+- Network and authenticated access to the private registry
+- Edited sysdig-chart/values.yaml, with air-gap registry details updated
+
+### Workflow
+
+#### On the Jump Machine
+
+1. Follow the Docker Log In to quay.io steps under the Access Requirements section.
+2. Pull the image containing the self-extracting tar:
+ ```bash
+ docker pull quay.io/sysdig/installer:3.5.1-1-uber
+ ```
+3. Extract the tarball:
+ ```bash
+ docker create --name uber_image quay.io/sysdig/installer:3.5.1-1-uber
+ docker cp uber_image:/sysdig_installer.tar.gz .
+ docker rm uber_image
+ ```
+4. Copy the tarball to the installation machine.
+
+#### On the Installation Machine:
+
+1. Copy the current version of sysdig-chart/values.yaml to your working directory:
+ ```bash
+ wget https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/installer/installer/values.yaml
+ ```
+2. Edit the following values:
+
+ - [`size`](docs/02-configuration_parameters.md#size): Specifies the size of the cluster. Size
+ defines CPU, Memory, Disk, and Replicas. Valid options are: small, medium and
+ large
+   - [`quaypullsecret`](docs/02-configuration_parameters.md#quaypullsecret): quay.io credentials provided with
+     your Sysdig purchase confirmation mail
+   - [`storageClassProvisioner`](docs/02-configuration_parameters.md#storageClassProvisioner): The
+     name of the storage class provisioner to use when creating the configured
+     storageClassName parameter. Use hostPath or local in clusters that do not have
+     a provisioner. For setups where Persistent Volumes and Persistent Volume Claims
+     are created manually, this should be configured as none. Valid options are:
+     aws,gke,hostPath,local,none
+ - [`sysdig.license`](docs/02-configuration_parameters.md#sysdiglicense): Sysdig license key
+ provided with your Sysdig purchase confirmation mail
+ - [`sysdig.dnsName`](docs/02-configuration_parameters.md#sysdigdnsName): The domain name
+ the Sysdig APIs will be served on.
+ - [`sysdig.collector.dnsName`](docs/02-configuration_parameters.md#sysdigcollectordnsName):
+ (OpenShift installs only) Domain name the Sysdig collector will be served on.
+ When not configured it defaults to whatever is configured for sysdig.dnsName.
+ - [`sysdig.ingressNetworking`](docs/02-configuration_parameters.md#sysdigingressnetworking):
+ The networking construct used to expose the Sysdig API and collector. Options
+ are:
+ - hostnetwork: sets the hostnetworking in the ingress daemonset and opens
+ host ports for api and collector. This does not create a Kubernetes service.
+ - loadbalancer: creates a service of type loadbalancer and expects that
+ your Kubernetes cluster can provision a load balancer with your cloud provider.
+ - nodeport: creates a service of type nodeport. The node ports can be
+ customized with:
+ - sysdig.ingressNetworkingInsecureApiNodePort
+ - sysdig.ingressNetworkingApiNodePort
+ - sysdig.ingressNetworkingCollectorNodePort
+ - [`airgapped_registry_name`](docs/02-configuration_parameters.md#airgapped_registry_name):
+ The URL of the airgapped (internal) docker registry. This URL is used for
+ installations where the Kubernetes cluster can not pull images directly from
+ Quay.
+ - [`airgapped_repository_prefix`](docs/02-configuration_parameters.md#airgapped_repository_prefix):
+    This defines a custom repository prefix for airgapped_registry_name. Images are
+    tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag
+ - [`airgapped_registry_password`](docs/02-configuration_parameters.md#airgapped_registry_password):
+ The password for the configured airgapped_registry_username. Ignore this
+ parameter if the registry does not require authentication.
+ - [`airgapped_registry_username`](docs/02-configuration_parameters.md#airgapped_registry_username):
+ The username for the configured airgapped_registry_name. Ignore this
+ parameter if the registry does not require authentication.
+
+3. Copy the tarball file to the directory where you have your values.yaml file.
+4. Run:
+```bash
+./installer airgap --tar-file sysdig_installer.tar.gz
+```
+This extracts the images into the `images_archive` directory relative to where the installer was run and pushes them to the airgapped registry.
+
+5. Run the Installer:
+ ```bash
+ ./installer deploy
+ ```
+
+When the Installer runs successfully, you should see this message at the end of your terminal:
+
+ ```
+ All Pods Ready.....Continuing
+ Congratulations, your Sysdig installation was successful!
+ You can now login to the UI at "https://awesome-domain.com:443" with:
+
+ username: "configured-username@awesome-domain.com"
+ password: "awesome-password"
+ ```
+
+6. Save the values.yaml file in a secure location; it will be used for future upgrades.
+
+There will also be a generated directory containing the various Kubernetes configuration yaml files that the Installer applied against your cluster. It is not necessary to keep the generated directory, as the Installer can regenerate it consistently with the same values.yaml file.
+
+## Upgrades
+
+See [upgrade.md](docs/03-upgrade.md) for upgrades documentation.
+
+## Configuration Parameters and Examples
+
+For the full dictionary of configuration parameters, see:
+[configuration_parameters.md](docs/02-configuration_parameters.md)
+
+## Permissions
+
+### General
+* CRU on the sysdig namespace
+* CRU on StorageClass (only Read is required if the storageClass already exists)
+* CRUD on Secrets/ServiceAccount/ConfigMap/Deployment/CronJob/Job/StatefulSet/Service/DaemonSet in the sysdig namespace.
+* CRUD on role/rolebinding in sysdig namespace (if sysdig ingress controller is deployed)
+* CRU on the ingress-controller (this is the name of the object) ClusterRole/ClusterRoleBinding (if sysdig ingress controller is deployed)
+* Get Nodes (for validations).
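+
+A quick way to spot-check some of these permissions before running the Installer is `kubectl auth can-i`; a minimal sketch, assuming the target namespace is `sysdig`:
+
+```bash
+# Spot-check a few of the required permissions.
+kubectl auth can-i create deployments -n sysdig
+kubectl auth can-i create statefulsets -n sysdig
+kubectl auth can-i create storageclasses
+kubectl auth can-i get nodes
+```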
+
+### MultiAZ Enabled
+* CRU on the node-labels-to-files (this is the name of the object) ClusterRole/ClusterRoleBinding (for multi-AZ deployments)
+
+### HostPath
+* CRU on PV
+* CRU on PVC in sysdig namespace
+
+### Openshift
+* CRUD on route in the sysdig namespace
+* CRUD on openshift SCC in the sysdig namespace
+
+### Network Policies Enabled
+* CRUD on networkpolicies in the sysdig namespace (if network policies are enabled; this is an alpha feature and customers should not enable it)
+
+
+## Advanced Configuration
+
+For advanced configuration options, see [advanced.md](docs/04-advanced_configuration.md)
+
+## Example values.yaml
+
+- [openshift-with-hostpath values.yaml](examples/openshift-with-hostpath/values.yaml)
+
+## Resource Requirements
+
+This table shows the resource requirements for the various cluster sizes and deployment modes in their default configuration:
+
+|Size |Mode |CPU Cores Requests|CPU Cores Limits|Memory GB Limits|Total Disk GB|
+|----------------------------------------|------------|------------------|----------------|----------------|-------------|
+|Small |Secure Only |23 |80 |94 |947.15 |
+| |Platform |53 |119 |213 |1403.15 |
+| |Monitor Only|26 |76 |169 |1191 |
+|Medium |Secure Only |37 |92 |109 |1589 |
+| |Platform |61 |137 |222 |4244 |
+| |Monitor Only|31 |81 |182 |2616 |
+|Large |Secure Only |45 |101 |115 |3040 |
+| |Platform |111 |166 |403 |10180 |
+| |Monitor Only|91 |120 |365 |6663 |
diff --git a/installer/docs/01-command_line_arguments.md b/installer/docs/01-command_line_arguments.md
new file mode 100644
index 00000000..3b5a9dd2
--- /dev/null
+++ b/installer/docs/01-command_line_arguments.md
@@ -0,0 +1,281 @@
+
+
+
+
+
+
+# Command Line Arguments
+
+
+
+## Command: `deploy`
+
+`--skip-namespace`
+
+- Installer does not deploy the `namespace.yaml` manifest.
+  It expects the Namespace to exist and to match the value in `values.yaml`.
+  If there is a mismatch, the installation will fail, as no validation is in place to catch it.
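+
+A minimal sketch of using this flag, assuming the namespace configured in `values.yaml` is `sysdig`:
+
+```bash
+# Create the namespace yourself, then let the installer skip its namespace manifest.
+kubectl create namespace sysdig
+./installer deploy --skip-namespace
+```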
+
+`--skip-pull-secret`
+
+- The services require the pull secret to exist with the expected name (`sysdigcloud-pull-secret`) and to have access to the registry.
+
+- If the pull secret is missing, the behavior could be unpredictable. Some Pods could start if they can find the image locally and their `imagePullPolicy` is not `Always`; other Pods will fail because they cannot pull the image.
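+
+A sketch of creating the expected pull secret manually before using this flag (placeholder credentials; the `sysdig` namespace is assumed):
+
+```bash
+# Create the pull secret the services expect, then skip the installer-managed one.
+kubectl create secret docker-registry sysdigcloud-pull-secret \
+  --docker-server=quay.io \
+  --docker-username="<quay-username>" \
+  --docker-password="<quay-password>" \
+  --namespace sysdig
+./installer deploy --skip-pull-secret
+```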
+
+`--skip-serviceaccount`
+
+- The user must provide service accounts with the exact names expected:
+
+```text
+sysdig-serviceaccount.yaml: name: sysdig
+sysdig-serviceaccount.yaml: name: node-labels-to-files
+sysdig-serviceaccount.yaml: name: sysdig-with-root
+sysdig-serviceaccount.yaml: name: sysdig-elasticsearch
+sysdig-serviceaccount.yaml: name: sysdig-cassandra
+```
+
+- One implication of this is that unless the `node-labels-to-files` ServiceAccount is added,
+  rack awareness will not be available for any datastore.
+  Another implication is that if the ServiceAccount(s) are missing, the user will have to `describe`
+  the StatefulSet, because the Pods will not start at all:
+
+```text
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 2m29s statefulset-controller create Claim data-sysdigcloud-cassandra-0 Pod sysdigcloud-cassandra-0 in StatefulSet sysdigcloud-cassandra success
+ Warning FailedCreate 67s (x15 over 2m29s) statefulset-controller create Pod sysdigcloud-cassandra-0 in StatefulSet sysdigcloud-cassandra failed error: pods "sysdigcloud-cassandra-0" is forbidden: error looking up service account benedetto/sysdig-cassandra: serviceaccount "sysdig-cassandra" not found
+```
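+
+A sketch of creating these service accounts manually before using this flag (the `sysdig` namespace is assumed):
+
+```bash
+# Create the expected service accounts, then skip the installer-managed ones.
+for sa in sysdig node-labels-to-files sysdig-with-root sysdig-elasticsearch sysdig-cassandra; do
+  kubectl create serviceaccount "$sa" --namespace sysdig
+done
+./installer deploy --skip-serviceaccount
+```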
+
+`--skip-storageclass`
+
+- Installer does not apply the StorageClass manifest.
+ It expects the storageClassName specified in values.yaml to exist.
+
+`--disable-proxy`
+
+- This flag disables an existing proxy configuration. Several services can be configured to use a proxy to reach the Internet, for example `scanningv2-pkgmeta`, `certmanager`, and `eventsForwarder`.
+- If it becomes necessary to remove such a configuration, use this flag.
+- This flag also applies to `generate`, `diff` and `import`.
+
+## Command: `import`
+
+`--zookeeper-workloadname `
+
+- This is the value that will be used for the `zookeeper` StatefulSet.
+The default value is `zookeeper`; this argument must be used when the
+actual name of the StatefulSet in the cluster differs.
+
+`--kafka-workloadname `
+
+- Same as above for `kafka`
+
+`--cassandra-workloadname `
+
+- Same as above for `cassandra`
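+
+A sketch of an `import` invocation when the existing StatefulSets have non-default names (the names shown are hypothetical):
+
+```bash
+# Point the import at StatefulSets whose names differ from the defaults.
+./installer import \
+  --zookeeper-workloadname my-zookeeper \
+  --kafka-workloadname my-kafka \
+  --cassandra-workloadname my-cassandra
+```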
+
+`--use-import-v2`
+
+- This flag uses the new import logic, which imports the values from the cluster and then generates the manifests based on the imported values. Defaults to `false`, which means the old import logic is used unless this flag is provided. Import V2 is supported starting from version 6.6.0 and is expected to become the default in the future.
+
+## Command: `update-license`
+
+Prerequisite: `kubectl` version `1.20.0` or greater.
+
+This command performs the minimal changes and restarts to apply a new license.
+Based on [this page](https://docs.sysdig.com/en/docs/administration/on-premises-deployments/upgrade-an-on-premises-license/)
+
+This command performs the following:
+
+- Gets a new license from either `--license` or from `--license-file name.ext`
+
+- Applies the license to `common-config` and to the relevant Secret of the following backend services:
+
+ - `api`
+ - `collector`
+ - `worker`
+
+- If `secure` and `anchore` are enabled, it also applies the license to all Anchore services and restarts them.
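+
+A minimal sketch of applying a new license from a file (the file name is a placeholder):
+
+```bash
+# Apply a new license from a local file with minimal changes and restarts.
+./installer update-license --license-file license.key
+```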
+
+## Command: `image-list`
+
+This command prints to `stdout` (and optionally to a file) a list of all images in a generated stack.
+
+It requires a `values.yaml` and it produces a list of images based on that `values.yaml`.
+
+It does not require a live cluster, and it does not fetch any values from a live cluster, even if one is accessible.
+
+### Flags
+
+`-f ` - write the list to a file. If the file already exists, it will be overwritten.
+
+### Example
+
+```log
+./installer/out/installer-darwin-amd64 image-list
+I1118 18:48:44.643520 97065 main.go:64] Installer version
+I1118 18:48:44.646391 97065 values.go:122] using namespace sysdig from values.yaml
+I1118 18:48:44.660236 97065 imagelist.go:44] installerVersion: darwin amd64 gc
+I1118 18:48:44.660263 97065 imagelist.go:13] generating manifests
+I1118 18:48:44.722172 97065 validate.go:1255] skipping Kubernetes version validation for PostgreSQL because HA is not enabled
+I1118 18:48:44.723158 97065 generate.go:171] validation stage:generate passed
+I1118 18:49:00.625921 97065 generate.go:234] Generating kubernetes manifests
+I1118 18:49:00.642116 97065 generate.go:253] Generating kubernetes manifests for dependencies
+I1118 18:49:00.987615 97065 imagelist.go:20] extracting images from generated manifests
+I1118 18:49:01.147089 97065 imagelist.go:23] writing images list to file image_list.txt
+I1118 18:49:01.147276 97065 imagelist.go:30] found 72 images in the generated manifests
+quay.io/sysdig/activity-audit-api:6.0.0.12431
+quay.io/sysdig/certman-janitor:6.0.0.12431
+quay.io/sysdig/nginx:6.0.0.12431
+quay.io/sysdig/anchore:0.8.1-49
+quay.io/sysdig/postgres:12.10.0.0
+quay.io/sysdig/cp-kafka-6:0.2.1
+quay.io/sysdig/kube-rbac-proxy:v0.8.0
+quay.io/sysdig/secure-onboarding-api:6.0.0.12431
+quay.io/sysdig/ui-monitor-nginx:6.0.0.12431
+quay.io/sysdig/sysdig-worker:6.0.0.12431
+quay.io/sysdig/profiling-api:6.0.0.12431
+quay.io/sysdig/scanning-retention-mgr:6.0.0.12431
+quay.io/sysdig/sysdig-api:6.0.0.12431
+quay.io/sysdig/helm-renderer:1.0.677
+quay.io/sysdig/cp-zookeeper-6:0.4.0
+quay.io/sysdig/redis-sentinel-6:1.0.1
+quay.io/sysdig/activity-audit-janitor:6.0.0.12431
+quay.io/sysdig/secure-todo-worker:6.0.0.12431
+quay.io/sysdig/reporting-init:6.0.0.12431
+quay.io/sysdig/certman:6.0.0.12431
+quay.io/sysdig/sysdig-meerkat-collector:6.0.0.12431
+quay.io/sysdig/policies:6.0.0.12431
+quay.io/sysdig/profiling-worker:6.0.0.12431
+quay.io/sysdig/cloudsec-api:6.0.0.12431
+quay.io/sysdig/compliance-api:6.0.0.12431
+quay.io/sysdig/elasticsearch-tools:0.0.35
+quay.io/sysdig/events-forwarder:6.0.0.12431
+quay.io/sysdig/ingress-default-backend:1.5
+docker.io/sysdig/falco_rules_installer:latest
+quay.io/sysdig/events-api:6.0.0.12431
+quay.io/sysdig/events-forwarder-api:6.0.0.12431
+quay.io/sysdig/promqlator:0.99.0-master.2022-10-03T12-41-14Z.2f800e101b
+quay.io/sysdig/ui-secure-nginx:6.0.0.12431
+quay.io/sysdig/reporting-worker:6.0.0.12431
+quay.io/sysdig/scanning-ve-janitor:6.0.0.12431
+quay.io/sysdig/rapid-response-janitor:6.0.0.12431
+quay.io/sysdig/compliance-worker:6.0.0.12431
+quay.io/sysdig/events-janitor:6.0.0.12431
+quay.io/sysdig/events-dispatcher:6.0.0.12431
+quay.io/sysdig/haproxy-ingress:1.1.5-v0.10
+quay.io/sysdig/sysdig-meerkat-api:6.0.0.12431
+quay.io/sysdig/metadata-service-operator:1.0.1.23
+quay.io/sysdig/netsec:6.0.0.12431
+quay.io/sysdig/nats-exporter:0.9.0.2
+quay.io/sysdig/secure-prometheus:2.17.2
+quay.io/sysdig/opensearch-1:0.0.16
+quay.io/sysdig/events-gatherer:6.0.0.12431
+quay.io/sysdig/reporting-api:6.0.0.12431
+quay.io/sysdig/promchap:0.99.0-master.2022-11-18T13-46-40Z.d6b3d10f83
+quay.io/sysdig/redis-6:1.0.1
+quay.io/sysdig/ui-admin-nginx:6.0.0.12431
+quay.io/sysdig/admission-controller-api:6.0.0.12431
+quay.io/sysdig/scanning:6.0.0.12431
+quay.io/sysdig/sysdig-alert-notifier:6.0.0.12431
+quay.io/sysdig/cassandra:0.0.36
+quay.io/sysdig/metadata-service-server:1.10.63
+quay.io/sysdig/rapid-response-connector:6.0.0.12431
+quay.io/sysdig/secure-todo-api:6.0.0.12431
+quay.io/sysdig/api-docs:6.0.0.12431
+quay.io/sysdig/cloudsec-worker:6.0.0.12431
+quay.io/sysdig/sysdig-collector:6.0.0.12431
+quay.io/sysdig/events-ingestion:6.0.0.12431
+quay.io/sysdig/rsyslog:8.2102.0.4
+quay.io/sysdig/sysdig-meerkat-aggregator:6.0.0.12431
+quay.io/sysdig/secure-todo-janitor:6.0.0.12431
+quay.io/sysdig/sysdig-alert-manager:6.0.0.12431
+quay.io/sysdig/redis-exporter-1:1.0.9
+quay.io/sysdig/ui-inspect-nginx:6.0.0.12431
+```
+
+## Command: `diff`
+
+Performs a diff between the platform objects in a running Kubernetes cluster and the manifests generated from the provided values.
+
+`--write-diff`
+
+- Writes the diff to the filesystem, organized in subfolders, rather than printing it to stdout.
+
+`--out-diff-dir`
+
+- Allows you to specify a custom path for the diff files written to the filesystem. Used only if `--write-diff` is also provided. If not set, a temporary directory will be used.
+
+`--cleanup`
+
+- If set, attempts to automatically delete any previously generated diff files on the filesystem if the directory used to store the diff files already exists. Requires both `--write-diff` and `--out-diff-dir` to be set.
+
+`--secure`
+
+- Applies some filters to the produced diff in order to avoid printing sensitive information. This is useful if you need to share diffs with a user who should not have access to credentials.
+
+`--summary`
+
+- Prints a summary of the diff errors.
+
+The `diff` command also inherits options from the `generate` command. See the **generate** command section.
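+
+A sketch combining some of these flags (the output directory is a placeholder):
+
+```bash
+# Write a sanitized diff to ./sysdig-diff instead of printing it to stdout.
+./installer diff --write-diff --out-diff-dir ./sysdig-diff --secure
+```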
+
+### Sub-Command: secure-diff [DEPRECATED]
+
+Performs a diff not showing sensitive information.
+This subcommand is DEPRECATED and will be removed starting from version 6.7.0; you can achieve the same effect with the `diff` command and the `--secure` flag.
+
+## Command: `generate`
+
+`--manifest-directory`
+
+- Sets the location where the installer will write the generated manifests.
+
+`--skip-generate`
+
+- Skips generating Kubernetes manifests and attempts to diff whatever is in the manifests directory. The manifest directory can be specified using the `--manifest-directory` flag.
+
+`--skip-import`
+
+- Skips the import phase, which would try to import values from a running cluster.
+
+`--skip-validation`
+
+- Skips validation checks.
+
+`--ignore-kubeconfig-errors`
+
+- Ignores all errors from parsing the kubeconfig file.
+
+`--preserve-templates`
+
+- Preserves the directory that installer templates are extracted to; this should only be used for debugging purposes.
+
+`--k8s-server-version`
+
+- Sets the `kubernetesServerVersion` within values.
+
+`--helm-install`
+
+- The installer extracts the necessary files for an installation using the `helm` command only. By default it creates a `helm-install` directory in the directory where the installer is being executed. Contents of the directory:
+
+ - `values.hi.yaml`: the complete values generated by the `installer`
+ - `values.hi.nats.yaml` and `values.hi.nats.global.yaml`: values for the rendering of NATSJS
+ - `charts`: the Helm charts that make up the Sysdig onprem stack
+
+`--helm-install-out-dir`
+
+- Use a custom directory for the files generated by `--helm-install` instead of the default.
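+
+A sketch of extracting the Helm material to a custom directory (the directory name is a placeholder):
+
+```bash
+# Extract values and charts for a helm-only installation into ./my-helm-install.
+./installer generate --helm-install --helm-install-out-dir ./my-helm-install
+```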
+
+## Command: `list-resources`
+
+Lists all the required resources and limits for a planned deployment, based on the defaults, provided values, and overlays.
+This command expects a `generated` folder to exist. If one doesn't, you can use the `--generate-manifests` flag to create it within the scope of this command.
+
+`--generate-manifests`
+
+- Generates Kubernetes manifests before generating the list of resources. Defaults to `false`.
+
+`--node-count`
+
+- Number of nodes in the target cluster. This impacts the resource calculation, because DaemonSets get deployed on every (tolerated) node in the cluster. Defaults to `1`.
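+
+A sketch for a hypothetical five-node target cluster without a pre-existing `generated` folder:
+
+```bash
+# Generate manifests first, then list required resources, accounting for 5 nodes.
+./installer list-resources --generate-manifests --node-count 5
+```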
diff --git a/installer/docs/02-configuration_parameters.md b/installer/docs/02-configuration_parameters.md
new file mode 100644
index 00000000..04250c83
--- /dev/null
+++ b/installer/docs/02-configuration_parameters.md
@@ -0,0 +1,15652 @@
+
+
+
+
+
+
+# Configuration Parameters
+
+
+
+## **global.graphServicesEnabled**
+**Required**: `false`
+**Description**: A shared flag to enable or disable the following GraphDB services: neo4j, graph-query, graph-gatherer, sysql-api, config-service, resource-ingestion
+**Options**: true/false
+**Default**: false
+**Example**:
+
+```yaml
+global:
+ graphServicesEnabled: true
+```
+
+## **quaypullsecret**
+
+**Required**: `true`
+**Description**: quay.io credentials provided with your Sysdig purchase confirmation mail.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+quaypullsecret: Y29tZS13b3JrLWF0LXN5c2RpZwo=
+```
+
+## **schema_version**
+
+**Required**: `true`
+**Description**: Represents the schema version of the values.yaml
+configuration. Versioning follows [Semver](https://semver.org/) (Semantic
+Versioning) and maintains semver guarantees.
+**Options**:
+**Default**: `1.0.0`
+**Example**:
+
+```yaml
+schema_version: 1.0.0
+```
+
+## **size**
+
+**Required**: `true`
+**Description**: Specifies the size of the cluster. Size defines CPU, Memory,
+Disk, and Replicas.
+**Options**: `small|medium|large`
+**Default**:
+**Example**:
+
+```yaml
+size: medium
+```
+
+## **kubernetesServerVersion**
+
+**Required**: `false`
+**Description**: The Kubernetes version of the targeted cluster.
+This helps to programmatically determine which apiVersions should be used, e.g. for `Ingress`, `networking.k8s.io/v1`
+must be used with k8s version 1.22+.
+**Options**:
+**Default**: If not provided, it will be pulled during the `import` phase.
+**Example**:
+
+```yaml
+kubernetesServerVersion: v1.18.10
+```
+
+## **storageClassProvisioner**
+
+**Required**: `false`
+**Description**: The name of the [storage class provisioner](https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner)
+to use when creating the configured storageClassName parameter. Use hostPath
+or local in clusters that do not have a provisioner. For setups where
+Persistent Volumes and Persistent Volume Claims are created manually this
+should be configured as `none`. If this is not configured
+[`storageClassName`](#storageclassname) needs to be configured.
+**Options**: `aws|gke|hostPath|none`
+**Default**:
+**Example**:
+
+```yaml
+storageClassProvisioner: aws
+```
+
+## **apps**
+
+**Required**: `false`
+**Description**: Specifies the Sysdig Platform components to be installed.
+Combine multiple components by space separating them. Specify at least one
+app, for example, `monitor`.
+**Options**: `monitor|monitor secure`
+**Default**: `monitor secure`
+**Example**:
+
+```yaml
+apps: monitor secure
+```
+
+## **airgapped_registry_name**
+
+**Required**: `false`
+**Description**: The URL of the airgapped (internal) docker registry. This URL
+is used for installations where the Kubernetes cluster can not pull images
+directly from Quay. See [airgap instructions multi-homed](../README.md#airgapped-with-multi-homed-installation-machine)
+and [full airgap instructions](../README.md#full-airgap-install) for more
+details.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_name: my-awesome-domain.docker.io
+```
+
+## **airgapped_repository_prefix**
+
+**Required**: `false`
+**Description**: This defines a custom repository prefix for airgapped_registry_name.
+Images are tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag
+**Options**:
+**Default**: sysdig
+**Example**:
+
+```yaml
+# tags and pushes the image to /foo/bar/
+airgapped_repository_prefix: foo/bar
+```
+
+## **airgapped_registry_password**
+
+**Required**: `false`
+**Description**: The password for the configured
+`airgapped_registry_username`. Ignore this parameter if the registry does not
+require authentication.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_password: my-@w350m3-p@55w0rd
+```
+
+## **airgapped_registry_username**
+
+**Required**: `false`
+**Description**: The username for the configured `airgapped_registry_name`.
+Ignore this parameter if the registry does not require authentication.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_username: bob+alice
+```
+
+## **deployment**
+
+**Required**: `false`
+**Description**: The name of the Kubernetes installation.
+**Options**: `iks|kubernetes|openshift|goldman`
+**Default**: `kubernetes`
+**Example**:
+
+```yaml
+deployment: kubernetes
+```
+
+## **context**
+
+**Required**: `false`
+**Description**: Kubernetes context to use for deploying Sysdig Platform.
+If this parameter is not set or a blank value is specified, the default context is used.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+context: production
+```
+
+## **clusterDomain**
+
+**Required**: `false`
+**Description**: Domain of the kubernetes cluster.
+**Options**:
+**Default**: `cluster.local`
+**Example**:
+
+```yaml
+clusterDomain: cluster.local
+```
+
+## **namespace**
+
+**Required**: `false`
+**Description**: Kubernetes namespace to deploy Sysdig Platform to.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+namespace: sysdig
+```
+
+## **scripts**
+
+**Required**: `false`
+**Description**: Defines which scripts need to be run.
+
+- `generate`: performs templating and customization.
+- `diff`: generates diff against in-cluster configuration.
+- `deploy`: applies the generated manifests to the Kubernetes environment.
+
+These options can be combined by space separating them.
+**Options**: `generate|diff|deploy|generate diff|generate deploy|diff deploy|generate diff deploy`
+**Default**: `generate deploy`
+**Example**:
+
+```yaml
+scripts: generate diff
+```
+
+## **storageClassName**
+
+**Required**: `false`
+**Description**: The name of the preconfigured [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+If the storage class does not exist, Installer will attempt to create it using the `storageClassProvisioner` as the provisioner.
+This has no effect if `storageClassProvisioner` is configured to `none`.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+storageClassName: sysdig
+```
+
+## ~~**cloudProvider.create_loadbalancer**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: This is deprecated, prefer
+[`sysdig.ingressNetworking`](#sysdigingressnetworking) instead. When set to
+true a service of type
+[LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)
+is created.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+cloudProvider:
+ create_loadbalancer: true
+```
+
+## **cloudProvider.name**
+
+**Required**: `false`
+**Description**: The name of the cloud provider Sysdig Platform will run on.
+**Options**: `aws|gcp`
+**Default**:
+**Example**:
+
+```yaml
+cloudProvider:
+ name: aws
+```
+
+## **cloudProvider.isMultiAZ**
+
+**Required**: `false`
+**Description**: Specifies whether the underlying Kubernetes cluster is
+deployed in multiple availability zones. The parameter requires
+[`cloudProvider.name`](#cloudprovidername) to be configured.
+If enabled, all of the datastores will be deployed with `podAntiAffinity` on the zone label against other pods of the same statefulset.
+If kubernetesServerVersion > 1.19, Cassandra will be deployed with `topologySpreadConstraints` instead of `podAntiAffinity`.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+cloudProvider:
+ isMultiAZ: false
+```
+
+## **cloudProvider.region**
+
+**Required**: `false`
+**Description**: The cloud provider region the underlying Kubernetes Cluster
+runs on. This parameter is required if
+[`cloudProvider.name`](#cloudprovidername) is configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+cloudProvider:
+ region: us-east-1
+```
+
+## **elasticsearch.hostPathNodes**
+
+**Required**: `false`
+**Description**: An array of node hostnames printed out by the `kubectl get node -o name` command. ElasticSearch hostPath persistent volumes should be
+created on these nodes. The number of nodes must be at minimum whatever the
+value of
+[`sysdig.elasticsearchReplicaCount`](#sysdigelasticsearchreplicacount) is.
+This is required if configured
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+elasticsearch:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+```
+
+## **elasticsearch.hostPathMasterNodes**
+
+**Required**: `false`
+**Description**: An array of node hostnames printed out by the `kubectl get node -o name` command. ElasticSearch hostPath persistent volumes should be
+created on these nodes for Master nodes. The number of nodes must be at minimum whatever the
+value of
+[`sysdig.elasticsearchMastersReplicaCount`](#sysdigelasticsearchmastersreplicacount) is.
+This is required if configured
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath` and `dedicatedMasters` is `true` .
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+elasticsearch:
+ hostPathMasterNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+```
+
+## **elasticsearch.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Elasticsearch JVM.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+elasticsearch:
+ jvmOptions: -Xms4G -Xmx4G
+```
+
+## **elasticsearch.external**
+
+**Required**: `false`
+**Description**: If set, does not create a local Elasticsearch cluster and instead tries connecting to an external Elasticsearch cluster.
+This can be used in conjunction with [`elasticsearch.hostname`](#elasticsearchhostname)
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ external: true
+```
+
+## **elasticsearch.hostname**
+
+**Required**: `false`
+**Description**: External Elasticsearch hostname can be provided here and certificates for clients can be provided under certs/elasticsearch-tls-certs.
+**Options**:
+**Default**: 'sysdigcloud-elasticsearch'
+**Example**:
+
+```yaml
+elasticsearch:
+ external: true
+ hostname: external.elasticsearch.cluster
+```
+
+## **elasticsearch.jobs.rollNodes**
+
+**Required**: `false`
+**Description**: Safely rolls the Elasticsearch nodes, if needed, after a change in the manifests. This can potentially take several minutes per node to restart. If this is false during an upgrade from Elasticsearch to OpenSearch, a cluster restart will be performed instead, i.e. all Elasticsearch nodes will be restarted at the same time. WARNING: do not set this to true in a 5.x to 6.x upgrade scenario.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ jobs:
+ rollNodes: true
+```
+
+## **elasticsearch.jobs.toolsImageVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of the elasticsearch jobs
+**Options**:
+**Default**: 0.0.53
+**Example**:
+
+```yaml
+elasticsearch:
+ jobs:
+ toolsImageVersion: 0.0.53
+```
+
+## **elasticsearch.enableMetrics**
+
+**Required**: `false`
+**Description**:
+Allows Elasticsearch to export Prometheus metrics.
+
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ enableMetrics: true
+```
+
+## **sysdig.elasticsearchExporterVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the Elasticsearch Metrics Exporter, relevant when
+`elasticsearch.enableMetrics` is set to `true`.
+**Options**:
+**Default**: v1.2.0
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchExporterVersion: v1.2.0
+```
+
+## **elasticsearch.tlsencryption.adminUser**
+
+**Required**: `false`
+**Description**: The user bound to the ElasticSearch admin role.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ tlsencryption:
+ adminUser: admin
+```
+
+## ~~**elasticsearch.searchguard.enabled**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: Enables user authentication and TLS-encrypted data-in-transit
+with [Search Guard](https://search-guard.com/)
+If Search Guard is enabled, the Installer does the following, in the order listed:
+
+1. Checks for user-provided certificates under certs/elasticsearch-tls-certs; if present, uses them to set up the Elasticsearch (ES) cluster.
+2. Checks for existing Search Guard certificates in the provided environment to set up the ES cluster (applicable for upgrades).
+3. If neither is present, the Installer autogenerates Search Guard certificates and uses them to set up the ES cluster.
+
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ searchguard:
+ enabled: false
+```
+
+## ~~**elasticsearch.searchguard.adminUser**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: The user bound to the ElasticSearch Search Guard admin role.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ searchguard:
+ adminUser: admin
+```
+
+## **elasticsearch.snitch.extractCMD**
+
+**Required**: `false`
+**Description**: The command used to determine [elasticsearch cluster routing
+allocation awareness
+attributes](https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html).
+The command will be passed to the bash eval command and is expected to return
+a single string. For example: `cut -d- -f2 /host/etc/hostname`.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ snitch:
+ extractCMD: cut -d- -f2 /host/etc/hostname
+```
+
+## **elasticsearch.snitch.hostnameFile**
+
+**Required**: `false`
+**Description**: The name of the location to bind mount the host's
+`/etc/hostname` file to. This can be combined with
+[`elasticsearch.snitch.extractCMD`](#elasticsearchsnitchextractcmd) to
+determine cluster routing allocation associated with the node's hostname.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ snitch:
+ hostnameFile: /host/etc/hostname
+```
+
+## **hostPathCustomPaths.cassandra**
+
+**Required**: `false`
+**Description**: The directory to bind mount Cassandra pod's
+`/var/lib/cassandra` to on the host. This parameter is relevant only when
+`storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/var/lib/cassandra`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  cassandra: /sysdig/cassandra
+```
+
+## **hostPathCustomPaths.elasticsearch**
+
+**Required**: `false`
+**Description**: The directory to bind mount Elasticsearch pod's
+`/usr/share/elasticsearch` to on the host. This parameter is relevant only when
+`storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/usr/share/elasticsearch`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  elasticsearch: /sysdig/elasticsearch
+```
+
+## **hostPathCustomPaths.postgresql**
+
+**Required**: `false`
+**Description**: The directory to bind mount PostgreSQL pod's
+`/var/lib/postgresql/data/pgdata` to on the host. This parameter is relevant
+only when `storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/var/lib/postgresql/data/pgdata`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  postgresql: /sysdig/pgdata
+```
+
+## **hostPathCustomPaths.natsJs**
+
+**Required**: `false`
+**Description**: The directory to bind mount nats js pod's
+`/var/lib/natsjs` to on the host. This parameter is relevant
+only when `storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/var/lib/natsjs`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  natsJs: /sysdig/natsjs
+```
+
+## **nodeaffinityLabel.key**
+
+**Required**: `false`
+**Description**: The key of the label that is used to configure the nodes that the
+Sysdig Platform pods are expected to run on. The nodes are expected to have
+been labeled with the key.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+nodeaffinityLabel:
+ key: instancegroup
+```
+
+## **nodeaffinityLabel.value**
+
+**Required**: `false`
+**Description**: The value of the label that is used to configure the nodes
+that the Sysdig Platform pods are expected to run on. The nodes are expected
+to have been labeled with the value of
+[`nodeaffinityLabel.key`](#nodeaffinitylabelkey), and is required if
+[`nodeaffinityLabel.key`](#nodeaffinitylabelkey) is configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+nodeaffinityLabel:
+ value: sysdig
+```
+
+## **pvStorageSize.cassandra**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra, regardless of the cluster `size` used. This option *does not* apply when [`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ cassandra: 500Gi
+```
+
+## **pvStorageSize.large.cassandra**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a cluster of [`size`](#size) large. This option *only* applies if [`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 300Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ cassandra: 500Gi
+```
+
+## **pvStorageSize.large.elasticsearch**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch
+in a cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 300Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ elasticsearch: 500Gi
+```
+
+## **pvStorageSize.large.postgresql**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 60Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.medium.cassandra**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a cluster of [`size`](#size) medium. This option *only* applies if [`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 150Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ cassandra: 300Gi
+```
+
+## **pvStorageSize.medium.elasticsearch**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 100Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ elasticsearch: 300Gi
+```
+
+## **pvStorageSize.medium.postgresql**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 60Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.small.cassandra**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a cluster of [`size`](#size) small. This option *only* applies if [`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ cassandra: 100Gi
+```
+
+## **pvStorageSize.small.elasticsearch**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch
+in a cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ elasticsearch: 100Gi
+```
+
+## **pvStorageSize.small.postgresql**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.large.natsJs**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS JS HA in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 50Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ natsJs: 50Gi
+```
+
+## **pvStorageSize.medium.natsJs**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS JS HA in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 10Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ natsJs: 10Gi
+```
+
+## **pvStorageSize.small.natsJs**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS JS HA in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 50Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ natsJs: 50Gi
+```
+
+## **pvStorageSize.small.neo4j**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Neo4J HA in a
+cluster of [`size`](#size) small. This option is ignored if
+`sysdig.neo4j.neo4j.volumes.data.dynamic.requests.storage` is set.
+**Options**:
+**Default**: 10Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+  small:
+ neo4j: 10Gi
+```
+
+## **pvStorageSize.medium.neo4j**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Neo4J HA in a
+cluster of [`size`](#size) medium. This option is ignored if
+`sysdig.neo4j.neo4j.volumes.data.dynamic.requests.storage` is set.
+**Options**:
+**Default**: 50Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+    neo4j: 50Gi
+```
+
+## **pvStorageSize.large.neo4j**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Neo4J HA in a
+cluster of [`size`](#size) large. This option is ignored if
+`sysdig.neo4j.neo4j.volumes.data.dynamic.requests.storage` is set.
+**Options**:
+**Default**: 100Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ neo4j: 100Gi
+```
+
+## **sysdig.neo4j.neo4j.volumes.data.dynamic.requests.storage**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Neo4J HA.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+  neo4j:
+    neo4j:
+      volumes:
+        data:
+          dynamic:
+            requests:
+              storage: 50Gi
+```
+
+## **sysdig.anchoreVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Anchore Core.
+**Options**:
+**Default**: 0.8.1-53
+**Example**:
+
+```yaml
+sysdig:
+ anchoreVersion: 0.8.1-53
+```
+
+## **sysdig.accessKey**
+
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) accessKey to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ accessKey: my_awesome_aws_access_key
+```
+
+## **sysdig.awsRegion**
+
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) region to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ awsRegion: my_aws_region
+```
+
+## **sysdig.secretKey**
+
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) secretKey to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secretKey: my_super_secret_secret_key
+```
+
+## **sysdig.s3.enabled**
+
+**Required**: `false`
+**Description**: Specifies if storing Sysdig Captures in S3 or S3-compatible storage is enabled.
+**Options**:`true|false`
+**Default**:false
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ enabled: true
+```
+
+## **sysdig.s3.endpoint**
+
+**Required**: `false`
+**Description**: S3-compatible endpoint for the bucket. This option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured, and it is not required when using an AWS S3 bucket for captures.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ endpoint: s3.us-south.cloud-object-storage.appdomain.cloud
+```
+
+## **sysdig.s3.bucketName**
+
+**Required**: `false`
+**Description**: Name of the S3 bucket to be used for captures. This option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ bucketName: my_awesome_bucket
+```
+
+## **sysdig.s3.capturesFolder**
+
+**Required**: `false`
+**Description**: Name of the folder in the S3 bucket to be used for storing captures. This option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ capturesFolder: my_captures_folder
+```
+
+## **sysdig.cassandraVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Cassandra.
+**Options**:
+**Default**: 4.1.3-0.0.14
+**Example**:
+
+```yaml
+sysdig:
+ cassandraVersion: 4.1.3-0.0.14
+```
+
+## **sysdig.cassandraExporterVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Cassandra's Prometheus JMX exporter. Default image: `//promcat-jmx-exporter:v0.17.0-ubi`
+**Options**:
+**Default**: v0.20.0-ubi
+**Example**:
+
+```yaml
+sysdig:
+ cassandraExporterVersion: latest
+```
+
+## **sysdig.cassandra.snitch.extractCMD**
+
+**Required**: `false`
+**Description**: Shell command applied to the zone label extracted from the Kubernetes worker to extract a string to use for the `rack`
+**Options**:
+**Default**: `""`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ snitch:
+ extractCMD: "cat /node-labels/failure-domain.beta.kubernetes.io/zone || cat /node-labels/topology.kubernetes.io/zone"
+```
+
+## **sysdig.cassandra.useCassandra3** (**Deprecated**)
+
+**Required**: `false`
+**Description**: Deprecated: Use Cassandra 3 instead of Cassandra 2. Only available for fresh installs from 4.0.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ useCassandra3: false
+```
+
+## **sysdig.Cassandra3Version** (**Deprecated**)
+
+**Required**: `false`
+**Description**: Deprecated: Specify the image version of Cassandra 3.x. Ignored if `sysdig.cassandra.useCassandra3` is not set to `true`. Only supported in fresh installs from 4.0
+**Options**:
+**Default**: `3.11.11.1`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra3Version: 3.11.11.1
+```
+
+## **sysdig.cassandra.external**
+
+**Required**: `false`
+**Description**: If set, does not create a local Cassandra cluster and instead tries connecting to an external Cassandra cluster.
+This can be used in conjunction with [`sysdig.cassandra.endpoint`](#sysdigcassandraendpoint)
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ external: true
+```
+
+## **sysdig.cassandra.tolerations**
+
+**Required**: `false`
+**Description**: If set, adds tolerations to the Cassandra StatefulSet.
+**Options**:
+**Default**: `[]`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+    tolerations:
+    - key: dedicated
+      operator: Equal
+      value: cassandra
+      effect: NoSchedule
+```
+
+## **sysdig.cassandra.nodeSelector**
+
+**Required**: `false`
+**Description**: If set, adds a nodeSelector map to the Cassandra StatefulSet.
+**Options**:
+**Default**: `[]`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ nodeSelector:
+ worker-role: cassandra
+```
+
+## **sysdig.cassandra.nodeaffinityLabel**
+
+**Required**: `false`
+**Description**: The key and the value of the label that is used to configure the nodes that the
+Cassandra pods are expected to run on.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ nodeaffinityLabel:
+ key: sysdig/worker-pool
+ value: cassandra
+```
+
+## **sysdig.cassandra.endpoint**
+
+**Required**: `false`
+**Description**: External Cassandra endpoint can be provided here.
+**Options**:
+**Default**: 'sysdigcloud-cassandra'
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ external: true
+ endpoint: external.cassandra.cluster
+```
+
+## **sysdig.cassandra.secure**
+
+**Required**: `false`
+**Description**: Enables the Cassandra server and clients to use authentication.
+**Options**: `true|false`
+**Default**:`true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ secure: true
+ ssl: true
+```
+
+## **sysdig.cassandra.ssl**
+
+**Required**: `false`
+**Description**: Enables the Cassandra server and clients to communicate over SSL. Defaults to `true` for Cassandra 3 installs (available from 4.0)
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ secure: true
+ ssl: true
+```
+
+## **sysdig.cassandra.enableMetrics**
+
+**Required**: `false`
+**Description**: Enables the Cassandra exporter as a sidecar. Defaults to `false` for all Cassandra installs (available from 4.0)
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ enableMetrics: true
+```
+
+## **sysdig.cassandra.user**
+
+**Required**: `false`
+**Description**: Sets the Cassandra user. Note that the user cannot be a substring of `sysdigcloud-cassandra`.
+**Options**:
+**Default**: `sysdigcassandra`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ user: cassandrauser
+```
+
+## **sysdig.cassandra.password**
+
+**Required**: `false`
+**Description**: Sets the Cassandra password.
+**Options**:
+**Default**: Autogenerated 16 alphanumeric characters
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ user: cassandrauser
+ password: cassandrapassword
+```
+
+## **sysdig.cassandra.workloadName**
+
+**Required**: `false`
+**Description**: Name assigned to the Cassandra objects (statefulset and service).
+**Options**:
+**Default**: `sysdigcloud-cassandra`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ workloadName: sysdigcloud-cassandra
+```
+
+## **sysdig.cassandra.customOverrides**
+
+**Required**: `false`
+**Description**: The custom overrides of Cassandra's default configuration. The parameter
+expects a YAML block of key-value pairs as described in the [Cassandra
+documentation](https://docs.datastax.com/en/archived/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ customOverrides: |
+ concurrent_compactors: 6
+ read_request_timeout: 10000ms
+ write_request_timeout: 10000ms
+ request_timeout: 11000ms
+```
+
+## **sysdig.cassandra.datacenterName**
+
+**Required**: `false`
+**Description**: The datacenter name used for the [Cassandra
+Snitch](http://cassandra.apache.org/doc/latest/operating/snitch.html).
+**Options**:
+**Default**: In AWS the value is ec2Region as determined by the code
+[here](https://github.com/apache/cassandra/blob/a85afbc7a83709da8d96d92fc4154675794ca7fb/src/java/org/apache/cassandra/locator/Ec2Snitch.java#L61-L63),
+elsewhere defaults to an empty string.
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ datacenterName: my-cool-datacenter
+```
+
+## **sysdig.cassandra.jvmOptions**
+
+**Required**: `false`
+**Description**: The custom configuration for Cassandra JVM.
+**Options**:
+**Default**: `-Xms4g -Xmx4g`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ jvmOptions: -Xms6G -Xmx6G -XX:+PrintGCDateStamps -XX:+PrintGCDetails
+```
+
+## **sysdig.cassandra.hostPathNodes**
+
+**Required**: `false`
+**Description**: An array of node hostnames printed out by the `kubectl get node -o name` command. These are the nodes on which Cassandra hostPath persistent volumes should be created. The number of nodes must be at minimum whatever the value of
+[`sysdig.cassandraReplicaCount`](#sysdigcassandrareplicacount) is. This is
+required if configured [`storageClassProvisioner`](#storageclassprovisioner)
+is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+```
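+
+As a convenience, one way to list the node names expected by this parameter is sketched below; `kubectl get node -o name` prefixes each name with `node/`, so the prefix is stripped here (the hostnames in the output are placeholders):
+
+```bash
+# List node names suitable for hostPathNodes (illustrative output).
+kubectl get node -o name | sed 's|^node/||'
+# my-cool-host1.com
+# my-cool-host2.com
+# my-cool-host3.com
+```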
+
+## **sysdig.collectorPort**
+
+**Required**: `false`
+**Description**: The port to publicly serve Sysdig collector on.
+_**Note**: collectorPort is not configurable in openshift deployments. It is always 443._
+**Options**: `1024-65535`
+**Default**: `6443`
+**Example**:
+
+```yaml
+sysdig:
+ collectorPort: 7000
+```
+
+## **sysdig.certificate.customCA**
+
+**Required**: `false`
+**Description**:
+The Sysdig platform may sometimes open connections over SSL to certain external services, including:
+
+- LDAP over SSL
+- SAML over SSL
+- OpenID Connect over SSL
+- HTTPS Proxies
+- SMTPS (SMTP over SSL)
+
+If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform
+(e.g., if you maintain your own Certificate Authority), they are not trusted by default.
+
+To allow the Sysdig platform to trust these certificates, use this configuration to upload one or more
+PEM-format CA certificates. You must ensure you've uploaded all certificates in the CA approval chain to the root CA.
+
+When set, this configuration expects certificates with `.crt`, `.pem`, or `.p12` extensions under `certs/custom-java-certs/`
+at the same level as `values.yaml`.
+
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```bash
+# In the example directory structure below, certificate1.crt and certificate2.crt will be added to the trusted list.
+# certificate3.p12 will be loaded into the keystore together with its private key.
+bash-5.0$ find certs values.yaml
+certs
+certs/custom-java-certs
+certs/custom-java-certs/certificate1.crt
+certs/custom-java-certs/certificate2.crt
+certs/custom-java-certs/certificate3.p12
+certs/custom-java-certs/certificate3.p12.passwd
+values.yaml
+```
+
+```yaml
+sysdig:
+ certificate:
+ customCA: true
+```
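+
+As an illustration only (not something the Installer performs for you), a PKCS#12 bundle such as `certificate3.p12` above can be produced from an existing certificate and key with standard `openssl` tooling; the file names and the password below are placeholders, and the sibling `.passwd` file mirrors the layout shown in the listing above:
+
+```bash
+# Bundle a certificate and its private key into a PKCS#12 keystore (names are placeholders).
+openssl pkcs12 -export \
+  -in my-service.crt -inkey my-service.key \
+  -out certs/custom-java-certs/certificate3.p12 \
+  -passout pass:changeit
+
+# Store the keystore password alongside it, following the directory listing above.
+printf 'changeit' > certs/custom-java-certs/certificate3.p12.passwd
+```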
+
+## **sysdig.dnsName**
+
+**Required**: `true`
+**Description**: The domain name the Sysdig APIs will be served on.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ dnsName: my-awesome-domain-name.com
+```
+
+## **sysdig.elasticsearchVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Elasticsearch.
+**Options**:
+**Default**: 5.6.16.18
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchVersion: 5.6.16.18
+```
+
+## **sysdig.platformAuditTrail.enabled**
+
+**Required**: `false`
+**Description**: Global flag to enable Sysdig Platform Audit in all services.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformAuditTrail:
+ enabled: true
+```
+
+## **sysdig.secure.events.audit.config.store.ip**
+
+**Required**: `false`
+**Description**: Global flag to enable storing of origin IP in Sysdig Platform Audit in all services.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ audit:
+ config:
+ store:
+ ip: true
+```
+
+## **sysdig.elasticsearch6Version**
+
+**Required**: `false`
+**Description**: The docker image tag of Elasticsearch.
+**Options**:
+**Default**: 6.8.6.12
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearch6Version: 6.8.6.12
+```
+
+## **sysdig.opensearchImageName**
+
+**Required**: `false`
+**Description**: Docker image name for OpenSearch. For example, for OpenSearch 2: `opensearch-2`.
+**Options**:
+**Default**: opensearch-2
+**Example**:
+
+```yaml
+sysdig:
+ opensearchImageName: "opensearch-2"
+```
+
+## **sysdig.opensearchVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Opensearch.
+**Options**:
+**Default**: 0.3.6
+**Example**:
+
+```yaml
+sysdig:
+ opensearchVersion: 0.3.6
+```
+
+## **sysdig.haproxyVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of HAProxy ingress controller. The
+parameter is relevant only when configured `deployment` is `kubernetes`.
+**Options**:
+**Default**: v0.7-beta.7.1
+**Example**:
+
+```yaml
+sysdig:
+ haproxyVersion: v0.7-beta.7.1
+```
+
+---
+
+## **sysdig.skipIngressGeneration**
+
+**NOTE**: this is a recently added variable that bypasses the previous logic of skipping Ingress resource generation when networking was set to `external`. The goal is to generate the Ingress manifests either way, because even if a customer uses their own Ingress controller, they still need the Ingress resources. The only reason to set this parameter is if the generation of Ingress resources must be _explicitly_ avoided.
+**Required**: `false`
+**Description**: Boolean parameter which can be used to skip the generation of the ingress resources if desired.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ skipIngressGeneration: true
+```
+
+## **sysdig.ingressNetworking**
+
+**Required**: `false`
+**Description**: The networking construct used to expose the Sysdig API and collector.
+
+- `hostnetwork`: sets host networking in the ingress daemonset and opens host ports for the API and collector. This does not create a service.
+- `loadbalancer`: creates a service of type [`loadbalancer`](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer).
+- `nodeport`: creates a service of type [`nodeport`](https://kubernetes.io/docs/concepts/services-networking/#nodeport). The node ports can be customized with the parameters below (a combined sketch follows the example):
+  - [`sysdig.ingressNetworkingInsecureApiNodePort`](#sysdigingressnetworkinginsecureapinodeport)
+  - [`sysdig.ingressNetworkingApiNodePort`](#sysdigingressnetworkingapinodeport)
+  - [`sysdig.ingressNetworkingCollectorNodePort`](#sysdigingressnetworkingcollectornodeport)
+- `external`: assumes an external ingress is used and does not create ingress objects.
+
+**Options**:
+[`hostnetwork`](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces)|[`loadbalancer`](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)|[`nodeport`](https://kubernetes.io/docs/concepts/services-networking/#nodeport)| external
+
+**Default**: `hostnetwork`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworking: loadbalancer
+```
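+
+For instance, a `nodeport` setup that also pins the node ports could look like the following sketch; the port values shown are simply the documented defaults and are only illustrative:
+
+```yaml
+sysdig:
+  ingressNetworking: nodeport
+  # Illustrative NodePort choices; see the dedicated parameters below for defaults.
+  ingressNetworkingInsecureApiNodePort: 30000
+  ingressNetworkingApiNodePort: 30001
+  ingressNetworkingCollectorNodePort: 30002
+```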
+
+## **sysdig.ingressClassName**
+
+**Required**: `false`
+**Description**: Ingress class name to assign on generated `Ingress` resources. This is useful in cases where the value of [`ingressNetworking`](#sysdigingressnetworking) is set to `external` and the targeted Ingress controller has a class name which is different from the default.
+
+**Options**:
+
+**Default**: `haproxy`
+**Example**:
+
+```yaml
+sysdig:
+ ingressClassName: haproxy
+```
+
+## **sysdig.ingressNetworkingInsecureApiNodePort**
+
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig non-TLS API endpoint.
+**Options**:
+**Default**: `30000`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingInsecureApiNodePort: 30000
+```
+
+## **sysdig.ingressLoadBalancerAnnotation**
+
+**Required**: `false`
+**Description**: Annotations that will be added to the
+`haproxy-ingress-service` object. This is useful for setting annotations related to
+creating internal load balancers.
+**Options**:
+**Example**:
+
+```yaml
+sysdig:
+ ingressLoadBalancerAnnotation:
+ cloud.google.com/load-balancer-type: Internal
+```
+
+## **sysdig.ingressNetworkingApiNodePort**
+
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig TLS API endpoint.
+**Options**:
+**Default**: `30001`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingApiNodePort: 30001
+```
+
+## **sysdig.ingressNetworkingCollectorNodePort**
+
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig collector endpoint.
+**Options**:
+**Default**: `30002`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingCollectorNodePort: 30002
+```
+
+## **haproxyIngress.watchAllNamespaces**
+
+**Required**: `false`
+**Description**: When `watchAllNamespaces` is enabled, the HAProxy Ingress controller watches Ingress resources in all namespaces of the cluster. By default this setting is disabled, restricting it to the namespace configured for the Sysdig deployment.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+haproxyIngress:
+ watchAllNamespaces: true
+```
+
+## **sysdig.license**
+
+**Required**: `true`
+**Description**: Sysdig license provided with the deployment.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ license: replace_with_your_license
+```
+
+## **sysdig.monitorVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Monitor. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences**
+**Options**:
+**Default**: 3.5.1.7018
+**Example**:
+
+```yaml
+sysdig:
+ monitorVersion: 3.5.1.7018
+```
+
+## **sysdig.secureVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Sysdig Secure. If this is not
+configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 3.5.1.7018
+**Example**:
+
+```yaml
+sysdig:
+ secureVersion: 3.5.1.7018
+```
+
+## **sysdig.sysdigAPIVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Sysdig API components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 3.5.1.7018
+**Example**:
+
+```yaml
+sysdig:
+ sysdigAPIVersion: 3.5.1.7018
+```
+
+## **sysdig.sysdigCollectorVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Sysdig Collector components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 3.5.1.7018
+**Example**:
+
+```yaml
+sysdig:
+ sysdigCollectorVersion: 3.5.1.7018
+```
+
+## **sysdig.sysdigWorkerVersion**
+
+**Required**: `false`
+**Description**: The docker image tag of Sysdig Worker components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 3.5.1.7018
+**Example**:
+
+```yaml
+sysdig:
+ sysdigWorkerVersion: 3.5.1.7018
+```
+
+## **sysdig.alertingSystem.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable the new alert-manager and alert-notifier deployment.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ enabled: true
+```
+
+## **sysdig.alertingSystem.alertManager.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Sysdig Alert Manager jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManager:
+ jvmOptions: -Dsysdig.redismq.watermark.consumer.threads=20
+```
+
+## **sysdig.alertingSystem.alertManager.apiToken**
+
+**Required**: `false`
+**Description**: API token used by the Alert Manager to communicate with the sysdig API server
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManager:
+ apiToken: A_VALID_TOKEN
+```
+
+## **sysdig.alertingSystem.alertNotifier.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Sysdig Alert Notifier jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifier:
+ jvmOptions: -Dsysdig.redismq.watermark.consumer.threads=20
+```
+
+## **sysdig.alertingSystem.alertNotifier.apiToken**
+
+**Required**: `false`
+**Description**: API token used by the Alert Notifier to communicate with the sysdig API server
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifier:
+ apiToken: A_VALID_TOKEN
+```
+
+## **sysdig.alertingSystem.alertNotifierReplicaCount**
+
+**Required**: `false`
+**Description**: Number of replicas for the alertNotifier.
+**Options**:
+**Default**: small: 1, medium: 3, large: 5
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifierReplicaCount: 3
+```
+
+## **sysdig.alertingSystem.alertManagerReplicaCount**
+
+**Required**: `false`
+**Description**: Number of replicas for the alertManager.
+**Options**:
+**Default**: small: 1, medium: 3, large: 5
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManagerReplicaCount: 3
+```
+
+## **sysdig.natsExporterVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the Prometheus exporter for NATS.
+**Options**:
+**Default**: 0.1.5
+**Example**:
+
+```yaml
+sysdig:
+ natsExporterVersion: 0.1.5
+```
+
+## **sysdig.natsStreamingVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of NATS streaming.
+**Options**:
+**Default**: 0.22.0.7
+**Example**:
+
+```yaml
+sysdig:
+ natsStreamingVersion: 0.22.0.7
+```
+
+## **sysdig.natsStreamingInitVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of NATS streaming init.
+**Options**:
+**Default**: 0.22.0.7
+**Example**:
+
+```yaml
+sysdig:
+ natsStreamingInitVersion: 0.22.0.7
+```
+
+## **sysdig.natsServerVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of NATS.
+**Options**:
+**Default**: 0.1.11
+**Example**:
+
+```yaml
+sysdig:
+ natsServerVersion: 0.1.11
+```
+
+## **sysdig.natsReloaderVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of NATS Reloader.
+**Options**:
+**Default**: 0.1.4
+**Example**:
+
+```yaml
+sysdig:
+ natsReloaderVersion: 0.1.4
+```
+
+## **sysdig.natsBoxVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of NATS Box.
+**Options**:
+**Default**: 0.0.13
+**Example**:
+
+```yaml
+sysdig:
+ natsBoxVersion: 0.0.13
+```
+
+## **sysdig.openshiftUrl**
+
+**Required**: `false`
+**Description**: The OpenShift API URL along with its port number. This is
+required if the configured `deployment` is `openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftUrl: https://api.my-awesome-openshift.com:6443
+```
+
+## **sysdig.openshiftUser**
+
+**Required**: `false`
+**Description**: Username of the user to access the configured
+`sysdig.openshiftUrl`. Required if the configured `deployment` is `openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftUser: bob+alice
+```
+
+## **sysdig.openshiftPassword**
+
+**Required**: `false`
+**Description**: Password of the user (`sysdig.openshiftUser`) to access the
+configured `sysdig.openshiftUrl`. Required if the configured `deployment` is
+`openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftPassword: my-@w350m3-p@55w0rd
+```
+
+## **sysdig.postgresVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Postgres, relevant when configured `apps`
+is `monitor secure` and when `postgres.HA.enabled` is false.
+**Options**:
+**Default**: 10.6.11
+**Example**:
+
+```yaml
+sysdig:
+ postgresVersion: 10.6.11
+```
+
+## **sysdig.postgresql.rootUser**
+
+**Required**: `false`
+**Description**: Root user of the in-cluster postgresql instance.
+**Options**:
+**Default**: `postgres`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootUser: postgres
+```
+
+## **sysdig.postgresql.rootDb**
+
+**Required**: `false`
+**Description**: Root database of the in-cluster postgresql instance.
+**Options**:
+**Default**: `anchore`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootDb: anchore
+```
+
+## **sysdig.postgresql.rootPassword**
+
+**Required**: `false`
+**Description**: Password for the root user of the in-cluster postgresql instance.
+**Options**:
+**Default**: Autogenerated 16 alphanumeric characters
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootPassword: my_root_password
+```
+
+## **sysdig.postgresql.primary**
+
+**Required**: `false`
+**Description**: Services will start in postgresql mode.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+```
+
+## **sysdig.postgresql.external**
+
+**Required**: `false`
+**Description**: If set, the Installer does not create a local postgresql cluster; instead it sets up the Sysdig platform to connect to the configured `sysdig.postgresDatabases.*.host` databases.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ padvisor:
+ host: my-padvisor-db-external.com
+ sysdig:
+ host: my-sysdig-db-external.com
+```
+
+## **sysdig.postgresql.hostPathNodes**
+
+**Required**: `false`
+**Description**: An array of node hostnames, as shown in `kubectl get node -o name`, on which postgresql hostPath persistent volumes should be created. The
+number of nodes must be at minimum whatever the value of
+[`sysdig.postgresReplicaCount`](#sysdigpostgresreplicacount) is. This is
+required if configured [`storageClassProvisioner`](#storageclassprovisioner)
+is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ hostPathNodes:
+ - my-cool-host1.com
+```
+
+## **sysdig.postgresql.pgParameters**
+
+**Required**: `false`
+**Description**: A dictionary of Postgres parameter names and values to apply to the cluster.
+**Options**:
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ pgParameters:
+ max_connections: "1024"
+ shared_buffers: "110MB"
+```
+
+## **sysdig.postgresql.ha.enabled**
+
+**Required**: `false`
+**Description**: Set to `true` to deploy PostgreSQL in HA mode.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ enabled: true
+```
+
+## **sysdig.postgresql.ha.spiloVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the postgreSQL node in HA mode.
+**Options**:
+**Default**: `2.0-p7`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ spiloVersion: 2.0-p7
+```
+
+## **sysdig.postgresql.ha.operatorVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the postgreSQL operator pod that orchestrates postgreSQL nodes in HA mode.
+**Options**:
+**Default**: `v1.6.3`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ operatorVersion: v1.6.3
+```
+
+## **sysdig.postgresql.ha.exporterVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the prometheus exporter for postgreSQL in HA mode.
+**Options**:
+**Default**: `latest`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ exporterVersion: v0.3
+```
+
+## **sysdig.postgresql.ha.clusterDomain**
+
+**Required**: `false`
+**Description**: DNS domain inside the cluster. Needed by the postgres operator to select the correct Kubernetes API endpoint.
+**Options**:
+**Default**: `cluster.local`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ clusterDomain: cluster.local
+```
+
+## **sysdig.postgresql.ha.replicas**
+
+**Required**: `false`
+**Description**: Number of replicas for postgreSQL nodes in HA mode.
+**Options**:
+**Default**: `3`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ replicas: 3
+```
+
+## **sysdig.postgresql.ha.checkCRDs**
+
+**Required**: `false`
+**Description**: Checks whether the Zalando postgres operator CRDs are already present; if they are, the installation stops. If disabled, the installation continues even if the CRDs are present.
+**Options**:
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ checkCRDs: true
+```
+
+## **sysdig.postgresql.ha.enableExporter**
+
+**Required**: `false`
+**Description**: If set to `true`, a sidecar Prometheus exporter for postgreSQL in HA mode is created.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ enableExporter: true
+```
+
+## **sysdig.postgresql.ha.migrate.retryCount**
+
+**Required**: `false`
+**Description**: Maximum number of retries for the migration job from postgreSQL in single node mode to HA mode.
+**Options**:
+**Default**: `3600`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retryCount: 3600
+```
+
+## **sysdig.postgresql.ha.migrate.retrySleepSeconds**
+
+**Required**: `false`
+**Description**: Wait time between checks for the migration job from postgreSQL in single mode to HA mode.
+**Options**:
+**Default**: `10`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retrySleepSeconds: 10
+```
+
+## **sysdig.postgresql.ha.migrate.retainBackup**
+
+**Required**: `false`
+**Description**: If `true`, the statefulset and PVC of postgreSQL in single node mode are not deleted after the migration to HA mode.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retainBackup: true
+```
+
+## **sysdig.postgresql.ha.migrate.migrationJobImageVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of the migration job from postgres single node to HA mode.
+**Options**:
+**Default**: `postgres-to-postgres-ha-0.0.4`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ migrationJobImageVersion: v0.1
+```
+
+## **sysdig.postgresql.ha.customTls.enabled**
+
+**Required**: `false`
+**Description**: If set to `true`, passes the option to add custom certificates and a CA to the target postgres CRD.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+```
+
+## **sysdig.postgresql.ha.customTls.crtSecretName**
+
+**Required**: `false`
+**Description**: When customTls is enabled, the name of the k8s secret
+that contains the certificate and key used by postgres HA for SSL.
+NOTE: the certificate and key files must be called `tls.crt` and `tls.key`.
+**Options**: `secret-name`
+**Default**: `nil`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+ crtSecretName: sysdigcloud-postgres-tls-crt
+```
+
+## **sysdig.postgresql.ha.customTls.caSecretName**
+
+**Required**: `false`
+**Description**: When customTls is enabled, the name of the k8s secret
+that contains the CA certificate used by postgres HA for SSL.
+NOTE: the CA certificate file must be called `ca.crt`.
+**Options**: `secret-name`
+**Default**: `nil`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+ crtSecretName: sysdigcloud-postgres-tls-crt
+ caSecretName: sysdigcloud-postgres-tls-ca
+```
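+
+The referenced secrets are ordinary Kubernetes secrets containing the files named above. As a sketch only (the namespace, secret names, and file paths are placeholders), they could be created with `kubectl`:
+
+```bash
+# Server certificate and key; the keys must be tls.crt and tls.key as noted above.
+kubectl -n sysdigcloud create secret generic sysdigcloud-postgres-tls-crt \
+  --from-file=tls.crt=./server.crt \
+  --from-file=tls.key=./server.key
+
+# CA certificate; the key must be ca.crt as noted above.
+kubectl -n sysdigcloud create secret generic sysdigcloud-postgres-tls-ca \
+  --from-file=ca.crt=./ca.crt
+```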
+
+## **sysdig.postgresDatabases.useNonAdminUsers**
+
+**Required**: `false`
+**Description**: If set, the services will connect to `anchore` and `profiling` databases in non-root mode: this also means that `anchore` and `profiling` connection details and credentials will be fetched from `sysdigcloud-postgres-config` configmap and `sysdigcloud-postgres-secret` secret, instead of `sysdigcloud-config` configmap and `sysdigcloud-anchore` secret. It only works if `sysdig.postgresql.external` is set.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ anchore:
+ host: my-anchore-db-external.com
+ profiling:
+ host: my-profiling-db-external.com
+```
+
+## **sysdig.postgresDatabases.anchore**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `anchore` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresDatabases.useNonAdminUsers` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ anchore:
+ host: my-anchore-db-external.com
+ port: 5432
+ db: anchore_db
+ username: anchore_user
+ password: my_anchore_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.profiling**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `profiling` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresDatabases.useNonAdminUsers` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ profiling:
+ host: my-profiling-db-external.com
+ port: 5432
+ db: profiling_db
+ username: profiling_user
+ password: my_profiling_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.policies**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `policies` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ policies:
+ host: my-policies-db-external.com
+ port: 5432
+ db: policies_db
+ username: policies_user
+ password: my_policies_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.scanning**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `scanning` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ scanning:
+ host: my-scanning-db-external.com
+ port: 5432
+ db: scanning_db
+ username: scanning_user
+ password: my_scanning_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.reporting**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `reporting` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ reporting:
+ host: my-reporting-db-external.com
+ port: 5432
+ db: reporting_db
+ username: reporting_user
+ password: my_reporting_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.padvisor**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `padvisor` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ padvisor:
+ host: my-padvisor-db-external.com
+ port: 5432
+ db: padvisor_db
+ username: padvisor_user
+ password: my_padvisor_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.sysdig**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `sysdig` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ sysdig:
+ host: my-sysdig-db-external.com
+ port: 5432
+ db: sysdig_db
+ username: sysdig_user
+ password: my_sysdig_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.serviceOwnerManagement**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `serviceOwnerManagement` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ serviceOwnerManagement:
+ host: my-som-db-external.com
+ port: 5432
+ db: som_db
+ username: som_user
+ password: my_som_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.beacon**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `beacon` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured and Beacon for IBM PlatformMetrics is enabled.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ beacon:
+ host: my-beacon-db-external.com
+ port: 5432
+ db: beacon_db
+ username: beacon_user
+ password: my_beacon_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.promBeacon**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `promBeacon` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured and Generalized Beacon is enabled.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ promBeacon:
+ host: my-prom-beacon-db-external.com
+ port: 5432
+ db: prom_beacon_db
+ username: prom_beacon_user
+ password: my_prom_beacon_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.quartz**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `quartz` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ quartz:
+ host: my-quartz-db-external.com
+ port: 5432
+ db: quartz_db
+ username: quartz_user
+ password: my_quartz_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.compliance**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `compliance` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ compliance:
+ host: my-compliance-db-external.com
+ port: 5432
+ db: compliance_db
+ username: compliance_user
+ password: my_compliance_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.admissionController**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `admissionController` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ admissionController:
+ host: my-admission-controller-db-external.com
+ port: 5432
+ db: admission_controller_db
+ username: admission_controller_user
+ password: my_admission_controller_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.rapidResponse**
+
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `rapidResponse` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ rapidResponse:
+ host: my-rapid-response-db-external.com
+ port: 5432
+ db: rapid_response_db
+ username: rapid_response_user
+ password: my_rapid_response_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.proxy.defaultNoProxy**
+
+**Required**: `false`
+**Description**: Default comma-separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.proxy.enable`](#sysdigproxyenable) is configured, and
+should only be used if you intend to override the defaults provided by the
+Installer; otherwise consider [`sysdig.proxy.noProxy`](#sysdigproxynoproxy)
+instead.
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ defaultNoProxy: 127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api
+```
+
+## **sysdig.proxy.enable**
+
+**Required**: `false`
+**Description**: Determines if a [web
+proxy](https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers) should be
+used by Anchore for fetching CVE feed from
+[https://api.sysdigcloud.com/api/scanning-feeds/v1/feeds](https://api.sysdigcloud.com/api/scanning-feeds/v1/feeds) in scanningV1, by the events forwarder to forward to HTTP based targets and for the scanningv2 feeds download (remote SaaS cloud environment to get a pre-signed object-storage URL + cloud provider object-storage HTTP download).
+**Options**:
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+```
+
+## **sysdig.proxy.host**
+
+**Required**: `false`
+**Description**: The address of the web proxy; this can be a domain name or
+an IP address. This is required if [`sysdig.proxy.enable`](#sysdigproxyenable)
+is configured.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ host: my-awesome-proxy.my-awesome-domain.com
+```
+
+## **sysdig.proxy.noProxy**
+
+**Required**: `false`
+**Description**: Comma separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.proxy.enable`](#sysdigproxyenable) is configured and
+appended to the list in
+[`sysdig.proxy.defaultNoProxy`](#sysdigproxydefaultnoproxy).
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ noProxy: my-awesome.domain.com, 192.168.0.0/16
+```
+
+## **sysdig.proxy.password**
+
+**Required**: `false`
+**Description**: The password used to access the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ password: F00B@r!
+```
+
+## **sysdig.proxy.port**
+
+**Required**: `false`
+**Description**: The port the configured
+[`sysdig.proxy.host`](#sysdigproxyhost) is listening on. If this is not
+configured it defaults to 80.
+**Options**:
+**Default**: `80`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ port: 3128
+```
+
+## **sysdig.proxy.protocol**
+
+**Required**: `false`
+**Description**: The protocol to use to communicate with the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**: `http|https`
+**Default**: `http`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ protocol: https
+```
+
+## **sysdig.proxy.user**
+
+**Required**: `false`
+**Description**: The user used to access the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ user: alice
+```
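+
+Putting the proxy parameters above together, a complete authenticated proxy configuration could look like the following sketch; the host, port, and credentials are placeholders:
+
+```yaml
+sysdig:
+  proxy:
+    enable: true
+    protocol: http
+    host: my-awesome-proxy.my-awesome-domain.com
+    port: 3128
+    user: alice
+    password: F00B@r!
+    noProxy: my-awesome.domain.com, 192.168.0.0/16
+```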
+
+## **sysdig.slack.client.id**
+
+**Required**: `false`
+**Description**: Your Slack application client_id, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `awesomeclientid`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ id: 2255883163.123123123534
+```
+
+## **sysdig.slack.client.secret**
+
+**Required**: `false`
+**Description**: Your Slack application client_secret, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `awesomeclientsecret`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ secret: 8a8af18123128acd312d12d12da
+```
+
+## **sysdig.slack.client.scope**
+
+**Required**: `false`
+**Description**: Your Slack application scope, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `incoming-webhook`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ scope: incoming-webhook
+```
+
+## **sysdig.slack.client.endpoint**
+
+**Required**: `false`
+**Description**: Your Slack application authorization endpoint, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `https://slack.com/oauth/v2/authorize`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ endpoint: https://slack.com/oauth/v2/authorize
+```
+
+## **sysdig.slack.client.oauth.endpoint**
+
+**Required**: `false`
+**Description**: Your Slack application oauth endpoint, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `https://slack.com/api/oauth.v2.access`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ oauth:
+ endpoint: https://slack.com/api/oauth.v2.access
+```
+
+## **sysdig.saml.certificate.name**
+
+**Required**: `false`
+**Description**: The filename of the certificate that will be used for signing SAML requests.
+The certificate file needs to be passed via `sysdig.certificate.customCA` and the filename should match
+the certificate name used when creating the certificate.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ saml:
+ certificate:
+ name: saml-cert.p12
+```
+
+## **sysdig.saml.certificate.password**
+
+**Required**: `false`
+**Description**: The password required to read the certificate that will be used for signing SAML requests.
+If `sysdig.saml.certificate.name` is set, this parameter needs to be set as well.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ saml:
+ certificate:
+ name: saml-cert.p12
+ password: changeit
+```
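+
+Since the signing certificate is delivered through [`sysdig.certificate.customCA`](#sysdigcertificatecustomca), a complete SAML signing setup could look like the sketch below; the file name and password are placeholders, and the `.p12` file is assumed to be present under `certs/custom-java-certs/` next to `values.yaml` as described in that section:
+
+```yaml
+sysdig:
+  certificate:
+    customCA: true
+  saml:
+    certificate:
+      name: saml-cert.p12
+      password: changeit
+```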
+
+## **sysdig.inactivitySettings.trackerEnabled**
+
+**Required**: `false`
+**Description**: Enables inactivity tracker. If the user performed no actions, they will be logged out automatically.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ inactivitySettings:
+ trackerEnabled: true
+```
+
+## **sysdig.inactivitySettings.trackerTimeout**
+
+**Required**: `false`
+**Description**: Sets the timeout value (in seconds) for inactivity tracker.
+**Options**: `60-1209600`
+**Default**: `1800`
+
+**Example**:
+
+```yaml
+sysdig:
+ inactivitySettings:
+ trackerTimeout: 900
+```
+
+## **sysdig.secure.anchore.customCerts**
+
+**Required**: `false`
+**Description**:
+To allow Anchore to trust these certificates, use this configuration to upload one or more PEM-format CA certificates. You must ensure you've uploaded all certificates in the CA approval chain to the root CA.
+
+When set, this configuration expects certificates with `.crt` or `.pem` extensions under `certs/anchore-custom-certs/` at the same level as `values.yaml`.
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```bash
+# In the example directory structure below, certificate1.crt and certificate2.crt will be added to the trusted list.
+bash-5.0$ find certs values.yaml
+certs
+certs/anchore-custom-certs
+certs/anchore-custom-certs/certificate1.crt
+certs/anchore-custom-certs/certificate2.crt
+values.yaml
+```
+
+```yaml
+sysdig:
+ secure:
+ anchore:
+ customCerts: true
+```
+
+## **sysdig.secure.anchore.enableMetrics**
+
+**Required**: `false`
+**Description**:
+Allow Anchore to export prometheus metrics.
+
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ anchore:
+ enableMetrics: true
+```
+
+## ~~**sysdig.redis.deploy**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: Determines if redis should be deployed by the installer. **Deprecated: use `redisTls` instead.**
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ redis:
+ deploy: false
+```
+
+## **sysdig.redisVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis.
+**Options**:
+**Default**: 4.0.12.7
+**Example**:
+
+```yaml
+sysdig:
+ redisVersion: 4.0.12.7
+```
+
+## **sysdig.redisHaVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of HA Redis, relevant when configured
+`sysdig.redisHa` is `true`.
+**Options**:
+**Default**: 4.0.12-1.0.1
+**Example**:
+
+```yaml
+sysdig:
+ redisHaVersion: 4.0.12-1.0.1
+```
+
+## ~~**sysdig.redisHa**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: Determines if redis should run in HA mode. **Deprecated: use `redisTls` instead.**
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ redisHa: false
+```
+
+## ~~**sysdig.useRedis6**~~ (**Deprecated**)
+
+**Required**: `false`
+**Description**: Determines if redis should be installed with version 6.x. **Deprecated: use `redisTls` instead.**
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ useRedis6: false
+```
+
+## **sysdig.redis6Version**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis 6, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+sysdig:
+ redis6Version: 1.0.0
+```
+
+## **sysdig.redis6SentinelVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis Sentinel, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+sysdig:
+ redis6SentinelVersion: 1.0.0
+```
+
+## **sysdig.redis6ExporterVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis Metrics Exporter, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 1.0.9
+**Example**:
+
+```yaml
+sysdig:
+ redis6ExporterVersion: 1.0.9
+```
+
+## **sysdig.redis6ImageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis 6, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: redis-6
+**Example**:
+
+```yaml
+sysdig:
+ redis6ImageName: redis-6
+```
+
+## **sysdig.redis6SentinelImageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis Sentinel, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: redis-sentinel-6
+**Example**:
+
+```yaml
+sysdig:
+ redis6SentinelImageName: redis-sentinel-6
+```
+
+## **sysdig.redis6ExporterImageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis Metrics Exporter, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: redis-exporter-1
+**Example**:
+
+```yaml
+sysdig:
+ redis6ExporterImageName: redis-exporter-1
+```
+
+## **sysdig.useRedisTls**
+
+**Required**: `false`
+**Description**: Determines if the legacy Redis env vars (only present in Monitor) should target _Redis with TLS_ deployed by the installer
+(**will be deprecated**). The legacy Redis env vars (e.g. REDIS_ENDPOINT) will be deleted in favor of prefixed Redis env vars (e.g. IBM_CACHE_REDIS_ENDPOINT).
+**Options**: true|false
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ useRedisTls: true
+```
+
+## **redisTls.enabled**
+
+**Required**: `false`
+**Description**: Creates _Redis TLS_ secrets for the apps using it. When used in conjunction with `redisTls.deploy`, also deploys _Redis with TLS_ and _Sentinel_ support.
+**Options**: true|false
+**Default**: false
+**Example**:
+
+```yaml
+redisTls:
+ enabled: true
+```
+
+## **redisTls.deploy**
+
+**Required**: `false`
+**Description**: When `redisTls.enabled` is also `true`, installs _Redis with TLS_ and _Sentinel_ support.
+**Options**: true|false
+**Default**: true
+**Example**:
+
+```yaml
+redisTls:
+ deploy: true
+```
+
+## **redisTls.password**
+
+**Required**: `false`
+**Description**: _Redis with TLS_ password
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+redisTls:
+ password: "yourSecret!"
+```
+
+## **redisTls.ha**
+
+**Required**: `false`
+**Description**: Creates 3 _Redis with TLS_ instances in replication mode. If `false`, only one Redis and one Sentinel server will be available.
+**Options**: true|false
+**Default**: false
+**Example**:
+
+```yaml
+redisTls:
+ ha: true
+```
+
+## **redisTls.imageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: redis-6
+**Example**:
+
+```yaml
+redisTls:
+ imageName: redis-6
+```
+
+## **redisTls.version**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+redisTls:
+ version: 1.0.0
+```
+
+## **redisTls.sentinel.imageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis Sentinel, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: redis-sentinel-6
+**Example**:
+
+```yaml
+redisTls:
+ sentinel:
+ imageName: redis-sentinel-6
+```
+
+## **redisTls.sentinel.version**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis Sentinel, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+redisTls:
+ sentinel:
+ version: 1.0.0
+```
+
+## **redisTls.exporter.imageName**
+
+**Required**: `false`
+**Description**: Docker image name of Redis exporter, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: redis-exporter-1
+**Example**:
+
+```yaml
+redisTls:
+ exporter:
+ imageName: redis-exporter-1
+```
+
+## **redisTls.exporter.version**
+
+**Required**: `false`
+**Description**: Docker image tag of Redis exporter, relevant when configured
+`redisTls.enabled` and `redisTls.deploy` are `true`.
+**Options**:
+**Default**: 1.0.9
+**Example**:
+
+```yaml
+redisTls:
+ exporter:
+ version: 1.0.9
+```
+
+## **redisClientsMonitor**
+
+**Required**: `false`
+**Description**: Sets up component connections to a specified Redis for Monitor. It is possible to define which Redis to connect to: _Redis standalone/Redis HA_, _Redis with TLS_, or an _external Redis_. _Redis standalone/Redis HA_ is selected via the `useRedis6` and `redisHa` values. Currently available components:
+
+- agent
+- common
+- cache
+- distributedJobs
+- ibmCache
+- promchap
+- policiesCache
+- alerting
+- meerkat
+- metering
+- prws
+
+A Monitor service can have multiple [component connections](https://docs.google.com/spreadsheets/d/1vuNIc4tPInTbAiMwlV8xgFdjWKoTmP8AYm04hwnqHN8/edit#gid=700533343):
+
+| Instance | Component |
+| --------- | --------------------------------------------------------- |
+| agent | agent |
+| common | common |
+| monitor-1 | cache, distributedJobs, ibmCache, promchap, policiesCache |
+| monitor-2 | alerting, meerkat, metering, prws |
+
+**Options**: _Redis standalone/Redis HA_ | _Redis with TLS_ | _external Redis_
+**Default**: _Redis standalone/Redis HA_
+**Example**:
+
+If `tls` is `true`, the component `ibmCache` will use the TLS solution (`redisTls.enabled` set to `true` is required):
+
+```yaml
+redisClientsMonitor:
+ ibmCache:
+ tls: true
+```
+
+If `tls` is `false`, the component `ibmCache` continues to use the non-TLS solution. This is the default and does not need to be specified:
+
+```yaml
+redisClientsMonitor:
+ ibmCache:
+ tls: false
+```
+
+Connect the component `ibmCache` to an external Redis:
+
+```yaml
+redisClientsMonitor:
+ ibmCache:
+ endpoint: redis-service-or-host.domain
+ port: 6379
+ user: "provided-username"
+ password: "yourPassword!"
+ sentinel:
+ enabled: false
+ pubCaCrt: |
+ -----BEGIN CERTIFICATE-----
+ clear-text-certificate-with-no-base64-encoding
+ -----END CERTIFICATE-----
+```
+
+## **redisClientsSecure**
+
+**Required**: `false`
+**Description**: Sets up component connections to a specified Redis for Secure. It is possible to define which Redis to connect to: _Redis standalone/Redis HA_, _Redis with TLS_, or an _external Redis_. _Redis standalone/Redis HA_ is selected via the `useRedis6` and `redisHa` values. Currently available components:
+
+- scanning
+- forensic
+- events
+- eventsForwarder
+- rapidResponse
+- profiling
+- overview
+- compliance
+- cloudsec
+- policies
+- netsec
+- padvisor
+
+A Secure service can have multiple [component connections](https://docs.google.com/spreadsheets/d/1vuNIc4tPInTbAiMwlV8xgFdjWKoTmP8AYm04hwnqHN8/edit#gid=700533343):
+
+| Instance | Component |
+| --------- | ----------------------------------------------------------------------------------------------------- |
+| profiling | profiling |
+| secure-1 | scanning, forensic, events, rapidResponse, overview, compliance, cloudsec, policies, netsec, padvisor |
+
+**Options**: _Redis standalone/Redis HA_ | _Redis with TLS_ | _external Redis_
+**Default**: _Redis standalone/Redis HA_
+**Example**:
+
+If `tls` is `true`, the component `scanning` will use the TLS solution (`redisTls.enabled` set to `true` is required):
+
+```yaml
+redisClientsSecure:
+ scanning:
+ tls: true
+```
+
+If `tls` is `false`, the component `scanning` continues to use the non-TLS solution. This is the default and does not need to be specified:
+
+```yaml
+redisClientsSecure:
+ scanning:
+ tls: false
+```
+
+Connect the component `scanning` to an external Redis:
+
+```yaml
+redisClientsSecure:
+ scanning:
+ endpoint: redis-external-host.domain
+ user: "provided-username"
+ password: "yourPassword!"
+ tls: true
+ sentinel:
+ enabled: false
+```
+
+If a CA is needed for `scanning` to trust the connection, you must add it in the installer path `certs/redis-certs/`. Note that most cloud provider managed Redis offerings do not need this.
+
+```yaml
+certs/redis-certs/scanning_ca.crt
+```
+
+## redisExporters
+
+**Required**: `false`
+**Description**: Sets up a Redis exporter per managed cloud or external Redis instance. It is possible to define which instances to connect to:
+
+- agent
+- common
+- monitor-1
+- monitor-2
+- profiling
+- secure-1
+
+Connect managed instances for a Monitor-only setup, sharing the public certificate:
+
+```yaml
+redisExporters:
+ agent:
+ redisAddr: rediss://redis-host.domain:port
+ redisUser: provided-username
+ redisPassword: "yourPasword!"
+ redisCertificateExistingSecret: redis-exporter-common-ca-pub-cert
+ common:
+ redisAddr: rediss://redis-host.domain:port
+ redisUser: provided-username
+ redisPassword: "yourPasword!"
+ redisCertificate: |
+ -----BEGIN CERTIFICATE-----
+ clear-text-certificate-with-no-base64-encoding
+ -----END CERTIFICATE-----
+ monitor-1:
+ redisAddr: rediss://redis-host.domain:port
+ redisUser: provided-username
+ redisPassword: "yourPasword!"
+ redisCertificateExistingSecret: redis-exporter-common-ca-pub-cert
+ monitor-2:
+ redisAddr: rediss://redis-host.domain:port
+ redisUser: provided-username
+ redisPassword: "yourPasword!"
+ redisCertificateExistingSecret: redis-exporter-common-ca-pub-cert
+```
+
+## **sysdig.resources.cassandra.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.cassandra.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.cassandra.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.cassandra.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ requests:
+ memory: 8Gi
+```
+
+## **sysdig.resources.elasticsearch.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.elasticsearch.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.elasticsearch.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.elasticsearch.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.postgresql.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.postgresql.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.postgresql.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.postgresql.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.redis.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to redis pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.redis.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to redis pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.redis.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule redis pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.redis.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule redis pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.redis-sentinel.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 300m |
+| medium | 300m |
+| large | 300m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.redis-sentinel.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 50m |
+| large | 50m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.redis-sentinel.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 5Mi |
+| medium | 5Mi |
+| large | 5Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.redis-sentinel.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 20Mi |
+| medium | 20Mi |
+| large | 20Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.timescale-adapter.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.timescale-adapter.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.timescale-adapter.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.timescale-adapter.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.ingressControllerHaProxy.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerHaProxy.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ limits:
+ memory: 2Gi
+```
+
+## **sysdig.resources.ingressControllerHaProxy.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerHaProxy.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.resources.ingressControllerRsyslog.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 125m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerRsyslog.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 50Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.ingressControllerRsyslog.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 50m |
+| large | 50m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.ingressControllerRsyslog.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 20Mi |
+| medium | 20Mi |
+| large | 20Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ requests:
+ memory: 500Mi
+```
+
+## **sysdig.resources.api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.apiNginx.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.apiNginx.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.apiNginx.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.apiNginx.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ requests:
+ memory: 100Mi
+```
+
+## **sysdig.resources.worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.collector.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.collector.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.collector.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.collector.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-core.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-catalog.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-policy-engine.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-core.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-catalog.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-policy-engine.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-core.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-catalog.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-policy-engine.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-core.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-catalog.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-policy-engine.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanning-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanningalertmgr.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanningalertmgr.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanningalertmgr.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanningalertmgr.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanning-retention-mgr.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-retention-mgr.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-retention-mgr.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-retention-mgr.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.secure.scanning.retentionMgr.cronjob**
+
+**Required**: `false`
+**Description**: Cron schedule for the retention manager CronJob (the default `0 3 * * *` runs daily at 03:00)
+**Options**:
+**Default**: "0 3 \* \* \*"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ cronjob: 0 3 * * *
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyMaxExecutionDuration**
+
+**Required**: `false`
+**Description**: Max execution duration for the retention policy
+**Options**:
+**Default**: 23h
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyMaxExecutionDuration: 23h
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyGracePeriodDuration**
+
+**Required**: `false`
+**Description**: Grace period for the retention policy
+**Options**:
+**Default**: 168h
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyGracePeriodDuration: 168h
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyArtificialDelayAfterDelete**
+
+**Required**: `false`
+**Description**: Artificial delay after each image deletion
+**Options**:
+**Default**: 1s
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyArtificialDelayAfterDelete: 1s
+```
+
+## **sysdig.secure.scanning.retentionMgr.scanningGRPCEndpoint**
+
+**Required**: `false`
+**Description**: Scanning GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-api:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ scanningGRPCEndpoint: sysdigcloud-scanning-api:6000
+```
+
+## **sysdig.secure.scanning.retentionMgr.scanningDBEngine**
+
+**Required**: `false`
+**Description**: Scanning DB engine
+**Options**: postgres|inmem
+**Default**: postgres
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ scanningDBEngine: postgres
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.datePolicy**
+
+**Required**: `false`
+**Description**: Default value for the date policy
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ datePolicy: 90
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.tagsPolicy**
+
+**Required**: `false`
+**Description**: Default value for the tags policy
+**Options**:
+**Default**: 5
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ tagsPolicy: 5
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.digestsPolicy**
+
+**Required**: `false`
+**Description**: Default value for the digests policy
+**Options**:
+**Default**: 5
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ digestsPolicy: 5
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.deleteSpuriousImages**
+
+**Required**: `false`
+**Description**: Flag to enable/disable the deletion of spurious images
+**Options**:
+**Default**: "true"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ deleteSpuriousImages: "true"
+```
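+
+The retention defaults above can be set together under the same `defaultValues` block; a sketch using the documented defaults:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      retentionMgr:
+        defaultValues:
+          datePolicy: 90
+          tagsPolicy: 5
+          digestsPolicy: 5
+          deleteSpuriousImages: "true"
+```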
+
+## **sysdig.resources.scanning-ve-janitor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-ve-janitor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 256Mi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-ve-janitor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-ve-janitor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to admission-controller-api containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to admission-controller-api containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule admission-controller-api containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule admission-controller-api containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    scanningAdmissionControllerApi:
+      requests:
+        memory: 50Mi
+```
+
+## **sysdig.resources.reporting-init.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.reporting-init.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ limits:
+ memory: 256Mi
+```
+
+## **sysdig.resources.reporting-init.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ requests:
+ cpu: 100m
+```
+
+## **sysdig.resources.reporting-init.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.reporting-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1500m |
+| medium | 1500m |
+| large | 1500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ limits:
+ cpu: 1500m
+```
+
+## **sysdig.resources.reporting-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1536Mi |
+| medium | 1536Mi |
+| large | 1536Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ limits:
+ memory: 1536Mi
+```
+
+## **sysdig.resources.reporting-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 200m |
+| medium | 200m |
+| large | 200m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ requests:
+ cpu: 200m
+```
+
+## **sysdig.resources.reporting-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ requests:
+ memory: 256Mi
+```
+
+## **sysdig.resources.reporting-worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.reporting-worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 16Gi |
+| medium | 16Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ limits:
+ memory: 16Gi
+```
+
+## **sysdig.resources.reporting-worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 200m |
+| medium | 200m |
+| large | 200m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ requests:
+ cpu: 200m
+```
+
+## **sysdig.resources.reporting-worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 10Gi |
+| medium | 10Gi |
+| large | 10Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ requests:
+ memory: 10Gi
+```
+
+## **sysdig.secure.scanning.reporting.debug**
+
+**Required**: `false`
+**Description**: Enable logging at debug level
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ debug: false
+```
+
+## **sysdig.secure.scanning.reporting.apiGRPCEndpoint**
+
+**Required**: `false`
+**Description**: Reporting GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-reporting-api-grpc:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ apiGRPCEndpoint: sysdigcloud-scanning-reporting-api-grpc:6000
+```
+
+## **sysdig.secure.scanning.reporting.scanningGRPCEndpoint**
+
+**Required**: `false`
+**Description**: Scanning GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-api:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ scanningGRPCEndpoint: sysdigcloud-scanning-api:6000
+```
+
+## **sysdig.secure.scanning.reporting.storageDriver**
+
+**Required**: `false`
+**Description**: Storage kind for generated reports
+**Options**: postgres, fs, s3
+**Default**: postgres
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageDriver: postgres
+```
+
+## **sysdig.secure.scanning.reporting.storageCompression**
+
+**Required**: `false`
+**Description**: Compression format for generated reports
+**Options**: zip, gzip, none
+**Default**: zip
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageCompression: zip
+```
+
+## **sysdig.secure.scanning.reporting.storageFsDir**
+
+**Required**: `false`
+**Description**: The directory where reports will be saved (required when using the `fs` driver)
+**Options**:
+**Default**: .
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageFsDir: /reports
+```
+
+## **sysdig.secure.scanning.reporting.storagePostgresRetentionDays**
+
+**Required**: `false`
+**Description**: The number of days the generated reports will be kept for download (available when using `postgres` driver)
+**Options**:
+**Default**: 1
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storagePostgresRetentionDays: 1
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Bucket**
+
+**Required**: `false`
+**Description**: The bucket name where reports will be saved (required when using `s3` driver)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Bucket: secure-scanning-reporting
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Prefix**
+
+**Required**: `false`
+**Description**: The object name prefix (directory) used when saving reports in an S3 bucket
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Prefix: reports
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Endpoint**
+
+**Required**: `false`
+**Description**: The service endpoint of an S3-compatible storage (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Endpoint: s3.example.com
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Region**
+
+**Required**: `false`
+**Description**: The AWS region where the S3 bucket is created (required when using the `s3` driver in an AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Region: us-east-1
+```
+
+## **sysdig.secure.scanning.reporting.storageS3AccessKeyID**
+
+**Required**: `false`
+**Description**: The Access Key ID used to authenticate with an S3-compatible storage (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3AccessKeyID: AKIAIOSFODNN7EXAMPLE
+```
+
+## **sysdig.secure.scanning.reporting.storageS3SecretAccessKey**
+
+**Required**: `false`
+**Description**: The Secret Access Key used to authenticate with an S3-compatible storage (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```
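+
+When using the `s3` driver against an S3-compatible (non-AWS) backend, the storage options above are typically configured together; a sketch in which the bucket name, prefix, endpoint, and credentials are placeholders:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      reporting:
+        storageDriver: s3
+        storageS3Bucket: secure-scanning-reporting
+        storageS3Prefix: reports
+        storageS3Endpoint: s3.example.com
+        storageS3AccessKeyID: AKIAIOSFODNN7EXAMPLE
+        storageS3SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```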
+
+## **sysdig.secure.scanning.reporting.onDemandGenerationEnabled**
+
+**Required**: `true`
+**Description**: The flag to enable on-demand generation of reports globally
+**Options**: false, true
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ onDemandGenerationEnabled: true
+```
+
+## **sysdig.secure.scanning.reporting.onDemandGenerationCustomers**
+
+**Required**: `false`
+**Description**: The list of customer IDs for which on-demand report generation is enabled when it is not enabled globally
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ onDemandGenerationCustomers: "1,12,123"
+```
+
+## **sysdig.secure.scanning.reporting.workerSleepTime**
+
+**Required**: `false`
+**Description**: The sleep interval between two runs of the reporting worker
+**Options**:
+**Default**: 120s
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ workerSleepTime: 120s
+```
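+
+The reporting behaviour flags above can be combined in one block; a sketch that leaves on-demand generation disabled globally but enables it for specific customer IDs (the IDs are placeholders):
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      reporting:
+        debug: false
+        onDemandGenerationEnabled: false
+        onDemandGenerationCustomers: "1,12,123"
+        workerSleepTime: 120s
+```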
+
+## **sysdig.resources.policy-advisor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.policy-advisor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.policy-advisor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.policy-advisor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.netsec-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ requests:
+ cpu: 300m
+```
+
+## **sysdig.resources.netsec-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-ingest.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-ingest.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 6Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ limits:
+ memory: 4Gi
+```
+
+## **sysdig.resources.netsec-ingest.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.netsec-ingest.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    netsec-ingest:
+      requests:
+        memory: 2Gi
+```
+
+## **sysdig.resources.netsec-janitor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-janitor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-janitor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ requests:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-janitor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.natsJs.enabled**
+
+**Required**: `false`
+**Description**: Enable the NATS JetStream deployment
+**Options**: true|false
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ natsJs:
+ enabled: true
+```
+
+## **sysdig.natsJs.nats.fullnameOverride**
+
+**Required**: `false`
+**Description**: The name of the NATS JetStream deployment
+**Options**:
+**Default**: nats
+
+**Example**:
+
+```yaml
+sysdig:
+ natsJs:
+ nats:
+ fullnameOverride: nats
+```
+
+## **sysdig.natsJs.nats.natsbox.enabled**
+
+**Required**: `false`
+**Description**: Enable the NATS Box deployment for NATS JetStream
+**Options**: true|false
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+  natsJs:
+    nats:
+      natsbox:
+        enabled: true
+```
+
+## **sysdig.natsJs.natsTLSGenerator.enabled**
+
+**Required**: `false`
+**Description**: Enable the use of cert-manager. Creates Issuer and Certificate resources
+**Options**: true|false
+**Default**: false
+
+**Example**:
+
+```yaml
+sysdig:
+  natsJs:
+    natsTLSGenerator:
+      enabled: true
+```
+
+## **sysdig.natsJs.ha.enabled**
+
+**Required**: `false`
+**Description**: This feature ensures that there are multiple replicas of your NATS JetStream server running at any given time, providing data redundancy and mitigating the risk of server failure. It accomplishes this by utilizing cluster mode, where data is distributed across multiple nodes.
+
+If you disable High Availability, the number of JetStream replicas is set to 1. In this scenario there is no data redundancy, since only a single instance of the server is running, so any issue with that instance can lead to data loss or service disruption. NATS cluster mode is also disabled, meaning your data will not be distributed across multiple nodes, increasing the risk of data loss and making less efficient use of resources.
+
+It is recommended to keep High Availability enabled for production use of NATS JetStream.
+
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ natsJs:
+ ha:
+ enabled: false
+```
+
+## **sysdig.natsJs.hostPathNodes**
+
+**Required**: `false`
+**Description**: An array of node hostnames, as shown in `kubectl get node -o name`, that NATS JetStream hostPath persistent volumes should be created on. The number of nodes must be 3. This is required if the configured [`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+  natsJs:
+    hostPathNodes:
+      - my-cool-host1.com
+      - my-cool-host2.com
+      - my-cool-host3.com
+```
+
+## **sysdig.natsJs.nats.tolerations**
+
+**Required**: `false`
+**Description**: If set, adds tolerations to the NATS JetStream StatefulSet
+**Options**:
+**Default**: `[]`
+**Example**:
+
+```yaml
+sysdig:
+  natsJs:
+    nats:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: cassandra
+          effect: NoSchedule
+```
+
+## **sysdig.natsJs.nats.affinity**
+
+**Required**: `false`
+**Description**: If set, adds affinity rules to the NATS JetStream StatefulSet
+**Options**:
+**Default**: ``
+**Example**:
+
+```yaml
+sysdig:
+ natsJs:
+ nats:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: name
+ operator: In
+ values:
+ - blue
+```
+
+## **sysdig.resources.natsJs.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to nats pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 3 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ natsJs:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.natsJs.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to nats pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ natsJs:
+ limits:
+ memory: 2Gi
+```
+
+## **sysdig.resources.natsJs.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule nats pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ natsJs:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.natsJs.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule nats pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ natsJs:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.natsJs.nats.nats.gomemlimit**
+
+**Required**: `false`
+**Description**: The amount of memory dedicated to the Go runtime (GOMEMLIMIT). Configure it to 90% of the memory limit
+**Options**:
+**Default**:
+
+| cluster-size | default |
+| ------------ | ------- |
+| small | 900MiB |
+| medium | 1800MiB |
+| large | 2600MiB |
+
+**Example**:
+
+```yaml
+sysdig:
+ natsJs:
+ nats:
+ nats:
+ gomemlimit: 900MiB
+```
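+
+Because `gomemlimit` should track roughly 90% of the container memory limit, the two values are usually changed together; a sketch for a 2Gi limit (1800MiB is about 90% of 2Gi):
+
+```yaml
+sysdig:
+  resources:
+    natsJs:
+      limits:
+        memory: 2Gi
+  natsJs:
+    nats:
+      nats:
+        gomemlimit: 1800MiB
+```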
+
+## **sysdig.resources.activity-audit-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.activity-audit-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.activity-audit-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.activity-audit-worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.activity-audit-worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.activity-audit-worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.activity-audit-janitor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ limits:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-janitor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 200Mi |
+| medium | 200Mi |
+| large | 200Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ limits:
+ memory: 200Mi
+```
+
+## **sysdig.resources.activity-audit-janitor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-janitor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.profiling-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.profiling-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.profiling-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.profiling-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.profiling-worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.profiling-worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.profiling-worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.profiling-worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.secure-prometheus.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.secure-prometheus.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.secure-prometheus.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.secure-prometheus.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.events-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-gatherer.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.events-gatherer.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.events-gatherer.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-gatherer.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 250Mi |
+| large | 250Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.events-dispatcher.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-dispatcher.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250Mi |
+| medium | 250Mi |
+| large | 250Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ limits:
+ memory: 250Mi
+```
+
+## **sysdig.resources.events-dispatcher.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-dispatcher.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-forwarder-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-forwarder-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-forwarder-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-forwarder-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-forwarder.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-forwarder.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-forwarder.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-forwarder.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-janitor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-janitor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 200Mi |
+| medium | 200Mi |
+| large | 200Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ limits:
+ memory: 200Mi
+```
+
+## **sysdig.resources.events-janitor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-janitor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.restrictPasswordLogin**
+
+**Required**: `false`
+**Description**: Restricts password login to the super admin user only, forcing all
+non-default users to log in using the configured
+[IdP](https://en.wikipedia.org/wiki/Identity_provider).
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ restrictPasswordLogin: true
+```
+
+## **sysdig.rsyslogVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of rsyslog, relevant only when the configured
+`deployment` is `kubernetes`.
+**Options**:
+**Default**: 8.34.0.7
+**Example**:
+
+```yaml
+sysdig:
+ rsyslogVersion: 8.34.0.7
+```
+
+## **sysdig.smtpFromAddress**
+
+**Required**: `Conditional - True if smtpServer is configured`
+**Description**: Email address to use for the FROM field of sent emails.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpFromAddress: from-address@my-company.com
+```
+
+## **sysdig.smtpPassword**
+
+**Required**: `false`
+**Description**: Password for the configured `sysdig.smtpUser`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpPassword: my-@w350m3-p@55w0rd
+```
+
+## **sysdig.smtpProtocolSSL**
+
+**Required**: `false`
+**Description**: Specifies if SSL should be used when sending emails via SMTP.
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpProtocolSSL: true
+```
+
+## **sysdig.smtpProtocolTLS**
+
+**Required**: `false`
+**Description**: Specifies if TLS should be used when sending emails via SMTP
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpProtocolTLS: true
+```
+
+## **sysdig.smtpServer**
+
+**Required**: `false`
+**Description**: SMTP server to use to send emails
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpServer: smtp.gmail.com
+```
+
+## **sysdig.smtpServerPort**
+
+**Required**: `false`
+**Description**: Port of the configured `sysdig.smtpServer`
+**Options**: `1-65535`
+**Default**: `25`
+**Example**:
+
+```yaml
+sysdig:
+ smtpServerPort: 587
+```
+
+## **sysdig.smtpUser**
+
+**Required**: `false`
+**Description**: User for the configured `sysdig.smtpServer`
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpUser: bob+alice@gmail.com
+```
+
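+For reference, the SMTP options above are typically configured together; the
+following sketch simply reuses the illustrative values from the individual
+examples:
+
+```yaml
+sysdig:
+  smtpServer: smtp.gmail.com
+  smtpServerPort: 587
+  smtpUser: bob+alice@gmail.com
+  smtpPassword: my-@w350m3-p@55w0rd
+  smtpFromAddress: from-address@my-company.com
+  smtpProtocolTLS: true
+```
+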
+## **sysdig.tolerations**
+
+**Required**: `false`
+**Description**:
+[Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
+that will be added to Sysdig platform pods. This can be combined with
+[nodeaffinityLabel.key](#nodeaffinitylabelkey) and
+[nodeaffinityLabel.value](#nodeaffinitylabelvalue) to ensure that only Sysdig
+Platform pods run on particular nodes; see the combined sketch after the example below.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ tolerations:
+ - key: "dedicated"
+ operator: "Equal"
+ value: sysdig
+ effect: "NoSchedule"
+```
+
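+A minimal sketch of that combination, assuming `nodeaffinityLabel` is a
+top-level key as described in its own section and that the target nodes are
+tainted and labelled with `dedicated=sysdig`:
+
+```yaml
+nodeaffinityLabel:
+  key: dedicated
+  value: sysdig
+sysdig:
+  tolerations:
+    - key: "dedicated"
+      operator: "Equal"
+      value: sysdig
+      effect: "NoSchedule"
+```
+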
+## **sysdig.anchoreCoreReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Core replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreCoreReplicaCount: 5
+```
+
+## **sysdig.anchoreAPIReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig Anchore API replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreAPIReplicaCount: 4
+```
+
+## **sysdig.anchoreCatalogReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Catalog replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreCatalogReplicaCount: 4
+```
+
+## **sysdig.anchorePolicyEngineReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Policy Engine replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchorePolicyEngineReplicaCount: 4
+```
+
+## **sysdig.anchoreWorkerReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Worker replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreWorkerReplicaCount: 5
+```
+
+## **sysdig.apiReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig API replicas. This is a no-op for clusters of
+`size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ apiReplicaCount: 5
+```
+
+## **sysdig.cassandraReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Cassandra replicas
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 3 |
+| large | 6 |
+
+**Example**:
+
+```yaml
+sysdig:
+ cassandraReplicaCount: 20
+```
+
+## **sysdig.collectorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig collector replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ collectorReplicaCount: 7
+```
+
+## **sysdig.activityAuditWorkerReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Activity Audit Worker replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ activityAuditWorkerReplicaCount: 20
+```
+
+## **sysdig.activityAuditApiReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Activity Audit API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ activityAuditApiReplicaCount: 20
+```
+
+## **sysdig.policyAdvisorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Policy Advisor replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ policyAdvisorReplicaCount: 20
+```
+
+## **sysdig.scanningAdmissionControllerAPIReplicaCount**
+
+**Required**: `false`
+**Description**: Number of scanning Admission Controller API replicas. This is
+a no-op for clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ scanningAdmissionControllerAPIReplicaCount: 1
+```
+
+## **sysdig.netsecApiReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Netsec API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecApiReplicaCount: 1
+```
+
+## **sysdig.netsecIngestReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Netsec Ingest replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecIngestReplicaCount: 1
+```
+
+## **sysdig.netsecCommunicationShards**
+
+**Required**: `false`
+**Description**: Number of Netsec communications index shards.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 9 |
+| large | 15 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecCommunicationShards: 5
+```
+
+## **sysdig.scanningApiReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Scanning API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ scanningApiReplicaCount: 3
+```
+
+## **sysdig.elasticsearchReplicaCount**
+
+**Required**: `false`
+**Description**: Number of ElasticSearch replicas
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 3 |
+| large | 6 |
+
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchReplicaCount: 20
+```
+
+## **sysdig.elasticsearchMastersReplicaCount**
+
+**Required**: `false`
+**Description**: Number of ElasticSearch Master replicas. This is a no-op for clusters of
+`size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 3 |
+| large | 3 |
+
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchMastersReplicaCount: 3
+```
+
+## **sysdig.workerReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig worker replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ workerReplicaCount: 7
+```
+
+## **sysdig.eventsGathererReplicaCount**
+
+**Required**: `false`
+**Description**: Number of events gatherer replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsGathererReplicaCount: 2
+```
+
+## **sysdig.eventsAPIReplicaCount**
+
+**Required**: `false`
+**Description**: Number of events API replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsAPIReplicaCount: 1
+```
+
+## **sysdig.eventsDispatcherReplicaCount**
+
+**Required**: `false`
+**Description**: Number of events dispatcher replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsDispatcherReplicaCount: 1
+```
+
+## **sysdig.eventsForwarderReplicaCount**
+
+**Required**: `false`
+**Description**: Number of events forwarder replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsForwarderReplicaCount: 2
+```
+
+## **sysdig.eventsForwarderAPIReplicaCount**
+
+**Required**: `false`
+**Description**: Number of events forwarder API replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsForwarderAPIReplicaCount: 1
+```
+
+## **sysdig.admin.username**
+
+**Required**: `true`
+**Description**: Sysdig Platform super admin user. This will be used for
+initial login to the web interface. Make sure this is a valid email address
+that you can receive emails at.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ admin:
+ username: my-awesome-email@my-awesome-domain-name.com
+```
+
+## **sysdig.admin.password**
+
+**Required**: `false`
+**Description**: Sysdig Platform super admin password. This along with
+`sysdig.admin.username` will be used for initial login to the web interface.
+It is auto-generated when not explicitly configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ admin:
+ password: my-@w350m3-p@55w0rd
+```
+
+## **sysdig.api.enabled**
+
+**Required**: `false`
+**Description**: Enables Sysdig API component
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ api:
+ enabled: true
+```
+
+## **sysdig.api.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Sysdig API jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ api:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+ -XX:-UseContainerSupport -Ddraios.metrics-push.query.enabled=true
+```
+
+## **sysdig.certificate.generate**
+
+**Required**: `false`
+**Description**: Determines if Installer should generate self-signed
+certificates for the domain configured in `sysdig.dnsName`.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ generate: true
+```
+
+## **sysdig.certificate.crt**
+
+**Required**: `false`
+**Description**: Path (must be in the same directory as the `values.yaml` file
+and relative to it) to a user-provided certificate that will be used to serve
+the Sysdig API. If `sysdig.certificate.generate` is set to `false`, this has
+to be configured. The certificate common name or subject alternative name
+must match the configured `sysdig.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ crt: certs/server.crt
+```
+
+## **sysdig.certificate.key**
+
+**Required**: `false`
+**Description**: Path (must be in the same directory as the `values.yaml` file
+and relative to it) to a user-provided key that will be used to serve the
+Sysdig API. If `sysdig.certificate.generate` is set to `false`, this has to
+be configured. The key must match the certificate in
+`sysdig.certificate.crt`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ key: certs/server.key
+```
+
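+When `sysdig.certificate.generate` is set to `false`, the certificate and key
+paths are provided together; a sketch reusing the example paths above:
+
+```yaml
+sysdig:
+  certificate:
+    generate: false
+    crt: certs/server.crt
+    key: certs/server.key
+```
+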
+## **sysdig.collector.enabled**
+
+**Required**: `false`
+**Description**: Enables Sysdig Collector component
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ enabled: true
+```
+
+## **sysdig.collector.dnsName**
+
+**Required**: `false`
+**Description**: Domain name the Sysdig collector will be served on, when not
+configured it defaults to whatever is configured for `sysdig.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ dnsName: collector.my-awesome-domain-name.com
+```
+
+## **sysdig.collector.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Sysdig collector jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+```
+
+## **sysdig.collector.certificate.generate**
+
+**Required**: `false`
+**Description**: This determines if Installer should generate self-signed
+certificates for the domain configured in `sysdig.collector.dnsName`.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ generate: true
+```
+
+## **sysdig.collector.certificate.crt**
+
+**Required**: `false`
+**Description**: Path (must be in the same directory as the `values.yaml` file
+and relative to it) to a user-provided certificate that will be used to serve
+the Sysdig collector. If `sysdig.collector.certificate.generate` is set to
+`false`, this has to be configured. The certificate common name or subject
+alternative name must match the configured `sysdig.collector.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ crt: certs/collector.crt
+```
+
+## **sysdig.collector.certificate.key**
+
+**Required**: `false`
+**Description**: Path (must be in the same directory as the `values.yaml` file
+and relative to it) to a user-provided key that will be used to serve the
+Sysdig collector. If `sysdig.collector.certificate.generate` is set to
+`false`, this has to be configured. The key must match the certificate
+in `sysdig.collector.certificate.crt`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ key: certs/collector.key
+```
+
+## **sysdig.worker.enabled**
+
+**Required**: `false`
+**Description**: Enables Sysdig Worker component
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ worker:
+ enabled: true
+```
+
+## **sysdig.worker.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Sysdig worker jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ worker:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+```
+
+## **sysdig.secure.eventsForwarder.enabledIntegrations**
+
+**Required**: `false`
+**Description**: List of enabled integrations, e.g. "MCM,QRADAR"
+**Options**:
+**Default**: ""
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ enabledIntegrations: "MCM,QRADAR"
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.maxDurationBeforeDisconnection**
+
+**Required**: `false`
+**Description**: Maximum duration after the last ping from an Admission Controller (AC) before it is considered
+disconnected. It cannot be greater than 30m. See also `pingTTLDuration`.
+**Options**:
+**Default**: 10m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ maxDurationBeforeDisconnection: 20m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.confTTLDuration**
+
+**Required**: `false`
+**Description**: TTL of the cache for the cluster configuration. It should be
+used by the AC as the polling interval to retrieve the updated cluster configuration
+from the API. It cannot be greater than 30m.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ confTTLDuration: 10m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.pingTTLDuration**
+
+**Required**: `false`
+**Description**: TTL of an AC ping. It should be used by the AC as the polling
+interval to perform a HEAD request on the ping endpoint to signal that it is still alive and
+connected. It cannot be greater than 30m, and it cannot be greater than
+`maxDurationBeforeDisconnection`.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ pingTTLDuration: 8m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.clusterConfCacheMaxDuration**
+
+**Required**: `false`
+**Description**: Maximum duration of the cluster configuration cache. The API returns
+this value as max-age in seconds, and the frontend uses it for caching the cluster
+configuration. The frontend also requests a new cluster configuration using this value
+as the time interval. It cannot be greater than 30m.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ clusterConfCacheMaxDuration: 9m
+```
+
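+These four durations interact: none may exceed 30m, and `pingTTLDuration` must
+not exceed `maxDurationBeforeDisconnection`. An illustrative combined
+configuration that respects those constraints:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      admissionControllerAPI:
+        maxDurationBeforeDisconnection: 20m
+        pingTTLDuration: 8m
+        confTTLDuration: 10m
+        clusterConfCacheMaxDuration: 9m
+```
+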
+## **sysdig.scanningAnalysiscollectorConcurrentUploads**
+
+**Required**: `false`
+**Description**: Number of concurrent uploads for Scanning Analysis Collector
+**Options**:
+**Default**: "5"
+**Example**:
+
+```yaml
+sysdig:
+ scanningAnalysiscollectorConcurrentUploads: 5
+```
+
+## **sysdig.scanningAlertMgrForceAutoScan**
+
+**Required**: `false`
+**Description**: Enable the runtime image auto-scan feature. Note that the Node Image Analyzer (NIA) is preferable for a more distributed way of scanning runtime images.
+**Options**:
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ scanningAlertMgrForceAutoScan: false
+```
+
+## **sysdig.secure.scanning.veJanitor.cronjob**
+
+**Required**: `false`
+**Description**: Cronjob schedule
+**Options**:
+**Default**: "0 0 \* \* \*"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ veJanitor:
+ cronjob: "5 0 * * *"
+```
+
+## **sysdig.secure.scanning.veJanitor.anchoreDBsslmode**
+
+**Required**: `false`
+**Description**: Anchore DB SSL mode.
+**Options**:
+**Default**: "disable"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ veJanitor:
+ anchoreDBsslmode: "disable"
+```
+
+## **sysdig.secure.scanning.veJanitor.scanningDbEngine**
+
+**Required**: `false`
+**Description**: Which scanning database engine to use.
+**Options**: postgres
+**Default**: postgres
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ veJanitor:
+ scanningDbEngine: postgres
+```
+
+## **sysdig.metadataService.enabled**
+
+**Required**: `false`
+**Description**: Whether to enable metadata-service or not
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ metadataService:
+ enabled: true
+```
+
+## **sysdig.metadataService.operatorEnabled**
+
+**Required**: `false`
+**Description**: Whether to enable the metadata-service-operator. This controls the HA capabilities of the Metadata Service, but it requires several Kubernetes permissions in the cluster.
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ metadataService:
+ operatorEnabled: true
+```
+
+## **sysdig.resources.metadataService.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.metadataService.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.metadataService.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.metadataService.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.mdsDeploymentReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig MetadataService Deployment replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+  mdsDeploymentReplicaCount: 2
+```
+
+## **sysdig.mdsOperatorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig metadataService operator replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ mdsOperatorReplicaCount: 2
+```
+
+## **sysdig.mdsPodReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig MetadataService pods. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+  mdsPodReplicaCount: 2
+```
+
+## **sysdig.mdsOperatorVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of metadataService, relevant when `sysdig.metadataService.operatorEnabled` is `true`.
+**Options**:
+**Default**: 1.0.1.27
+**Example**:
+
+```yaml
+sysdig:
+ mdsOperatorVersion: 1.0.1.27
+```
+
+## **sysdig.artifactDeployerTag**
+
+**Required**: `false`
+**Description**: Docker image tag for `artifactDeployer`, default is `latest`.
+**Options**:
+**Default**: latest
+**Example**:
+
+```yaml
+sysdig:
+ artifactDeployerTag: latest
+```
+
+## **sysdig.rulesDeployerTag**
+
+**Required**: `false`
+**Description**: Docker image tag for `rulesDeployer`, default is `latest`.
+**Options**:
+**Default**: latest
+**Example**:
+
+```yaml
+sysdig:
+ rulesDeployerTag: latest
+```
+
+## **sysdig.mdsServerVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of metadataServiceServer, relevant when `sysdig.metadataService.enabled` is `true`.
+**Options**:
+**Default**: 1.10.250-vf2bcc4a
+**Example**:
+
+```yaml
+sysdig:
+ mdsServerVersion: 1.10.250-vf2bcc4a
+```
+
+## **sysdig.helmRenderer.enabled**
+
+**Required**: `false`
+**Description**: Whether to enable helm-renderer or not
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ helmRenderer:
+ enabled: true
+```
+
+## **sysdig.resources.helmRenderer.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.helmRenderer.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.helmRenderer.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.helmRenderer.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512Mi |
+| medium | 512Mi |
+| large | 512Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.helmRendererReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig helmRenderer replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ helmRendererReplicaCount: 1
+```
+
+## **sysdig.helmRendererVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of helmRenderer, relevant when `sysdig.helmRenderer.enabled` is `true`.
+**Options**:
+**Default**: 1.0.296
+**Example**:
+
+```yaml
+sysdig:
+ helmRendererVersion: 1.0.296
+```
+
+## **sysdig.secure.activityAudit.enabled**
+
+**Required**: `false`
+**Description**: Enable activity audit for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ activityAudit:
+ enabled: true
+```
+
+## **sysdig.secure.activityAudit.janitor.retentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Activity Audit data.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ activityAudit:
+ janitor:
+ retentionDays: 90
+```
+
+## **sysdig.secure.events.janitor.policiesRetentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Policy Events.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ janitor:
+ policiesRetentionDays: 90
+```
+
+## **sysdig.secure.events.janitor.scanningRetentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Scanning Events.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ janitor:
+ scanningRetentionDays: 90
+```
+
+## **sysdig.secure.events.janitor.benchmarksRetentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Benchmarks Events.
+**Options**:
+**Default**: 365
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ janitor:
+ benchmarksRetentionDays: 365
+```
+
+## **sysdig.secure.events.janitor.complianceRetentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Compliance Events.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ janitor:
+ complianceRetentionDays: 90
+```
+
+## **sysdig.secure.events.janitor.profilingDetectionRetentionDays**
+
+**Required**: `false`
+**Description**: Retention period for Profiling-Detection Events.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ janitor:
+ profilingDetectionRetentionDays: 90
+```
+
+## **sysdig.secure.anchore.enabled**
+
+**Required**: `false`
+**Description**: Enable anchore for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ anchore:
+ enabled: true
+```
+
+## **sysdig.secure.compliance.enabled**
+
+**Required**: `false`
+**Description**: Enable compliance for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ compliance:
+ enabled: true
+```
+
+## **sysdig.secure.compliance.benchmarks.readFromCompIndex**
+
+**Required**: `false`
+**Description**: Fetch benchmarks reports from Compliance v2 Index.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ compliance:
+ benchmarks:
+ readFromCompIndex: true
+```
+
+## **sysdig.secure.compliance.benchmarks.writeToCompIndex**
+
+**Required**: `false`
+**Description**: Write benchmarks events to the new Compliance Index for Compliance v2. The current Benchmarks index will be deprecated soon.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ compliance:
+ benchmarks:
+ writeToCompIndex: false
+```
+
+## **sysdig.secure.netsec.enabled**
+
+**Required**: `false`
+**Description**: Enable netsec for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ netsec:
+ enabled: true
+```
+
+## **sysdig.secure.padvisor.enabled**
+
+**Required**: `false`
+**Description**: Enable policy advisor for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ padvisor:
+ enabled: false
+```
+
+## **sysdig.secure.profiling.enabled**
+
+**Required**: `false`
+**Description**: Enable profiling for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ profiling:
+ enabled: true
+```
+
+## **sysdig.secure.scanning.reporting.enabled**
+
+**Required**: `false`
+**Description**: Enable reporting for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ enabled: true
+```
+
+## **sysdig.secure.scanning.enabled**
+
+**Required**: `false`
+**Description**: Enable scanning for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ enabled: true
+```
+
+## **sysdig.secure.events.enabled**
+
+**Required**: `false`
+**Description**: Enable events for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ enabled: true
+```
+
+## **sysdig.secure.eventsForwarder.enabled**
+
+**Required**: `false`
+**Description**: Enable events forwarder for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ enabled: true
+```
+
+## **sysdig.resources.rapid-response-connector.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.rapid-response-connector.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.rapid-response-connector.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.rapid-response-connector.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.rapidResponseConnectorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Sysdig rapid-response-connector replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ rapidResponseConnectorReplicaCount: 1
+```
+
+## **sysdig.secure.rapidResponse.enabled**
+
+**Required**: `false`
+**Description**: Whether to deploy rapid response or not.
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ enabled: false
+```
+
+## **sysdig.secure.rapidResponse.validationCodeLength**
+
+**Required**: `false`
+**Description**: Length of the MFA validation code sent via email.
+**Options**:
+**Default**: 6
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ validationCodeLength: 8
+```
+
+## **sysdig.secure.rapidResponse.validationCodeSecondsDuration**
+
+**Required**: `false`
+**Description**: Duration in seconds of the MFA validation code sent via email.
+**Options**:
+**Default**: 180
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ validationCodeSecondsDuration: 8
+```
+
+## **sysdig.secure.rapidResponse.sessionTotalSecondsTTL**
+
+**Required**: `false`
+**Description**: Global duration of session in seconds.
+**Options**:
+**Default**: 7200
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ sessionTotalSecondsTTL: 7200
+```
+
+## **sysdig.secure.rapidResponse.sessionIdleSecondsTTL**
+
+**Required**: `false`
+**Description**: Idle duration of session in seconds.
+**Options**:
+**Default**: 300
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ sessionIdleSecondsTTL: 300
+```
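+Putting the Rapid Response options together, an illustrative configuration
+that enables the feature and reuses the values from the individual examples
+and defaults above:
+
+```yaml
+sysdig:
+  secure:
+    rapidResponse:
+      enabled: true
+      validationCodeLength: 8
+      validationCodeSecondsDuration: 180
+      sessionTotalSecondsTTL: 7200
+      sessionIdleSecondsTTL: 300
+```
+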
+
+## **sysdig.secure.scanning.feedsEnabled**
+
+**Required**: `false`
+**Description**: Deploys a local Sysdig Secure feeds API and DB for air-gapped installs that cannot reach the Sysdig SaaS products.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ feedsEnabled: true
+```
+
+## **sysdig.feedsAPIVersion**
+
+**Required**: `false`
+**Description**: Sets feeds API version
+**Options**:
+**Default**: `latest`
+
+**Example**:
+
+```yaml
+sysdig:
+ feedsAPIVersion: 0.5.0
+```
+
+## **sysdig.feedsDBVersion**
+
+**Required**: `false`
+**Description**: Sets feeds database version
+**Options**:
+**Default**: `latest`
+
+**Example**:
+
+```yaml
+sysdig:
+ feedsDBVersion: 0.5.0-2020-03-11
+```
+
+## **sysdig.feedsVerifySSL**
+
+**Required**: `false`
+**Description**: Whether to validate the SSL certificate. Disabling this is especially useful when connecting via a proxy that uses a self-signed certificate.
+**Options**:
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ feedsVerifySSL: false
+```
+
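+For an air-gapped install that needs a local feeds service, the feeds options
+above are typically combined; an illustrative sketch reusing the versions from
+the individual examples:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      feedsEnabled: true
+  feedsAPIVersion: 0.5.0
+  feedsDBVersion: 0.5.0-2020-03-11
+  feedsVerifySSL: false
+```
+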
+## **networkPolicies**
+
+Please check the [dedicated page](05-networkPolicies.md)
+
+## **pvStorageSize.small.kafka**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Kafka in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 50Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ kafka: 100Gi
+```
+
+## **pvStorageSize.small.zookeeper**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to ZooKeeper in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 20Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ zookeeper: 100Gi
+```
+
+## **pvStorageSize.medium.kafka**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Kafka in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 100Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ kafka: 100Gi
+```
+
+## **pvStorageSize.medium.zookeeper**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to ZooKeeper in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 20Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ zookeeper: 100Gi
+```
+
+## **pvStorageSize.large.kafka**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Kafka in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 500Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ kafka: 100Gi
+```
+
+## **pvStorageSize.large.zookeeper**
+
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to ZooKeeper in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 20Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ zookeeper: 100Gi
+```
+
+## **sysdig.meerkat.enabled**
+
+**Required**: `false`
+**Description**: Enables Meerkat, the collection of components that make up Sysdig's newer, more computationally efficient metrics store.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ meerkat:
+ enabled: true
+```
+
+## **sysdig.meerkatVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Meerkat, relevant when `sysdig.meerkat.enabled` is `true`.
+**Options**:
+**Default**: [`sysdig.monitorVersion`](configuration_parameters.md#sysdigmonitorversion)
+**Example**:
+
+```yaml
+sysdig:
+ meerkatVersion: 2.4.1.5032
+```
+
+## **sysdig.meerkatCollectorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Meerkat collector replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
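+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  meerkatCollectorReplicaCount: 5 # illustrative override of the default count
+```
+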
+## **sysdig.meerkatAggregatorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Meerkat aggregator replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
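+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  meerkatAggregatorReplicaCount: 5 # illustrative override of the default count
+```
+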
+## **sysdig.meerkatAggregatorWorkerReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Meerkat aggregator worker replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
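+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  meerkatAggregatorWorkerReplicaCount: 5 # illustrative override of the default count
+```
+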
+## **sysdig.meerkatApiReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Meerkat api replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
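+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  meerkatApiReplicaCount: 3 # illustrative override of the default count
+```
+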
+## **sysdig.meerkatDatastreamReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Meerkat Datastream replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
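+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  meerkatDatastreamReplicaCount: 4 # illustrative override of the default count
+```
+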
+## **sysdig.resources.meerkatApi.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Meerkat Api pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatApi:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatApi.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Meerkat Api pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatApi:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.meerkatApi.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Meerkat Api pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatApi:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatApi.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Meerkat Api pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    meerkatApi:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.meerkatApi.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Meerkat API JVM.
+**Options**:
+**Default**:
+
+- `-Dlogging.level.org.springframework.transaction.interceptor=TRACE`
+- `-Dio.netty.leakDetection.level=advanced`
+- `-Dlogging.level.com.sysdig.meerkat.api.server.adapter.TimeSeriesGAdapter=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.api.server.service.realtime.RealTimeQueryServiceImpl=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.api.server.service.realtime.MeerkatClientDNSGrpcResolver=DEBUG`
+- `-Dsysdig.meerkat.cassandra.features.queryAllMetricDescriptorsEnabled=true`
+
+**Example**:
+
+```yaml
+sysdig:
+ meerkatApi:
+ jvmOptions: "-Dio.netty.leakDetection.level=advanced"
+```
+
+## **sysdig.resources.meerkatAggregator.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Meerkat Aggregator pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregator:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatAggregator.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Meerkat Aggregator pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregator:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.meerkatAggregator.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Meerkat Aggregator pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregator:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatAggregator.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Meerkat Aggregator pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    meerkatAggregator:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.meerkatAggregator.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Meerkat Aggregator JVM.
+**Options**:
+**Default**:
+
+- `-Dlogging.level.org.springframework.transaction.interceptor=TRACE`
+- `-Dio.netty.leakDetection.level=advanced`
+
+**Example**:
+
+```yaml
+sysdig:
+ meerkatAggregator:
+ jvmOptions: "-Dio.netty.leakDetection.level=advanced"
+```
+
+## **sysdig.resources.meerkatAggregatorWorker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Meerkat Aggregator Worker pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregatorWorker:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatAggregatorWorker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Meerkat Aggregator Worker pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregatorWorker:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.meerkatAggregatorWorker.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Meerkat Aggregator Worker pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatAggregatorWorker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatAggregatorWorker.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Meerkat Aggregator Worker pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    meerkatAggregatorWorker:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.meerkatAggregatorWorker.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Meerkat Aggregator Worker JVM.
+**Options**:
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+  meerkatAggregatorWorker:
+    jvmOptions: "-Xmx2G"
+```
+
+## **sysdig.resources.meerkatCollector.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Meerkat Collector pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatCollector:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatCollector.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Meerkat Collector pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 3Gi |
+| medium | 8Gi |
+| large | 12Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatCollector:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.meerkatCollector.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Meerkat Collector pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatCollector:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatCollector.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Meerkat Collector pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 16Gi |
+| large | 24Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    meerkatCollector:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.meerkatCollector.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Meerkat Collector JVM.
+**Options**:
+**Default**:
+
+- `-Dsysdig.cassandra.auto-schema=true`
+- `-Dlogging.level.org.springframework.transaction.interceptor=TRACE`
+- `-Dio.netty.leakDetection.level=advanced`
+- `-Dlogging.level.com.sysdig.meerkat.collector.kafka.epochstate.ShardEpochState=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.collector.service.GPartBuilderImpl=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.collector.service.MeerkatIndexer=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.collector.kafka.MeerkatWorker=DEBUG`
+- `-Dlogging.level.com.sysdig.meerkat.collector.grpc.GPartsQueryServiceGrpcImpl=DEBUG`
+
+**Example**:
+
+```yaml
+sysdig:
+ meerkatCollector:
+ jvmOptions: "-Dsysdig.cassandra.auto-schema=true"
+```
+
+## **sysdig.meerkat.datastreamEnabled**
+
+**Required**: `false`
+**Description**: Enables Meerkat Datastream, which enables streaming of metric data via Kafka.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ meerkat:
+ datastreamEnabled: true
+```
+
+## **sysdig.resources.meerkatDatastream.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Meerkat Datastream pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatDatastream:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatDatastream.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Meerkat Datastream pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatDatastream:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.meerkatDatastream.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Meerkat Datastream pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ meerkatDatastream:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.meerkatDatastream.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Meerkat Datastream pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    meerkatDatastream:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.meerkatDatastream.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom configuration for Meerkat Datastream JVM.
+**Options**:
+**Default**: -Xms1g -Xmx1g
+
+**Example**:
+
+```yaml
+sysdig:
+ meerkatDatastream:
+ jvmOptions: "-Xms1g -Xmx1g"
+```
+
+## **sysdig.kafka.cruiseControl.enabled**
+
+**Required**: `false`
+**Description**: Enables Kafka Cruise Control, if it is required.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ kafka:
+ cruiseControl:
+ enabled: true
+```
+
+## **sysdig.kafkaVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Kafka, relevant when `sysdig.meerkat.enabled` is `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+sysdig:
+ kafkaVersion: 1.0.0
+```
+
+## **sysdig.kafkaReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Kafka replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 3 |
+| large | 5 |
+
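+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  kafkaReplicaCount: 5 # illustrative override of the default count
+```
+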
+## **sysdig.kafka.enabled**
+
+**Required**: `false`
+**Description**: Enables Kafka, if it is required by the apps.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ kafka:
+ enabled: true
+```
+
+## **sysdig.kafka.enableMetrics**
+
+**Required**: `false`
+**Description**: Enables the JMX exporter as a sidecar container to export Prometheus metrics.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ kafka:
+ enableMetrics: true
+```
+
+## **sysdig.kafka.jvmOptions**
+
+**Required**: `false`
+**Description**: The custom configuration for Kafka JVM.
+**Options**:
+**Default**: Empty (Kafka will implicitly assume `-Xms1G -Xmx1G`)
+**Example**:
+
+```yaml
+sysdig:
+ kafka:
+ jvmOptions: -Xms4G -Xmx4G
+```
+
+## **sysdig.kafka.secure.enabled**
+
+**Required**: `false`
+**Description**: Enables TLS for the Kafka cluster. WARNING: If this is `true`, `sysdig.monitorVersion` must be `2.4.1.5032`.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ kafka:
+ secure:
+ enabled: true
+```
+
+## **sysdig.resources.kafka.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Kafka pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 200m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ kafka:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.kafka.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Kafka pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512Mi |
+| medium | 3Gi |
+| large | 6Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ kafka:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.kafka.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Kafka pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ kafka:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.kafka.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Kafka pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    kafka:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.zookeeperVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Zookeeper, relevant when `sysdig.meerkat.enabled` is `true`.
+**Options**:
+**Default**: 1.0.0
+**Example**:
+
+```yaml
+sysdig:
+ zookeeperVersion: 1.0.0
+```
+
+## **sysdig.zookeeperReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Zookeeper replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 3 |
+| large | 3 |
+
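+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  zookeeperReplicaCount: 3 # illustrative override of the default count
+```
+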
+## **sysdig.zookeeper.enableMetrics**
+
+**Required**: `false`
+**Description**: Enables the JMX exporter as a sidecar container to export Prometheus metrics.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ zookeeper:
+ enableMetrics: true
+```
+
+## **sysdig.zookeeper.nodeAffinityLabel**
+
+**Required**: `false`
+**Description**: The key and value of the node label used to select the nodes that the Zookeeper pods are expected to run on.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ zookeeper:
+ nodeAffinityLabel:
+ key: sysdig/worker-pool
+ value: zookeeper
+```
+
+## **sysdig.zookeeper.nodeAffinityMode**
+
+**Required**: `false`
+**Description**: Make nodeAffinity "required" or "preferred" for Zookeeper
+**Options**: `required|preferred`
+**Default**: `preferred`
+**Example**:
+
+```yaml
+sysdig:
+ zookeeper:
+ nodeAffinityMode: preferred
+```
+
+## **sysdig.resources.zookeeper.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Zookeeper pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 200m |
+| large | 400m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ zookeeper:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.zookeeper.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Zookeeper pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 128Mi |
+| medium | 256Mi |
+| large | 512Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ zookeeper:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.zookeeper.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Zookeeper pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ zookeeper:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.zookeeper.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Zookeeper pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    zookeeper:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.beacon.enabled** (**Deprecated**)
+
+**Required**: `false`
+**Description**: Enables the IBM Platform Metrics version of Beacon, the components that allow Sysdig to natively ingest Prometheus metrics via remote write.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ beacon:
+ enabled: true
+```
+
+## **sysdig.beacon.platformMetricsEnabled**
+
+**Required**: `false`
+**Description**: Enables IBM Platform Metrics version of beacon, the components that allow Sysdig to natively ingest Prometheus metrics via remote write.
+**Options**: `true|false`
+**Default**: Defaults to the value of the deprecated `beacon.enabled` parameter (the former name of this parameter), which itself defaults to `false`.
+**Example**:
+
+```yaml
+sysdig:
+ beacon:
+ platformMetricsEnabled: true
+```
+
+**WARNING**
+**`HostAlreadyClaimed` Error in OpenShift**
+To use this feature on OpenShift, an overlay is required to avoid an error in Routes that would prevent the `Collector`
+Route from becoming active and receiving data from the agents.
+This is what the error looks like:
+
+```
+oc get route
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+[omitted lines]
+sysdigcloud-collector HostAlreadyClaimed
+[omitted lines]
+```
+
+Use this overlay to avoid the error:
+
+```yaml
+apiVersion: route.openshift.io/v1
+kind: Route
+metadata:
+ name: sysdigcloud-beacon-prom-remote-write
+ namespace: sysdigcloud
+spec:
+ host: domain_name
+```
+
+The `domain_name` must be different from the name used for the Collector endpoint, and it is the domain to use for Prometheus metrics ingestion.
+
+## **sysdig.beacon.promEnabled**
+
+**Required**: `false`
+**Description**: Enables Generalized Beacon for Prometheus, the components that allow Sysdig to natively ingest Prometheus metrics via remote write.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ beacon:
+ promEnabled: true
+```
+
+## **sysdig.beacon.token**
+
+**Required**: `false`
+**Description**: Set the Beacon access token, used by the Beacon components to authenticate against the API server.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ beacon:
+ token: change_me
+```
+
+## **sysdig.promRemoteWriteVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of prom-remote-write, relevant when `sysdig.beacon.promEnabled` or `sysdig.beacon.platformMetricsEnabled` is `true`.
+**Options**:
+**Default**: [`sysdig.monitorVersion`](configuration_parameters.md#sysdigmonitorversion)
+**Example**:
+
+```yaml
+sysdig:
+ promRemoteWriteVersion: 2.4.1.5032
+```
+
+## **sysdig.promRemoteWriteBeaconReplicaCount**
+
+**Required**: `false`
+**Description**: Number of beacon-prom-remote-write replicas for Generalized Beacon.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ promRemoteWriteBeaconReplicaCount: 5
+```
+
+## **sysdig.promRemoteWritePlatformMetricsReplicaCount**
+
+**Required**: `false`
+**Description**: Number of prom-remote-write replicas for IBM Platform Metrics.
+**Options**:
+**Default**: Defaults to the value of the deprecated `promRemoteWriteReplicaCount` parameter (the former name of this parameter), which has these defaults:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ promRemoteWritePlatformMetricsReplicaCount: 5
+```
+
+## **sysdig.promRemoteWriteBeacon.jvmOptions**
+
+**Required**: `false`
+**Description**: The custom configuration for the Generalized Beacon beacon-prom-remote-write JVM.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ promRemoteWriteBeacon:
+ jvmOptions: -Xms4G -Xmx4G
+```
+
+## **sysdig.promRemoteWritePlatformMetrics.jvmOptions**
+
+**Required**: `false`
+**Description**: The custom configuration for the IBM Platform Metrics prom-remote-write JVM. Note that the profile is actually implicit.
+**Options**:
+**Default**: Defaults to the value of the deprecated `promRemoteWrite.jvmOptions` parameter (the former name of this parameter).
+**Example**:
+
+```yaml
+sysdig:
+ promRemoteWritePlatformMetrics:
+ jvmOptions: -Xms4G -Xmx4G -Dspring.profiles.active=beacon-ibm
+```
+
+## **sysdig.serviceOwnerManagement.enabled**
+
+**Required**: `false`
+**Description**: Enables ServiceOwnerManagement, the microservice that IBM Service Owners will use to manage their assets.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagement:
+ enabled: true
+```
+
+## **sysdig.serviceOwnerManagement.legacyToken**
+
+**Required**: `false`
+**Description**: Set the ServiceOwnerManagement-to-Legacy access token, used by this service to authenticate against the API server.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagement:
+ legacyToken: change_me
+```
+
+## **sysdig.serviceOwnerManagement.beaconToken**
+
+**Required**: `false`
+**Description**: Set the ServiceOwnerManagement-to-Beacon access token, used by this service to authenticate against the Beacon server.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagement:
+ beaconToken: change_me
+```
+
+## **sysdig.serviceOwnerManagementVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of ServiceOwnerManagement, relevant when `sysdig.serviceOwnerManagement.enabled` is `true`.
+**Options**:
+**Default**: [`sysdig.monitorVersion`](configuration_parameters.md#sysdigmonitorversion)
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagementVersion: 2.4.1.5032
+```
+
+## **sysdig.serviceOwnerManagementReplicaCount**
+
+**Required**: `false`
+**Description**: Number of ServiceOwnerManagement replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagementReplicaCount: 2
+```
+
+## **sysdig.serviceOwnerManagement.jvmOptions**
+
+**Required**: `false`
+**Description**: The custom configuration for the ServiceOwnerManagement JVM.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ serviceOwnerManagement:
+ jvmOptions: -Xms4G -Xmx4G
+```
+
+## **sysdig.resources.promRemoteWriteBeacon.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each Generalized Beacon beacon-prom-remote-write pod.
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWriteBeacon:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.promRemoteWriteBeacon.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each Generalized Beacon beacon-prom-remote-write pod.
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 3Gi |
+| medium | 8Gi |
+| large | 12Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWriteBeacon:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.promRemoteWriteBeacon.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each Generalized Beacon beacon-prom-remote-write pod.
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWriteBeacon:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.promRemoteWriteBeacon.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each Generalized Beacon beacon-prom-remote-write pod.
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 16Gi |
+| large | 24Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    promRemoteWriteBeacon:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.resources.promRemoteWritePlatformMetrics.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule each IBM Platform Metrics prom-remote-write pod.
+**Options**:
+**Default**:
+
+Defaults to the value of the deprecated `promRemoteWrite.requests.cpu` parameter (the former name of this parameter), which has these defaults:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWritePlatformMetrics:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.promRemoteWritePlatformMetrics.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule each IBM Platform Metrics prom-remote-write pod.
+**Options**:
+**Default**:
+
+Defaults to the value of the deprecated `promRemoteWrite.requests.memory` parameter (the former name of this parameter), which has these defaults:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 3Gi |
+| medium | 8Gi |
+| large | 12Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWritePlatformMetrics:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.promRemoteWritePlatformMetrics.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to each IBM Platform Metrics prom-remote-write pod.
+**Options**:
+**Default**:
+
+Defaults to the value of the deprecated `promRemoteWrite.limits.cpu` parameter (the former name of this parameter), which has these defaults:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promRemoteWritePlatformMetrics:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.promRemoteWritePlatformMetrics.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to each IBM Platform Metrics prom-remote-write pod.
+**Options**:
+**Default**:
+
+Defaults to the value of the deprecated `promRemoteWrite.limits.memory` parameter (the former name of this parameter), which has these defaults:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 16Gi |
+| large | 24Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    promRemoteWritePlatformMetrics:
+      limits:
+        memory: 2Gi
+```
+
+## **sysdig.prometheus.enabled**
+
+**Required**: `false`
+**Description**: Enables Prometheus services.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ prometheus:
+ enabled: true
+```
+
+## **sysdig.promchapVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Sysdig Prometheus Chaperone service, relevant when `sysdig.prometheus.enabled` is `true`.
+**Options**:
+**Default**: 0.99.0-2022-07-04T12-52-09Z.d68003f677
+**Example**:
+
+```yaml
+sysdig:
+ promchapVersion: 0.99.0-2022-07-04T12-52-09Z.d68003f677
+```
+
+## **sysdig.promqlatorVersion**
+
+**Required**: `false`
+**Description**: Docker image tag of Sysdig Promqlator service, relevant when `sysdig.prometheus.enabled` is `true`.
+**Options**:
+**Default**: 0.99.0-2022-07-12T09-19-16Z.93c0642b55
+**Example**:
+
+```yaml
+sysdig:
+ promqlatorVersion: 0.99.0-2022-07-12T09-19-16Z.93c0642b55
+```
+
+## **sysdig.promqlatorReplicaCount**
+
+**Required**: `false`
+**Description**: Number of Promqlator services replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
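+**Example** (a minimal sketch; the replica count shown is illustrative):
+
+```yaml
+sysdig:
+  promqlatorReplicaCount: 5 # illustrative override of the default count
+```
+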
+## **sysdig.resources.prometheus.redis.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule Prometheus Redis pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 3 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ prometheus:
+ redis:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.prometheus.redis.limits.cpu**
+
+**Required**: `false`
+**Description**: The max amount of cpu assigned to Prometheus Redis pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 3 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ prometheus:
+ redis:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.prometheus.redis.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule Prometheus Redis pod
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 600Mi |
+| medium | 1.2Gi |
+| large | 2.2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ prometheus:
+ redis:
+ requests:
+ memory: 1.2Gi
+```
+
+## **sysdig.resources.prometheus.redis.limits.memory**
+
+**Required**: `false`
+**Description**: The max amount of memory assigned to Prometheus Redis pod
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 800Mi |
+| medium | 1.5Gi |
+| large | 2.5Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+  resources:
+    prometheus:
+      redis:
+        limits:
+          memory: 1.5Gi
+```
+
+## **sysdig.prometheus.redis.maxmemory**
+
+**Required**: `false`
+**Description**: The max amount of memory used by Redis cache
+**Default**:
+
+| cluster-size | size |
+| ------------ | ----- |
+| small | 500Mb |
+| medium | 1Gb |
+| large | 2Gb |
+
+**Example**:
+
+```yaml
+sysdig:
+ prometheus:
+ redis:
+ maxmemory: 1Gb
+```
+
+## **sysdig.resources.promchap.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to Promchap containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 3 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promchap:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.promchap.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to Promchap containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promchap:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.promchap.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule Promchap containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promchap:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.promchap.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule Promchap containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 300Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ promchap:
+ requests:
+ memory: 300Mi
+```
+
+## **sysdig.resources.scanningv2-agents-conf.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-agents-conf pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-agents-conf:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-agents-conf.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-agents-conf pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-agents-conf:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.scanningv2-agents-conf.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-agents-conf pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-agents-conf:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-agents-conf.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-agents-conf pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 250Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-agents-conf:
+ requests:
+ memory: 100Mi
+```
+
+## **sysdig.resources.scanningv2-collector.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-collector:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningv2-collector.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-collector:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-collector.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-collector:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-collector.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-collector:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-pkgmeta-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-pkgmeta-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-pkgmeta-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-pkgmeta-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-pkgmeta-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-pkgmeta-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-pkgmeta-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-pkgmeta-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-pkgmeta-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-pkgmeta-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-pkgmeta-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-pkgmeta-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-policies-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-policies-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-policies-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-policies-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-policies-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-policies-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-policies-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-policies-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-policies-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-policies-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-policies-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-policies-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-reporting-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-reporting-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-reporting-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-generator.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-reporting-generator pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-generator:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-reporting-generator.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-reporting-generator pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-generator:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-reporting-generator.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-reporting-generator pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-generator:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-reporting-generator.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-reporting-generator pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-generator:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-janitor.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-reporting-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-janitor:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningv2-reporting-janitor.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-reporting-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-janitor:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-reporting-janitor.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-reporting-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-janitor:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-reporting-janitor.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-reporting-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-janitor:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-scheduler.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-reporting-scheduler pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-scheduler:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-reporting-scheduler.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-reporting-scheduler pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-scheduler:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-scheduler.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-reporting-scheduler pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-scheduler:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-reporting-scheduler.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-reporting-scheduler pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 250Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-scheduler:
+ requests:
+ memory: 100Mi
+```
+
+## **sysdig.resources.scanningv2-reporting-worker.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-worker:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningv2-reporting-worker.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-worker:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-reporting-worker.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-worker:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-reporting-worker.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-reporting-worker:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-riskmanager-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-riskmanager-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-riskmanager-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-riskmanager-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-riskmanager-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-riskmanager-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-riskmanager-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-riskmanager-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-riskmanager-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-riskmanager-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-riskmanager-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-riskmanager-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-scanresults-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-scanresults-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-scanresults-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-scanresults-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-scanresults-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-scanresults-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.scanningv2-scanresults-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-scanresults-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-scanresults-api:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-scanresults-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-scanresults-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-scanresults-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.scanningv2-vulns-api.limits.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningv2-vulns-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-vulns-api:
+ limits:
+ cpu: 500m
+```
+
+## **sysdig.resources.scanningv2-vulns-api.limits.memory**
+
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningv2-vulns-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-vulns-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.scanningv2-vulns-api.requests.cpu**
+
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningv2-vulns-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-vulns-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningv2-vulns-api.requests.memory**
+
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningv2-vulns-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningv2-vulns-api:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.secureOnly**
+
+**Required**: `false`
+**Description**: Enables product optimizations that are specific to Sysdig Secure and that break Sysdig Monitor functionality.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ secureOnly: true
+```
+
+## **sysdig.secure.eventsForwarder.proxy.enable**
+
+**Required**: `false`
+**Description**: Sets proxy settings for Secure events forwarding (overrides the global proxy settings).
+**Options**: `true|false`
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: false
+```
+
+## **sysdig.secure.eventsForwarder.proxy.host**
+
+**Required**: `false`
+**Description**: The address of the web proxy; this can be a domain name or
+an IP address. This is required if [`sysdig.secure.eventsForwarder.proxy.enable`](#sysdigsecureeventsforwarderproxyenable)
+is configured.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ host: my-awesome-proxy.my-awesome-domain.com
+```
+
+## **sysdig.secure.eventsForwarder.proxy.noProxy**
+
+**Required**: `false`
+**Description**: Comma-separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.secure.eventsForwarder.proxy.enable`](#sysdigsecureeventsforwarderproxyenable) is configured; the list is
+appended to the list in
+[`sysdig.proxy.defaultNoProxy`](#sysdigproxydefaultnoproxy).
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ noProxy: my-awesome.domain.com, 192.168.0.0/16
+```
+
+## **sysdig.secure.eventsForwarder.proxy.password**
+
+**Required**: `false`
+**Description**: The password used to access the configured
+[`sysdig.secure.eventsForwarder.proxy.host`](#sysdigsecureeventsforwarderproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ password: F00B@r!
+```
+
+## **sysdig.secure.eventsForwarder.proxy.port**
+
+**Required**: `false`
+**Description**: The port the configured
+[`sysdig.secure.eventsForwarder.proxy.host`](#sysdigsecureeventsforwarderproxyhost) is listening on. If this is not
+configured it defaults to 80.
+**Options**:
+**Default**: `80`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ port: 3128
+```
+
+## **sysdig.secure.eventsForwarder.proxy.protocol**
+
+**Required**: `false`
+**Description**: The protocol to use to communicate with the configured
+[`sysdig.secure.eventsForwarder.proxy.host`](#sysdigsecureeventsforwarderproxyhost).
+**Options**: `http|https`
+**Default**: `http`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ protocol: https
+```
+
+## **sysdig.secure.eventsForwarder.proxy.user**
+
+**Required**: `false`
+**Description**: The user used to access the configured
+[`sysdig.secure.eventsForwarder.proxy.host`](#sysdigsecureeventsforwarderproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ proxy:
+ enable: true
+ user: alice
+```
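+
+Putting the individual settings together, a complete proxy override for the events forwarder might look like the sketch below (the host, credentials, and port values are illustrative only):
+
+```yaml
+sysdig:
+  secure:
+    eventsForwarder:
+      proxy:
+        enable: true
+        host: my-awesome-proxy.my-awesome-domain.com
+        port: 3128
+        protocol: https
+        user: alice
+        password: F00B@r!
+        noProxy: my-awesome.domain.com, 192.168.0.0/16
+```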
+
+## **sysdig.secure.certman.proxy.enable**
+
+**Required**: `false`
+**Description**: Set proxy settings for secure certman (overrides global settings)
+**Options**: `true|false`
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: false
+```
+
+## **sysdig.secure.certman.proxy.host**
+
+**Required**: `false`
+**Description**: The address of the web proxy; this can be a domain name or
+an IP address. This is required if [`sysdig.secure.certman.proxy.enable`](#sysdigsecurecertmanproxyenable)
+is configured.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ host: my-awesome-proxy.my-awesome-domain.com
+```
+
+## **sysdig.secure.certman.proxy.noProxy**
+
+**Required**: `false`
+**Description**: Comma-separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.secure.certman.proxy.enable`](#sysdigsecurecertmanproxyenable) is configured; the
+list is appended to
+[`sysdig.proxy.defaultNoProxy`](#sysdigproxydefaultnoproxy).
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ noProxy: my-awesome.domain.com, 192.168.0.0/16
+```
+
+## **sysdig.secure.certman.proxy.password**
+
+**Required**: `false`
+**Description**: The password used to access the configured
+[`sysdig.secure.certman.proxy.host`](#sysdigsecurecertmanproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ password: F00B@r!
+```
+
+## **sysdig.secure.certman.proxy.port**
+
+**Required**: `false`
+**Description**: The port the configured
+[`sysdig.secure.certman.proxy.host`](#sysdigsecurecertmanproxyhost) is listening on. If this is not
+configured, it defaults to 80.
+**Options**:
+**Default**: `80`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ port: 3128
+```
+
+## **sysdig.secure.certman.proxy.protocol**
+
+**Required**: `false`
+**Description**: The protocol to use to communicate with the configured
+[`sysdig.secure.certman.proxy.host`](#sysdigsecurecertmanproxyhost).
+**Options**: `http|https`
+**Default**: `http`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ protocol: https
+```
+
+## **sysdig.secure.certman.proxy.user**
+
+**Required**: `false`
+**Description**: The user used to access the configured
+[`sysdig.secure.certman.proxy.host`](#sysdigsecurecertmanproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ certman:
+ proxy:
+ enable: true
+ user: alice
+```
+
+## **sysdig.postgresDatabases.PRWSInternalIngestion**
+
+**Required**: `false`
+**Description**: A map containing database connection details for an external PostgreSQL instance used as the `prwsInternalIngestion` database. Use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+  postgresql:
+    external: true
+  postgresDatabases:
+    PRWSInternalIngestion:
+      host: my-prw-internal-ingestion-db-external.com
+      port: 5432
+      db: prws_internal_ingestion
+      username: prws_internal_ingestion_user
+      password: my_prws_internal_ingestion_password
+      sslmode: disable
+      admindb: root_db
+      adminusername: root_user
+      adminpassword: my_root_user_password
+```
+
+## **sysdig.beacon.prwsInternalIngestionEnabled**
+
+**Required**: `false`
+**Description**: Enable Prometheus Remote Write (PRWS) internal ingestion
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ beacon:
+ prwsInternalIngestionEnabled: true
+```
+
+## **sysdig.prwsInternalIngestionReplicaCount**
+
+**Required**: `false`
+**Description**: Number of PRWS Internal Ingestion replicas
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ prwsInternalIngestionReplicaCount: 5
+```
+
+## **sysdig.prwsInternalIngestion.jvmOptions**
+
+**Required**: `false`
+**Description**: Custom JVM configuration for PRWS Internal Ingestion
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ prwsInternalIngestion:
+ jvmOptions: |-
+ -Xms12g -Xmx12g
+```
+
+## **sysdig.prwsInternalIngestion.ingress**
+
+**Required**: `false`
+**Description**: Add a custom Ingress for PRWS Internal Ingestion
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ prwsInternalIngestion:
+ ingress:
+ - name: my-prws-internal-ingestion
+ omitBaseAnnotations: true
+ annotations:
+ haproxy-ingress.github.io/timeout-server: 20s
+ haproxy-ingress.github.io/config-backend: |
+ retries 2
+ labels:
+ app.kubernetes.io/managed-by: ingress-config
+ app.kubernetes.io/name: ingress-config
+ app.kubernetes.io/part-of: sysdigcloud
+ role: ingress-config
+ tier: infra
+ hosts:
+ - host: my-app.my-domain.com
+ sslSecretName: ssl-secret
+ paths:
+ - path: /api
+ serviceName: my-service-name
+ servicePort: 9510
+```
+
+## **sysdig.prwsInternalIngestion.privateEndpointCommunicationEnforcement**
+
+**Required**: `false`
+**Description**: Enable private endpoint communication for PRWS Internal Ingestion
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ prwsInternalIngestion:
+ privateEndpointCommunicationEnforcement: false
+```
+
+## **sysdig.prwsInternalIngestion.privateEndpointCommunicationEnforcementExclusions**
+
+**Required**: `false`
+**Description**: Comma separated list of addresses or domain names that can
+override the `privateEndpointCommunicationEnforcement`.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ prwsInternalIngestion:
+ privateEndpointCommunicationEnforcement: false
+ privateEndpointCommunicationEnforcementExclusions: my-awesome.domain.com, 192.168.0.0/16
+```
+
+## **sysdig.secure.netsec.rateLimit**
+
+**Required**: `false`
+**Description**: Netsec API rate limit.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 200 |
+| medium | 200 |
+| large | 200 |
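+
+This parameter follows the same override pattern as the other settings; a minimal sketch using the default count might look like:
+
+```yaml
+sysdig:
+  secure:
+    netsec:
+      rateLimit: 200
+```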
+
+## **sysdig.secure.scanningv2.enabled**
+
+**Required**: `false`
+**Description**: Enable Vulnerability Engine V2 for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ enabled: true
+```
+
+## **sysdig.secure.scanningv2.proxy**
+
+**Required**: `false`
+**Description**: Enables use of a proxy for two ScanningV2 services: PkgMeta and VulnAPI.
+**Options**:
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+  secure:
+    scanningv2:
+      proxy:
+        defaultNoProxy: "https://foo.bar"
+        user: "user01"
+        password: "password"
+        noProxy: "localhost"
+        enable: true
+        host: "myproxy.example.com"
+        port: 3128
+        protocol: "http"
+```
+
+**Related parameters**:
+
+- `sysdig.secure.scanningv2.proxy.enable`
+- `sysdig.secure.scanningv2.proxy.defaultNoProxy`
+- `sysdig.secure.scanningv2.proxy.user`
+- `sysdig.secure.scanningv2.proxy.noProxy`
+- `sysdig.secure.scanningv2.proxy.host`
+- `sysdig.secure.scanningv2.proxy.port`
+- `sysdig.secure.scanningv2.proxy.protocol`
+
+
+## **sysdig.secure.scanningv2.vulnsApi.remoteSaaSEndpoint**
+
+**Required**: `true`
+**Description**: Remote endpoint used to retrieve vulnerability feed metadata. Select the Sysdig Secure SaaS endpoint appropriate for your region.
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ vulnsApi:
+ remoteSaaSEndpoint: "https://eu1.app.sysdig.com"
+```
+
+## **sysdig.secure.scanningv2.vulnsApi.remoteSaaSTlsSkip**
+
+**Required**: `false`
+**Description**: Whether to skip TLS certificate validation for the remote vulnerability feed download. This is especially useful when connecting via a proxy that uses a self-signed certificate.
+**Options**:
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ vulnsApi:
+ remoteSaaSTlsSkip: true
+```
+
+## **sysdig.secure.scanningv2.pkgMetaApi.remoteSaaSEndpoint**
+
+**Required**: `true`
+**Description**: Remote endpoint used to retrieve vulnerability feed metadata. Select the Sysdig Secure SaaS endpoint appropriate for your region.
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ pkgMetaApi:
+ remoteSaaSEndpoint: "https://eu1.app.sysdig.com"
+```
+
+## **sysdig.secure.scanningv2.pkgMetaApi.remoteSaaSTlsSkip**
+
+**Required**: `false`
+**Description**: Whether to skip TLS certificate validation for the remote vulnerability feed download. This is especially useful when connecting via a proxy that uses a self-signed certificate.
+**Options**:
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ pkgMetaApi:
+ remoteSaaSTlsSkip: true
+```
+
+## **sysdig.secure.scanningv2.reporting.enabled**
+
+**Required**: `false`
+**Description**: Enable reporting for the Vulnerability Engine V2 of Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ enabled: true
+```
+
+## **sysdig.secure.scanningv2.reporting.reportingJanitor.schedule**
+
+**Required**: `false`
+**Description**: K8s Cronjob schedule string for Vulnerability Engine V2 reporting cleanup process
+**Options**:
+**Default**: "0 3 \* \* \*"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ reportingJanitor:
+ schedule: "0 3 * * *"
+```
+
+## **sysdig.secure.scanningv2.reporting.storageDriver**
+
+**Required**: `false`
+**Description**: Storage kind for the generated reports
+**Options**: `postgres|s3`
+**Default**: `postgres`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ storageDriver: postgres
+```
+
+## **sysdig.secure.scanningv2.reporting.aws.bucket**
+
+**Required**: `false`
+**Description**: The AWS S3-compatible storage bucket name where reports will be saved (required when using `s3` driver)
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ aws:
+ bucket: secure-scanningv2-reporting
+```
+
+## **sysdig.secure.scanningv2.reporting.aws.endpoint**
+
+**Required**: `false`
+**Description**: The service endpoint of an AWS S3-compatible storage service (required when using the `s3` driver in a non-AWS deployment)
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ aws:
+ endpoint: s3.example.com
+```
+
+## **sysdig.secure.scanningv2.reporting.aws.region**
+
+**Required**: `false`
+**Description**: The AWS region where the S3 bucket is created (required when using the `s3` driver in an AWS deployment)
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ aws:
+ region: us-east-1
+```
+
+## **sysdig.secure.scanningv2.reporting.aws.accessKeyId**
+
+**Required**: `false`
+**Description**: The Access Key ID used to authenticate with an S3-compatible storage (required when using the `s3` driver in a non-AWS deployment)
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ aws:
+ accessKeyId: AKIAIOSFODNN7EXAMPLE
+```
+
+## **sysdig.secure.scanningv2.reporting.aws.secretAccessKey**
+
+**Required**: `false`
+**Description**: The Secret Access Key used to authenticate with an S3-compatible storage (required when using the `s3` driver in a non-AWS deployment)
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ reporting:
+ aws:
+ secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```
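+
+A combined sketch of the `s3` reporting driver, reusing the bucket, region, and credential parameters documented above (all values are illustrative placeholders):
+
+```yaml
+sysdig:
+  secure:
+    scanningv2:
+      reporting:
+        storageDriver: s3
+        aws:
+          bucket: secure-scanningv2-reporting
+          region: us-east-1
+          accessKeyId: AKIAIOSFODNN7EXAMPLE
+          secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```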
+
+
+## **sysdig.secure.scanningv2.customCerts**
+
+**Required**: `false`
+**Description**:
+Use this configuration to upload one or more PEM-format CA certificates that the scanningv2 subsystem should trust. Make sure you upload every certificate in the CA approval chain, up to the root CA.
+
+When set to `true`, this configuration expects certificates with a `.pem` extension under `certs/scanningv2-custom-certs/`, located at the same level as `values.yaml`.
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```bash
+# In the example directory structure below, certificate1.pem and certificate2.pem will be added to the trusted list.
+bash-5.0$ find certs values.yaml
+certs
+certs/scanningv2-custom-certs
+certs/scanningv2-custom-certs/certificate1.pem
+certs/scanningv2-custom-certs/certificate2.pem
+values.yaml
+```
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ customCerts: true
+```
+
+## **sysdig.secure.scanningv2.airgappedFeeds**
+
+**Required**: `false`
+**Description**: Deploys local object storage for scanningv2 vulnerability feed artifacts in air-gapped installs, so the deployment does not reach out to Sysdig SaaS endpoints.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ airgappedFeeds: true
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.enabled**
+**Required**: `false`
+**Description**: Enables the Scan Requestor backend component. It defaults to `true`; setting it to `false` disables the Scan Requestor. If this flag is set to `false`, **sysdig.secure.scanningv2.agentsConf.isBackendScanningEnabled** must also be set to `false`.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ enabled: true
+```
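+
+For instance, to disable the Scan Requestor entirely, a sketch setting both flags together (assuming the `agentsConf.isBackendScanningEnabled` path referenced above) might look like:
+
+```yaml
+sysdig:
+  secure:
+    scanningv2:
+      scanRequestor:
+        enabled: false
+      agentsConf:
+        isBackendScanningEnabled: false
+```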
+
+## **sysdig.secure.scanningV2.scanRequestor.deploymentType**
+**Required**: `false`
+**Description**: If set to `saas`, the Scan Requestor uses S3 as its storage type; if unset or empty, it uses Cassandra.
+**Options**: `saas|empty`
+**Default**: `empty`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ deploymentType: saas
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.loggingLevel**
+**Required**: `false`
+**Description**: Sets the log level for the scan requestor component
+**Options**: `TRACE|DEBUG|INFO|WARN|ERROR`
+**Default**: `INFO`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ loggingLevel: INFO
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.serviceAccount**
+**Required**: `false`
+**Description**: Sets the name of the service account used to access the S3 storage when the selected storage type is S3.
+**Default**: `sysdig`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ serviceAccount: sysdig
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.type**
+**Required**: `false`
+**Description**: Sets the type of storage used by the Scan Requestor to persist its state.
+**Options**: `S3|cassandra`
+**Default**: `cassandra`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ type: cassandra
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.bucketName**
+**Required**: `false`
+**Description**: Sets the name of the bucket in which the Scan Requestor stores state and staging information, if the selected storage type is `S3`.
+**Default**: `scan-requestor`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ bucketName: "scan-requestor"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.endpoint**
+**Required**: `false`
+**Description**: Sets the URL of the S3 service to use as storage, if the selected storage type is S3.
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ endpoint: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.region**
+**Required**: `false`
+**Description**: Sets the region of the S3 service to use as storage, if the selected storage type is S3.
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ region: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.caCrt**
+**Required**: `false`
+**Description**: Sets the CA certificate of the S3 service to use as storage, if the selected storage type is S3.
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ caCrt: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.enabled**
+**Required**: `false`
+**Description**: Enables the (PostgreSQL) request store used by the Scan Requestor to hold the ScanNow and ACValidation request queues.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ requestStore:
+ enabled: true
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.requestMaxAge**
+**Required**: `false`
+**Description**: The maximum age for requests to be considered still valid/pending
+**Default**: `1h`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ requestStore:
+ requestMaxAge: "1h"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.requestReplyTimeout**
+**Required**: `false`
+**Description**: The period of time after which a scan request (in the ScanNow flow) is considered failed if no response is received.
+**Default**: `30s`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ requestStore:
+ requestReplyTimeout: 30s
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.hosts**
+**Required**: `false`
+**Description**: The URL of the Cassandra server(s).
+**Default**: `sysdigcloud-cassandra:9042`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ hosts: "sysdigcloud-cassandra:9042"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.keyspace**
+**Required**: `false`
+**Description**: The Cassandra keyspace to use for storing Scan Requestor tables.
+**Default**: `sysdig_scanning`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ keyspace: "sysdig_scanning"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.protocolVersion**
+**Required**: `false`
+**Description**: The protocol version used to communicate with Cassandra
+**Default**: `3`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ protocolVersion: "3"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.replicationFactor**
+**Required**: `false`
+**Description**: The replication factor to use for ScanRequestor tables.
+**Default**: `3`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ replicationFactor: "3"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.datacenter**
+**Required**: `false`
+**Description**: The datacenter identifier to be used for Cassandra communication.
+**Default**: `datacenter1`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ datacenter: "datacenter1"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.requestTimeout**
+**Required**: `false`
+**Description**: The timeout for Cassandra requests.
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ requestTimeout: "3s"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.maxReadRequests**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ maxReadRequests: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.maxWriteRequests**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ maxWriteRequests: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.compressionEnabled**
+**Required**: `false`
+**Description**: - to be filled -
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ compressionEnabled: true
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.compressionThreshold**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ compressionThreshold: ""
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.ttlSec.metadata**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ ttlSec:
+ metadata: "86400"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.ttlSec.state**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ ttlSec:
+ state: "86400"
+```
+
+## **sysdig.secure.scanningV2.scanRequestor.storage.requestStore.cassandra.ttlSec.events**
+**Required**: `false`
+**Description**: - to be filled -
+**Default**: ` `
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ storage:
+ cassandra:
+ ttlSec:
+ events: "86400"
+```
+
+## **sysdig.s3.scanRequestor.accessKeyId**
+**Required**: `false`
+**Description**: The S3 access key ID to use when the storage type is set to S3.
+**Default**: `scanningv2_scanrequestor`
+
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ scanRequestor:
+ accessKeyId: "a-key"
+```
+## **sysdig.s3.scanRequestor.secretAccessKey**
+**Required**: `false`
+**Description**: The S3 secret access key to use when the storage type is set to S3.
+**Default**: `random`
+
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ scanRequestor:
+ secretAccessKey: "DLGJdgoiefebefhbhdfuhvbEAFBVAUGWUEghdwbYUWREG"
+```
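+
+Bringing the S3-related settings together, a sketch of a Scan Requestor configured for S3 storage might look like this (the endpoint and credential values are illustrative placeholders):
+
+```yaml
+sysdig:
+  secure:
+    scanningv2:
+      scanRequestor:
+        storage:
+          type: S3
+          bucketName: "scan-requestor"
+          endpoint: "s3.example.com"
+          region: "us-east-1"
+  s3:
+    scanRequestor:
+      accessKeyId: "a-key"
+      secretAccessKey: "DLGJdgoiefebefhbhdfuhvbEAFBVAUGWUEghdwbYUWREG"
+```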
+
+## **sysdig.secure.scanningv2.scanRequestor.requestPartitionProcessingScheduler.interval**
+**Required**: `false`
+**Description**: The interval between two subsequent processing runs of the messages in the Scan Requestor staging area. It should be no lower than `5m`.
+**Default**: `5m`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ requestPartitionProcessingScheduler:
+ interval: "5m"
+```
+
+## **sysdig.secure.scanningv2.scanRequestor.requestPartitionProcessingScheduler.startDelay**
+**Required**: `false`
+**Description**: The initial delay in staging area scheduled processing.
+**Default**: `10s`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ requestPartitionProcessingScheduler:
+ startDelay: "10s"
+```
+
+## **sysdig.secure.scanningv2.scanRequestor.requestPartitionProcessingScheduler.timeout**
+**Required**: `false`
+**Description**: The timeout for getting partition processing requests from NATS.
+**Default**: `30s`
+
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanningv2:
+ scanRequestor:
+ requestPartitionProcessingScheduler:
+ timeout: "30s"
+```
+
+## **sysdig.platformService.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable the platform-service deployment
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ enabled: false
+```
+
+## **sysdig.platformService.audit.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable sending of audit data for platform-service
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ audit:
+ enabled: false
+```
+
+## **sysdig.platformService.ingestion.endpoint**
+
+**Required**: `false`
+**Description**: Endpoint where platform-service will send data for Sysdig Platform Audit
+**Default**: `sysdigcloud-events-ingestion:3000`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ ingestion:
+ endpoint: sysdigcloud-events-ingestion:3000
+```
+
+## **sysdig.platformService.server.port.metric**
+
+**Required**: `false`
+**Description**: Server port that will be used to serve metrics data
+**Default**: `25000`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ server:
+ port:
+ metric: 25000
+```
+
+## **sysdig.platformService.server.port.health**
+
+**Required**: `false`
+**Description**: Server port that will be used to serve health checker endpoint
+**Default**: `8083`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ server:
+ port:
+ health: 8083
+```
+
+## **sysdig.platformService.alerts.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable Platform Alerts service
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ enabled: false
+```
+
+## **sysdig.platformService.alerts.serviceToken**
+
+**Required**: `false`
+**Description**: Service token used to identify the platform service when it makes calls to other services
+**Default**: `change_me`
+**Example**:
+
+```yaml
+sysdig:
+  platformService:
+    alerts:
+      serviceToken: change_me
+```
+
+## **sysdig.platformService.alerts.server.port.grpc**
+
+**Required**: `false`
+**Description**: Platform Alerts service server port that will serve GRPC requests
+**Default**: `5052`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ server:
+ port:
+ grpc: 5052
+```
+
+## **sysdig.platformService.alerts.server.port.rest**
+
+**Required**: `false`
+**Description**: Platform Alerts service server port that will serve HTTP requests
+**Default**: `7004`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ server:
+ port:
+ rest: 7004
+```
+
+## **sysdig.platformService.alerts.server.enableEventsEndpoints**
+
+**Required**: `false`
+**Description**: Enable or disable test endpoints that will send fake events
+**Options**:`true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ server:
+ enableEventsEndpoints: false
+```
+
+## **sysdig.platformService.alerts.ticketing.url**
+
+**Required**: `false`
+**Description**: URL of the ticketing service which platform alerts will call to create Jira tickets
+**Default**: `http://sysdigcloud-ticketing-api:7001`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ ticketing:
+ url: http://sysdigcloud-ticketing-api:7001
+```
+
+## **sysdig.platformService.alerts.monitor.url**
+
+**Required**: `false`
+**Description**: Base URL for monitor API calls
+**Default**: `http://sysdigcloud-api:8080`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ monitor:
+ url: http://sysdigcloud-api:8080
+```
+
+## **sysdig.platformService.alerts.monitor.cache.expiration**
+
+**Required**: `false`
+**Description**: Expiration time of the cache for monitor API calls
+**Default**: `5m`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ monitor:
+ cache:
+ expiration: 5m
+```
+
+## **sysdig.platformService.alerts.monitor.cache.cleanup**
+
+**Required**: `false`
+**Description**: Time after which the cache for monitor API calls will be cleaned up
+**Default**: `10m`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ monitor:
+ cache:
+ cleanup: 10m
+```
+
+## **sysdig.platformService.alerts.nats.js.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS for platform alerts service
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.url**
+
+**Required**: `false`
+**Description**: URL of the NATS server that the platform alerts service will connect to
+**Default**: `nats`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ url: nats
+```
+
+## **sysdig.platformService.alerts.nats.js.clientName**
+
+**Required**: `false`
+**Description**: Client name for platform alerts service
+**Default**: `sysdigcloud-platform-alerts-api`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ clientName: sysdigcloud-platform-alerts-api
+```
+
+## **sysdig.platformService.alerts.nats.js.tls.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable TLS connection for NATS
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ tls:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.tls.cert**
+
+**Required**: `false`
+**Description**: TLS certificate for NATS connection
+**Default**: `/opt/certs/nats-js-tls-certs/ca.crt`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ tls:
+ cert: /opt/certs/nats-js-tls-certs/ca.crt
+```
+
+## **sysdig.platformService.alerts.nats.js.migrationFile**
+
+**Required**: `false`
+**Description**: Location of the json migration file
+**Default**: `/nats/migrations/streams.json`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ migrationFile: /nats/migrations/streams.json
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.consumer.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS consumer for Risk integration
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ consumer:
+ enabled: false
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.consumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for Risk integration
+**Default**: `risk-consumer`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ consumer:
+ name: risk-consumer
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.consumer.stream**
+
+**Required**: `false`
+**Description**: NATS stream name of consumer for Risk integration
+**Default**: `risk-alerts`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ consumer:
+ stream: risk-alerts
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.consumer.subjects**
+
+**Required**: `false`
+**Description**: NATS subjects name of consumer for Risk integration
+**Default**: `risk.>`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ consumer:
+ subjects: risks-alerts.*
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.consumer.timeoutRetryMaxWait**
+
+**Required**: `false`
+**Description**: Max retry wait time for consumer for Risk integration
+**Default**: `10s`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ consumer:
+ timeoutRetryMaxWait: 10s
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.notifier.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS notifier publishing for Risk integration
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ notifier:
+ enabled: false
+```
+
+## **sysdig.platformService.alerts.nats.js.risk.notifier.stream**
+
+**Required**: `false`
+**Description**: Name of a NATS stream for publishing events to notifier for Risk integration
+**Default**: `notifier-notifications-1`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ notifier:
+ stream: notifier-notifications-1
+```
+
+
+## **sysdig.platformService.alerts.nats.js.risk.notifier.subject**
+
+**Required**: `false`
+**Description**: NATS subject for publishing events to notifier for Risk integration
+**Default**: `notifier.notifications.1.risk`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ risk:
+ notifier:
+ subject: notifier.notifications.1.risk
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.consumer.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS consumer for VM imageHasVulns integration
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ consumer:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.consumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for VM imageHasVulns integration
+**Default**: `platform-alerts-consumer`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ consumer:
+ name: platform-alerts-consumer
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.consumer.stream**
+
+**Required**: `false`
+**Description**: NATS stream name of consumer for VM imageHasVulns integration
+**Default**: `secure-vm-notifier-integrations`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ consumer:
+ stream: secure-vm-notifier-integrations
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.consumer.subjects**
+
+**Required**: `false`
+**Description**: NATS subjects name of consumer for VM imageHasVulns integration
+**Default**: `secure.vm.notifier.integrations.jira`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ consumer:
+ subjects: secure.vm.notifier.integrations.jira
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.consumer.timeoutRetryMaxWait**
+
+**Required**: `false`
+**Description**: Max retry wait time for consumer for VM imageHasVulns integration
+**Default**: `10s`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ consumer:
+ timeoutRetryMaxWait: 10s
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.notifier.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS notifier publishing for VM imageHasVulns integration
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ notifier:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.notifier.stream**
+
+**Required**: `false`
+**Description**: Name of a NATS stream for publishing events to notifier for VM imageHasVulns integration
+**Default**: `notifier-notifications-1`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ notifier:
+ stream: notifier-notifications-1
+```
+
+
+## **sysdig.platformService.alerts.nats.js.vm.imageHasVulns.notifier.subject**
+
+**Required**: `false`
+**Description**: NATS subject for publishing events to notifier for VM imageHasVulns integration
+**Default**: `notifier.notifications.1.vm`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ imageHasVulns:
+ notifier:
+ subject: notifier.notifications.1.vm
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.consumer.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS consumer for VM newFindings integration
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ consumer:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.consumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for VM newFindings integration
+**Default**: `platform-alerts-consumer`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ consumer:
+ name: platform-alerts-consumer
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.consumer.stream**
+
+**Required**: `false`
+**Description**: NATS stream name of consumer for VM newFindings integration
+**Default**: `secure-vm-notifier-integrations`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ consumer:
+ stream: secure-vm-notifier-integrations
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.consumer.subjects**
+
+**Required**: `false`
+**Description**: NATS subjects name of consumer for VM newFindings integration
+**Default**: `secure.vm.newfindings`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ consumer:
+ subjects: secure.vm.newfindings
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.consumer.timeoutRetryMaxWait**
+
+**Required**: `false`
+**Description**: Max retry wait time for consumer for VM newFindings integration
+**Default**: `10s`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ consumer:
+ timeoutRetryMaxWait: 10s
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.notifier.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS notifier publishing for VM newFindings integration
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ notifier:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.notifier.stream**
+
+**Required**: `false`
+**Description**: Name of a NATS stream for publishing events to notifier for VM newFindings integration
+**Default**: `notifier-notifications-1`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ notifier:
+ stream: notifier-notifications-1
+```
+
+## **sysdig.platformService.alerts.nats.js.vm.newFindings.notifier.subject**
+
+**Required**: `false`
+**Description**: NATS subject for publishing events to notifier for VM newFindings integration
+**Default**: `notifier.notifications.1.vm`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ vm:
+ newFindings:
+ notifier:
+ subject: notifier.notifications.1.vm
+```
+
+
+## **sysdig.platformService.alerts.nats.js.responseActions.consumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for responseActions integration
+**Default**: `platform-alerts-consumer`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ responseActions:
+ consumer:
+ name: platform-alerts-consumer
+```
+
+## **sysdig.platformService.alerts.nats.js.responseActions.consumer.stream**
+
+**Required**: `false`
+**Description**: NATS stream name of consumer for responseActions integration
+**Default**: `response-actions-executions-1`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ responseActions:
+ consumer:
+ stream: response-actions-executions-1
+```
+
+## **sysdig.platformService.alerts.nats.js.responseActions.consumer.subjects**
+
+**Required**: `false`
+**Description**: NATS subjects name of consumer for responseActions integration
+**Default**: `response-actions.execution.action.v1.>`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ responseActions:
+ consumer:
+ subjects: response-actions.execution.action.v1.>
+```
+
+## **sysdig.platformService.alerts.nats.js.responseActions.consumer.timeoutRetryMaxWait**
+
+**Required**: `false`
+**Description**: Max retry wait time for consumer for responseActions integration
+**Default**: `10s`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ responseActions:
+ consumer:
+ timeoutRetryMaxWait: 10s
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.consumer.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS consumer for runtime integration
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ consumer:
+ enabled: false
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.consumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for runtime integration
+**Default**: `platform-alerts-consumer`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ consumer:
+ name: platform-alerts-consumer
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.consumer.stream**
+
+**Required**: `false`
+**Description**: NATS stream name of consumer for runtime integration
+**Default**: `events`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ consumer:
+ stream: events
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.consumer.subjects**
+
+**Required**: `false`
+**Description**: NATS subjects name of consumer for runtime integration
+**Default**: `events.source.events.policy.policies`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ consumer:
+ subjects: events.source.events.policy.policies
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.consumer.timeoutRetryMaxWait**
+
+**Required**: `false`
+**Description**: Max retry wait time for consumer for runtime integration
+**Default**: `10s`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ consumer:
+ timeoutRetryMaxWait: 10s
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.notifier.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS notifier publishing for runtime integration
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ notifier:
+ enabled: false
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.notifier.stream**
+
+**Required**: `false`
+**Description**: Name of a NATS stream for publishing events to notifier for runtime integration
+**Default**: `notifier-notifications-1`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ notifier:
+ stream: notifier-notifications-1
+```
+
+## **sysdig.platformService.alerts.nats.js.runtime.notifier.subject**
+
+**Required**: `false`
+**Description**: NATS subject for publishing events to notifier for runtime integration
+**Default**: `notifier.notifications.1.runtime`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ nats:
+ js:
+ runtime:
+ notifier:
+ subject: notifier.notifications.1.runtime
+```
+## **sysdig.platformService.alerts.workers.notification.enabled**
+
+**Required**: `false`
+**Description**: Enables or disables workers for sending notifications in batches to alerts-notifier
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ workers:
+ notification:
+ enabled: true
+```
+
+## **sysdig.platformService.alerts.workers.notification.pollInterval**
+
+**Required**: `false`
+**Description**: Polling time interval for reading unsent notifications
+**Default**: `500ms`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ workers:
+ notification:
+ pollInterval: 500ms
+```
+
+## **sysdig.platformService.alerts.workers.notification.batchSize**
+
+**Required**: `false`
+**Description**: Number of events that will be sent from platform alerts to alert-notifier
+**Default**: `50`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ alerts:
+ workers:
+ notification:
+ batchSize: 50
+```
+
+## **sysdig.platformService.zones.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable Platform Zones service
+**Options**: `true|false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ enabled: false
+```
+
+
+## **sysdig.platformService.zones.readOnly**
+
+**Required**: `false`
+**Description**: Puts the Platform Zones service in read-only mode
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ readOnly: false
+```
+
+
+## **sysdig.platformService.zones.devmode**
+
+**Required**: `false`
+**Description**: Puts the Platform Zones service in devmode with enhanced logs and debug capabilities
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ devmode: false
+```
+
+## **sysdig.platformService.zones.nats.js.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable NATS for Platform Zones service
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ enabled: false
+```
+
+## **sysdig.platformService.zones.nats.js.url**
+
+**Required**: `false`
+**Description**: URL of the NATS server that the Platform Zones service will connect to
+**Default**: `nats`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ url: nats
+```
+
+## **sysdig.platformService.zones.nats.js.clientName**
+
+**Required**: `false`
+**Description**: Client name for Platform Zones service
+**Default**: `sysdigcloud-platform-zones-service`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ clientName: sysdigcloud-platform-zones-service
+```
+
+## **sysdig.platformService.zones.nats.js.tls.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable TLS connection for NATS
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ tls:
+ enabled: true
+```
+
+## **sysdig.platformService.zones.nats.js.tls.cert**
+
+**Required**: `false`
+**Description**: TLS certificate for NATS connection
+**Default**: `/opt/certs/nats-js-tls-certs/ca.crt`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ tls:
+ cert: /opt/certs/nats-js-tls-certs/ca.crt
+```
+
+## **sysdig.platformService.zones.nats.js.migrationFile**
+
+**Required**: `false`
+**Description**: Location of the json migration file
+**Default**: `/platform-service/zones/nats/migrations/streams.json`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ nats:
+ js:
+ migrationFile: /nats/migrations/streams.json
+```
+
+## **sysdig.platformService.zones.monitor.url**
+
+**Required**: `false`
+**Description**: Base URL for monitor API calls
+**Default**: `http://sysdigcloud-api:8080`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ monitor:
+ url: http://sysdigcloud-api:8080
+```
+
+## **sysdig.platformService.zones.monitor.authCache.expiration**
+
+**Required**: `false`
+**Description**: Expiration time of the authentication cache for monitor API calls
+**Default**: `5m`
+**Example**:
+
+```yaml
+sysdig:
+ platformService:
+ zones:
+ monitor:
+ authCache:
+ expiration: 5m
+```
+
+## **sysdig.platformService.zones.server.port.rest**
+
+**Required**: `false`
+**Description**: Platform Zones service server port that will serve HTTP requests
+**Default**: `8090`
+**Example**:
+
+```yaml
+sysdig:
+  platformService:
+    zones:
+      server:
+        port:
+          rest: 8090
+```
+
+
+## **sysdig.platformService.zones.server.port.grpc**
+
+**Required**: `false`
+**Description**: Platform Zones service server port that will serve GRPC requests
+**Default**: `8091`
+**Example**:
+
+```yaml
+sysdig:
+  platformService:
+    zones:
+      server:
+        port:
+          grpc: 8091
+```
+
+
+## **sysdig.secure.ticketing.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable the ticketing service deployment
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ enabled: false
+```
+
+## **sysdig.secure.ticketing.audit.enabled**
+
+**Required**: `false`
+**Description**: Enable or disable sending of audit data for ticketing service
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ audit:
+ enabled: false
+```
+
+## **sysdig.secure.ticketing.jiraClientMaxRetries**
+
+**Required**: `false`
+**Description**: Number of max retries for Jira client
+**Default**: `5`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraClientMaxRetries: 5
+```
+
+## **sysdig.secure.ticketing.jiraClientBaseWait**
+
+**Required**: `false`
+**Description**: Jira client base wait time
+**Default**: `1s`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraClientBaseWait: 1s
+```
+
+## **sysdig.secure.ticketing.jiraClientMaxWait**
+
+**Required**: `false`
+**Description**: Max wait time for Jira client
+**Default**: `30s`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraClientMaxWait: 30s
+```
+
+## **sysdig.secure.ticketing.jiraCacheDefaultExpiration**
+
+**Required**: `false`
+**Description**: Jira cache will expire after this period
+**Default**: `15m`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCacheDefaultExpiration: 15m
+```
+
+## **sysdig.secure.ticketing.jiraCacheCleanupInterval**
+
+**Required**: `false`
+**Description**: Time interval for Jira cache cleanup
+**Default**: `1m`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCacheCleanupInterval: 1m
+```
+
+## **sysdig.secure.ticketing.jiraSyncIssuesCronExpr**
+
+**Required**: `false`
+**Description**: Expression for cron job for Jira sync issues job
+**Default**: `0 0 * * * *`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraSyncIssuesCronExpr: "0 0 * * * *"
+```
+
+## **sysdig.secure.ticketing.jiraCreateIssuesCronExpr**
+
+**Required**: `false`
+**Description**: Expression for cron job for Jira create issues job
+**Default**: `0 0 * * * *`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCreateIssuesCronExpr: "0 0 * * * *"
+```
+
+## **sysdig.secure.ticketing.jiraCreateIssuesOrchestratorInterval**
+
+**Required**: `false`
+**Description**: Time interval for creating issues orchestrator
+**Default**: `5m`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCreateIssuesOrchestratorInterval: 5m
+```
+
+## **sysdig.secure.ticketing.jiraCreateIssuesWorkersMinWait**
+
+**Required**: `false`
+**Description**: Minimum wait time for the create-issues workers to complete
+**Default**: `1s`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCreateIssuesWorkersMinWait: 1s
+```
+
+## **sysdig.secure.ticketing.jiraCreateIssuesWorkersMaxWait**
+
+**Required**: `false`
+**Description**: Maximum wait time for the create-issues workers to complete
+**Default**: `5s`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraCreateIssuesWorkersMaxWait: 5s
+```
+
+## **sysdig.secure.ticketing.jiraMaxAttachmentSize**
+
+**Required**: `false`
+**Description**: Sets the maximum size for Jira attachments, in bytes
+**Default**: `1048576`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ jiraMaxAttachmentSize: 1048576
+```
+
+## **sysdig.secure.ticketing.hardDeleteIntegrationAPIEnabled**
+
+**Required**: `false`
+**Description**: Enables or disables hard delete of integrations in ticketing service
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ hardDeleteIntegrationAPIEnabled: false
+```
+
+## **sysdig.secure.ticketing.natsJS.migrationFile**
+
+**Required**: `false`
+**Description**: Location of the json migration file
+**Default**: `/nats/migrations/streams.json`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ migrationFile: /nats/migrations/streams.json
+```
+
+## **sysdig.secure.ticketing.natsJS.url**
+
+**Required**: `false`
+**Description**: URL of the NATS server that the ticketing service will connect to
+**Default**: `nats`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ url: nats
+```
+
+## **sysdig.secure.ticketing.natsJS.secure.enabled**
+
+**Required**: `false`
+**Description**: Enables or disables NATS in ticketing service
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ secure:
+ enabled: true
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.deliverPolicyAll**
+
+**Required**: `false`
+**Description**: Enables or disables deliverPolicyAll for NATS attachments consumer in ticketing service
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ deliverPolicyAll: true
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.durable**
+
+**Required**: `false`
+**Description**: Name of NATS durable consumer for consuming attachments events for ticketing service
+**Default**: `add_attachment_to_issue_consumer`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ durable: add_attachment_to_issue_consumer
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.name**
+
+**Required**: `false`
+**Description**: Name of NATS consumer for consuming attachments events for ticketing service
+**Default**: `add_attachment_to_issue_consumer`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ name: add_attachment_to_issue_consumer
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.pull**
+
+**Required**: `false`
+**Description**: Enable or disable pulling events for attachments consumer for ticketing service
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ pull: true
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.streamName**
+
+**Required**: `false`
+**Description**: Name of a NATS stream for consuming attachment events for ticketing service
+**Default**: `jira_attachments`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ streamName: jira_attachments
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.subject**
+
+**Required**: `false`
+**Description**: NATS subject for consuming attachments events for ticketing service
+**Default**: `jira_attachments.add_to_issue`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ subject: jira_attachments.add_to_issue
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.maxDeliver**
+
+**Required**: `false`
+**Description**: Number of max retries for delivering attachment
+**Default**: `3`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ maxDeliver: 3
+```
+
+## **sysdig.secure.ticketing.natsJS.addAttachmentConsumer.ackWait**
+
+**Required**: `false`
+**Description**: Time to wait for receiving ACK signal for attachments
+**Default**: `5m`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ ticketing:
+ natsJS:
+ addAttachmentConsumer:
+ ackWait: 5m
+```
diff --git a/installer/docs/03-upgrade.md b/installer/docs/03-upgrade.md
new file mode 100644
index 00000000..7323256f
--- /dev/null
+++ b/installer/docs/03-upgrade.md
@@ -0,0 +1,125 @@
+
+
+
+
+
+
+# Upgrade
+
+
+
+
+
+
+
+## Overview
+
+You can use the Installer to upgrade a Sysdig implementation. As with an install, you must:
+- Meet the prerequisites.
+- Download the values.yaml.
+- Edit the values as indicated.
+- Run the Installer.
+
+The main difference is that you run it twice: once to discover the differences between the old and new versions, and the second time to deploy the new version.
+
+As with installs, it can be used in airgapped or non-airgapped environments.
+
+For more context, review the [Prerequisites](../README.md#prerequisites) and [Installation Options](../README.md#quickstart-install).
+
+## Upgrade Steps
+
+
+
+### Step 1 - Download the latest `values.yaml` template
+
+Copy the current version `sysdig-chart/values.yaml` to your working directory.
+
+```bash
+wget https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/installer/installer/values.yaml
+```
+
+
+
+### Step 2 - Configure `values.yaml` according to your environment
+
+Edit the following values:
+
+- [`scripts`](docs/configuration_parameters.md#scripts): Set this to
+ `generate diff`. This setting will generate the differences between the
+ installed environment and the upgrade version. The changes will be displayed
+ in your terminal.
+- [`size`](docs/configuration_parameters.md#size): Specifies the size of the
+ cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are:
+ small, medium and large.
+- [`quaypullsecret`](docs/configuration_parameters.md#quaypullsecret):
+ quay.io credentials provided with your Sysdig purchase confirmation mail.
+- [`storageClassProvisioner`](docs/configuration_parameters.md#storageClassProvisioner):
+  The name of the storage class provisioner to use when creating the
+  configured storageClassName parameter. Valid options: aws, gke, hostPath.
+  If you do not use one of the dynamic storage provisioners (aws or gke),
+  enter hostPath and refer to the Advanced examples for how to configure
+  static storage provisioning with this option.
+- [`sysdig.license`](docs/configuration_parameters.md#sysdiglicense): Sysdig license key
+ provided with your Sysdig purchase confirmation mail
+- [`sysdig.dnsName`](docs/configuration_parameters.md#sysdigdnsName): The domain name
+ the Sysdig APIs will be served on.
+- [`sysdig.collector.dnsName`](docs/configuration_parameters.md#sysdigcollectordnsName):
+ (OpenShift installs only) Domain name the Sysdig collector will be served on.
+ When not configured it defaults to whatever is configured for sysdig.dnsName.
+- [`sysdig.ingressNetworking`](docs/configuration_parameters.md#sysdigingressnetworking):
+ The networking construct used to expose the Sysdig API and collector. Options
+ are:
+
+ - hostnetwork: sets the hostnetworking in the ingress daemonset and opens
+ host ports for api and collector. This does not create a Kubernetes service.
+ - loadbalancer: creates a service of type loadbalancer and expects that
+ your Kubernetes cluster can provision a load balancer with your cloud provider.
+ - nodeport: creates a service of type nodeport. The node ports can be
+ customized with:
+
+ - sysdig.ingressNetworkingInsecureApiNodePort
+ - sysdig.ingressNetworkingApiNodePort
+ - sysdig.ingressNetworkingCollectorNodePort
+
+**NOTE**: If doing an airgapped install (see airgapped Installation Options), you
+would also edit the following values:
+
+- [`airgapped_registry_name`](docs/configuration_parameters.md#airgapped_registry_name):
+ The URL of the airgapped (internal) docker registry. This URL is used for
+ installations where the Kubernetes cluster can not pull images directly from
+ Quay.
+- [`airgapped_registry_password`](docs/configuration_parameters.md#airgapped_registry_password):
+ The password for the configured airgapped_registry_username. Ignore this
+ parameter if the registry does not require authentication.
+- [`airgapped_registry_username`](docs/configuration_parameters.md#airgapped_registry_username):
+ The username for the configured airgapped_registry_name. Ignore this
+ parameter if the registry does not require authentication.
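+
+Taken together, a minimal sketch of the relevant `values.yaml` entries for the
+diff run might look like the following (the size, secret, license, and domain
+values below are placeholders to be replaced with your own):
+
+```yaml
+scripts: generate diff
+size: medium
+quaypullsecret: <your-quay-pull-secret>
+storageClassProvisioner: hostPath
+sysdig:
+  license: <your-sysdig-license-key>
+  dnsName: sysdig.my-awesome-domain-name.com
+  ingressNetworking: hostnetwork
+```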
+
+
+
+### Step 3 - Check differences with the old Sysdig environment
+
+Run the Installer. If you are in an airgapped environment, make sure you first follow the installation instructions for getting the images into your airgapped registry.
+
+```bash
+./installer diff
+```
+
+
+
+### Step 4 - Deploy Sysdig version
+
+If you are fine with the differences displayed, then run:
+
+```bash
+./installer deploy
+```
+
+If you find differences that you want to preserve, look in the
+[Configuration Parameters](docs/configuration_parameters.md) documentation
+for the configuration parameter that matches the difference you intend to
+preserve, update your values.yaml accordingly, and repeat step 3 until you
+are satisfied with the remaining differences. Then set `scripts` to `deploy`,
+as shown below, and run the Installer one final time.
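+
+For example, the corresponding line in `values.yaml` would then read:
+
+```yaml
+scripts: deploy
+```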
+
+
diff --git a/installer/docs/04-advanced_configuration.md b/installer/docs/04-advanced_configuration.md
new file mode 100644
index 00000000..3b754ace
--- /dev/null
+++ b/installer/docs/04-advanced_configuration.md
@@ -0,0 +1,229 @@
+
+
+
+
+
+
+# Advanced Configuration
+
+
+
+
+
+
+
+## Use hostPath for Static Storage of Sysdig Components
+
+As described in the Installation Storage Requirements, the Installer assumes usage of a dynamic storage provider (AWS or GKE). If these are not used in your environment, add the entries below to the values.yaml to configure static storage.
+
+Based on the `size` found in the `values.yaml` file (small/medium/large), the Installer assumes a minimum number of replicas and nodes to be provided. You will enter the names of the nodes on which you will run the Cassandra, ElasticSearch and Postgres components of Sysdig in the values.yaml, as in the parameters and example below.
+
+### Parameters
+
+- `storageClassProvisioner`: hostPath.
+- `sysdig.cassandra.hostPathNodes`: The number of nodes configured here needs to be at minimum 1 when configured `size` is `small`, 3 when configured `size` is `medium` and 6 when configured `size` is `large`.
+- `elasticsearch.hostPathNodes`: The number of nodes configured here needs to be at minimum 1 when configured `size` is `small`, 3 when configured `size` is `medium` and 6 when configured `size` is `large`.
+- `sysdig.mysql.hostPathNodes`: When `sysdig.mysqlHa` is configured to `true` this has to be at least 3 nodes; when `sysdig.mysqlHa` is not configured it should be at least one node.
+- `sysdig.postgresql.hostPathNodes`: This can be ignored if Sysdig Secure is not licensed or used on this environment. If Secure is used, then the parameter should be set to 1, regardless of the environment size setting.
+- `.hostPathCustomPaths`: customize the location of the directory structure on the Kubernetes node
+- `.pvStorageSize.<size>.<component>`: customize the size of volumes (see the [configuration parameters list](/docs/02-configuration_parameters.md))
+
+### Example
+
+```yaml
+storageClassProvisioner: hostPath
+elasticsearch:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+sysdig:
+ cassandra:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+ postgresql:
+ hostPathNodes:
+ - my-cool-host1.com
+ kafka:
+ hostPathNodes:
+ - i-0082bddac2e013639
+ - i-05eb2d9719cc2dafa
+ - i-082b0341a1bb2f2be
+ zookeeper:
+ hostPathNodes:
+ - i-0082bddac2e013639
+ - i-05eb2d9719cc2dafa
+ - i-082b0341a1bb2f2be
+pvStorageSize:
+ medium:
+ cassandra: 600Gi
+ elasticsearch: 275Gi
+ postgresql: 120Gi
+hostPathCustomPaths:
+ cassandra: /sysdig/cassandra
+ elasticsearch: /sysdig/elasticsearch
+ mysql: /sysdig/mysql
+ postgresql: /sysdig/postgresql
+```
+
+## Installer on EKS
+
+### Creating a cluster
+
+Please do not use eksctl 0.10.0 or 0.10.1; these versions are known to be buggy (see kubernetes/kubernetes#73906 (comment)).
+
+```bash
+eksctl create cluster \
+ --name=eks-installer1 \
+ --node-type=m5.4xlarge \
+ --nodes=3 \
+ --version 1.14 \
+ --region=us-east-1 \
+ --vpc-public-subnets=
+```
+
+### Additional installer configurations
+
+EKS uses aws-iam-authenticator to authorize kubectl commands.
+aws-iam-authenticator needs AWS credentials mounted from **~/.aws** into the installer container.
+
+```bash
+docker run \
+ -v ~/.aws:/.aws \
+ -e HOST_USER=$(id -u) \
+ -e KUBECONFIG=/.kube/config \
+ -v ~/.kube:/.kube:Z \
+ -v $(pwd):/manifests:Z \
+ quay.io/sysdig/installer:
+```
+
+### Running airgapped EKS
+
+```bash
+EKS=true bash sysdig_installer.tar.gz
+```
+
+The above ensures the `~/.aws` directory is correctly mounted for the airgap installer container.
+
+### Exposing the Sysdig endpoint
+
+Get the external ip/endpoint for the ingress service.
+
+```bash
+kubectl -n get service haproxy-ingress-service
+```
+
+In Route 53, create an A record with the DNS name pointing to the external IP/endpoint.
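+
+As a sketch, assuming a hosted zone ID of `Z0EXAMPLE` and that the ingress
+service exposes a plain IP address (for an ELB hostname you would create a
+CNAME or alias record instead), the record could be created with the AWS CLI:
+
+```bash
+# Hypothetical values: replace the hosted zone ID, record name, and IP
+# with your own hosted zone and the output of the kubectl command above.
+aws route53 change-resource-record-sets \
+  --hosted-zone-id Z0EXAMPLE \
+  --change-batch '{
+    "Changes": [{
+      "Action": "UPSERT",
+      "ResourceRecordSet": {
+        "Name": "sysdig.my-awesome-domain-name.com",
+        "Type": "A",
+        "TTL": 300,
+        "ResourceRecords": [{"Value": "203.0.113.10"}]
+      }
+    }]
+  }'
+```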
+
+### Gotchas
+
+Make sure that the subnets have an internet gateway configured and have enough IPs.
+
+## airgapped Installations
+
+### Updating the Feeds Database in airgapped environments [ScanningV2]
+
+In non-airgapped on-prem environments, the vulnerability feeds are automatically retrieved by the Sysdig stack from a Sysdig SaaS endpoint.
+In an airgapped on-prem environment, the customer must retrieve the feeds as a Docker image from a workstation with Internet access and then load the image onto their own private registry.
+
+The following is an example of a Bash script that could be used to update the vulnerability feeds used by the ScanningV2 engine.
+The tag used is `latest`, and Sysdig builds and pushes this tag multiple times each day.
+The details of the image can be found using the `docker inspect` command, even when the tag is `latest`.
+The script is provided only as an example or template to be filled in and customized.
+
+```bash
+#!/bin/bash
+QUAY_USERNAME=""
+QUAY_PASSWORD=""
+IMAGE_TAG="latest"
+
+# Download image
+docker login quay.io/sysdig -u ${QUAY_USERNAME} -p ${QUAY_PASSWORD}
+docker image pull quay.io/sysdig/airgap-vuln-feeds:${IMAGE_TAG}
+# Save image
+docker image save quay.io/sysdig/airgap-vuln-feeds:${IMAGE_TAG} -o airgap-vuln-feeds-latest.tar
+# Optionally move image
+mv airgap-vuln-feeds-latest.tar /var/shared-folder
+# Load image remotely
+ssh -t user@airgapped-host "docker image load -i /var/shared-folder/airgap-vuln-feeds-latest.tar"
+# Push image remotely
+ssh -t user@airgapped-host "docker tag airgap-vuln-feeds:${IMAGE_TAG} airgapped-registry/airgap-vuln-feeds:${IMAGE_TAG}"
+ssh -t user@airgapped-host "docker image push airgapped-registry/airgap-vuln-feeds:${IMAGE_TAG}"
+# verify the image timestamp - this command should return the timestamp in epoch format
+epoch_timestamp=$(ssh -q -t user@airgapped-host "docker inspect --format '{{ index .Config.Labels \"sysdig.origin-docker-image-tag\" }}' airgapped-registry/airgap-vuln-feeds:${IMAGE_TAG}")
+human_readable_timestamp=$(date -d@"$epoch_timestamp")
+echo "Actual timestamp of the image based on the label sysdig.origin-docker-image-tag: epoch: ${epoch_timestamp} human readable: ${human_readable_timestamp}"
+
+
+# Update the image: we need to restart the Deployment so that the image will be reloaded
+ssh -t user@airgapped-host "kubectl -n rollout restart deploy/sysdigcloud-scanningv2-airgap-vuln-feeds"
+
+# Follow and check the restart
+ssh -t user@airgapped-host "kubectl -n rollout status deploy/sysdigcloud-scanningv2-airgap-vuln-feeds"
+```
+
+> Note: The `IMAGE_TAG` above could also be set to a timestamp, as was done in previous releases. Here is an example of how to rewrite the `IMAGE_TAG` line to use a timestamp:
+> ```
+> # Calculate the tag of the last version.
+> epoch=`date +%s`
+> IMAGE_TAG=$(( $epoch - 86400 - $epoch % 86400))
+> ```
+
+The above script could be scheduled using a Linux cronjob that runs every day. E.g.:
+
+```bash
+0 8 * * * airgap-vuln-feeds-image-update.sh > /somedir/sysdig-airgapvulnfeed.log 2>&1
+```
+
+### Updating the Feeds Database in airgapped Environments [Legacy Scanning]
+
+This is a procedure that can be used to automatically update the feeds database:
+
+1. Download the image file quay.io/sysdig/vuln-feed-database-12:latest from the Sysdig registry to the jumpbox server and save it locally.
+2. (Optional) Move the file from the jumpbox server to your airgapped environment.
+3. Load the image file and push it to your airgapped image registry.
+4. Restart the pod sysdigcloud-feeds-db.
+5. Restart the pod feeds-api.
+
+Steps 1 to 5 should then be performed periodically, once a day.
+
+This is an example script that contains all the steps:
+
+```bash
+#!/bin/bash
+QUAY_USERNAME=""
+QUAY_PASSWORD=""
+
+# Download image
+docker login quay.io/sysdig -u ${QUAY_USERNAME} -p ${QUAY_PASSWORD}
+docker image pull quay.io/sysdig/vuln-feed-database-12:latest
+# Save image
+docker image save quay.io/sysdig/vuln-feed-database-12:latest -o vuln-feed-database-12.tar
+# Optionally move image
+mv vuln-feed-database-12.tar /var/shared-folder
+# Load image remotely
+ssh -t user@airgapped-host "docker image load -i /var/shared-folder/vuln-feed-database-12.tar"
+# Push image remotely
+ssh -t user@airgapped-host "docker tag vuln-feed-database-12:latest airgapped-registry/vuln-feed-database-12:latest"
+ssh -t user@airgapped-host "docker image push airgapped-registry/vuln-feed-database-12:latest"
+# Restart database pod
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=0"
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=1"
+# Restart feeds-api pod
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=0"
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=1"
+```
+
+The script can be scheduled using a cron job that runs every day:
+
+```bash
+0 8 * * * feeds-database-update.sh >/dev/null 2>&1
+```
diff --git a/installer/docs/05-networkPolicies.md b/installer/docs/05-networkPolicies.md
new file mode 100644
index 00000000..11d63313
--- /dev/null
+++ b/installer/docs/05-networkPolicies.md
@@ -0,0 +1,108 @@
+
+
+
+
+
+
+# Network Policies
+
+
+
+
+
+
+
+## Overview
+
+The current version of Sysdig Network policies v2 supports Sysdig HAProxy Ingress and IBM Cloud IKS ALBs.
+
+The NetworkPolicies (NP) are controlled via two flags:
+
+- (`.networkPolicies.ingress.default`) controls whether the manifests are generated at all. Manifests are generated only if this flag is set to `deny`.
+
+- (`.networkPolicies.enabled`) controls whether the NPs are active. This flag determines whether the entries required under `.spec` to enable the NPs are rendered.
+
+In order to generate the manifests and enable the NPs, `networkPolicies.enabled` must be set to `true` and `networkPolicies.ingress.default` must be set to `deny`.
+
+A validation checks that the minimal requirements for each type of environment (via the `.deployment` parameter) are met:
+
+- if `.deployment=kubernetes`, then the `.networkPolicies.ingress.haproxy.allowedNetworks` is required
+
+- if `.deployment=iks`, then the `.networkPolicies.ingress.alb.selector` is required
+
+## Parameters
+
+### **networkPolicies.enabled**
+
+**Required**: `false`
+**Description**: Activates or deactivates NetworkPolicies. This flag works together with the flag `networkPolicies.ingress.default`, and controls whether the actual `.spec` section of the NP is enabled.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+networkPolicies:
+ enabled: true
+```
+
+### **networkPolicies.ingress.default**
+
+**Required**: `false`
+**Description**: To render the NetworkPolicies, this flag must be set to `deny`. It works together with the flag `networkPolicies.enabled`.
+**Options**: `deny|allow`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+networkPolicies:
+ enabled: "true"
+ ingress:
+ default: "deny"
+```
+
+### **networkPolicies.ingress.haproxy.allowedNetworks**
+
+**Required**: `true` (if NPs are enabled and active and `.deployment=kubernetes`)
+**Description**: If NPs are enabled (`.networkPolicies.enabled` set to `"true"` and `.networkPolicies.ingress.default` set to `"deny"`), then this value is required. It is the CIDR (or CIDRs) used by the HAProxy Ingress controller.
+**Options**: a list of valid IP Network address/Netmask entries
+**Default**: None
+
+**Example**:
+
+```yaml
+deployment: kubernetes
+networkPolicies:
+ enabled: "true"
+ ingress:
+ default: "deny"
+ haproxy:
+ allowedNetworks:
+ - 100.96.0.0/11
+```
+
+### **networkPolicies.ingress.alb.selector**
+
+**Required**: `true` (if `.deployment=iks`)
+**Description**: In IKS, the list of ALBs must be specified via the `app` label.
+**Options**: A list of "app" label values to match ALB deployments to permit traffic from; make it `null` to exclude ALBs from generated rules
+**Default**: `None`
+
+**Example**:
+
+```yaml
+deployment: iks
+networkPolicies:
+ enabled: "true"
+ ingress:
+ default: "deny"
+ alb:
+ # -- (map) A list of "app" label values to match ALB deployments to permit traffic from; make it `null` to exclude ALBs from generated rules
+ selector: {}
+ # selector:
+ # matchExpressions:
+ # - key: app
+ # operator: In
+ # values: ["public-cr-alb1", "public-cr-alb2"]
+```
diff --git a/installer/docs/advanced.md b/installer/docs/advanced.md
new file mode 100644
index 00000000..6617e8ba
--- /dev/null
+++ b/installer/docs/advanced.md
@@ -0,0 +1,150 @@
+# Advanced configuration
+
+## Use hostPath for Static Storage of Sysdig Components
+
+As described in the Installation Storage Requirements, the Installer
+assumes usage of a dynamic storage provider (AWS or GKE). In case these are
+not used in your environment, add the entries below to the values.yaml to
+configure static storage.
+
+Based on the `size` entered in the values.yaml file (small/medium/large), the
+Installer assumes a minimum number of replicas and nodes to be provided.
+You will enter the names of the nodes on which you will run the Cassandra,
+ElasticSearch, mySQL and Postgres components of Sysdig in the values.yaml, as
+in the parameters and example below.
+
+### Parameters
+
+`storageClassProvisioner`: hostPath.
+`sysdig.cassandra.hostPathNodes`: The number of nodes configured here needs to
+be at minimum 1 when configured `size` is `small`, 3 when configured `size` is
+`medium` and 6 when configured `size` is `large`.
+`elasticsearch.hostPathNodes`: The number of nodes configured here needs to be
+at minimum 1 when configured `size` is `small`, 3 when configured `size` is
+`medium` and 6 when configured `size` is `large`.
+`sysdig.mysql.hostPathNodes`: When sysdig.mysqlHa is configured to true this has
+to be at least 3 nodes and when sysdig.mysqlHa is not configured it should be
+at least one node.
+`sysdig.postgresql.hostPathNodes`: This can be ignored if Sysdig Secure is not
+licensed or used on this environment. If Secure is used, then the parameter
+should be set to 1, regardless of the environment size setting.
+
+### Example
+
+```yaml
+storageClassProvisioner: hostPath
+elasticsearch:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+sysdig:
+ cassandra:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+ mysql:
+ hostPathNodes:
+ - my-cool-host1.com
+ postgresql:
+ hostPathNodes:
+ - my-cool-host1.com
+```
+
+
+## Installer on EKS
+
+### Creating a cluster
+Please do not use eksctl 0.10.0 or 0.10.1; these versions are known to be buggy (see kubernetes/kubernetes#73906 (comment))
+```bash
+eksctl create cluster \
+ --name=eks-installer1 \
+ --node-type=m5.4xlarge \
+ --nodes=3 \
+ --version 1.14 \
+ --region=us-east-1 \
+ --vpc-public-subnets=
+```
+
+### Additional config for installer
+EKS uses aws-iam-authenticator to authorize kubectl commands.
+aws-iam-authenticator needs AWS credentials mounted from **~/.aws** into the installer container.
+```bash
+docker run \
+ -v ~/.aws:/.aws \
+ -e HOST_USER=$(id -u) \
+ -e KUBECONFIG=/.kube/config \
+ -v ~/.kube:/.kube:Z \
+ -v $(pwd):/manifests:Z \
+ quay.io/sysdig/installer:
+```
+
+### Running airgapped EKS
+
+```bash
+EKS=true bash sysdig_installer.tar.gz
+```
+
+The above ensures the `~/.aws` directory is correctly mounted for the airgap
+installer container.
+
+### Exposing the Sysdig endpoint
+Get the external ip/endpoint for the ingress service.
+```bash
+kubectl -n get service haproxy-ingress-service
+```
+In Route 53, create an A record with the DNS name pointing to the external IP/endpoint.
+
+### Gotchas
+Make sure that the subnets have an internet gateway configured and have enough IPs.
+
+## Airgapped installations
+
+### Method for automatically updating the feeds database in airgapped environments
+This is a procedure that can be used to automatically update the feeds database:
+
+1. Download the image file quay.io/sysdig/vuln-feed-database:latest from the Sysdig registry to the jumpbox server and save it locally.
+2. (Optional) Move the file from the jumpbox server to the customer's airgapped environment.
+3. Load the image file and push it to the customer's airgapped image registry.
+4. Restart the pod sysdigcloud-feeds-db.
+5. Restart the pod feeds-api.
+
+Steps 1 to 5 should then be performed periodically, once a day.
+
+This is an example script that contains all the steps:
+```bash
+#!/bin/bash
+QUAY_USERNAME=""
+QUAY_PASSWORD=""
+
+# Download image
+docker login quay.io/sysdig -u ${QUAY_USERNAME} -p ${QUAY_PASSWORD}
+docker image pull quay.io/sysdig/vuln-feed-database:latest
+# Save image
+docker image save quay.io/sysdig/vuln-feed-database:latest -o vuln-feed-database.tar
+# Optionally move image
+mv vuln-feed-database.tar /var/shared-folder
+# Load image remotely
+ssh -t user@airgapped-host "docker image load -i /var/shared-folder/vuln-feed-database.tar"
+# Push image remotely
+ssh -t user@airgapped-host "docker tag vuln-feed-database:latest airgapped-registry/vuln-feed-database:latest"
+ssh -t user@airgapped-host "docker image push airgapped-registry/vuln-feed-database:latest"
+# Restart database pod
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=0"
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-db --replicas=1"
+# Restart feeds-api pod
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=0"
+ssh -t user@airgapped-host "kubectl -n sysdigcloud scale deploy sysdigcloud-feeds-api --replicas=1"
+```
+
+The script can be scheduled using a cron job that runs every day:
+```bash
+0 8 * * * feeds-database-update.sh >/dev/null 2>&1
+```
diff --git a/installer/docs/configuration_parameters.md b/installer/docs/configuration_parameters.md
new file mode 100644
index 00000000..8968fb27
--- /dev/null
+++ b/installer/docs/configuration_parameters.md
@@ -0,0 +1,10193 @@
+# Configuration Parameters
+
+## **quaypullsecret**
+**Required**: `true`
+**Description**: quay.io credentials provided with your Sysdig purchase confirmation
+ mail.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+quaypullsecret: Y29tZS13b3JrLWF0LXN5c2RpZwo=
+```
+
+## **schema_version**
+**Required**: `true`
+**Description**: Represents the schema version of the values.yaml
+configuration. Versioning follows [Semver](https://semver.org/) (Semantic
+Versioning) and maintains semver guarantees about versioning.
+**Options**:
+**Default**: `1.0.0`
+**Example**:
+
+```yaml
+schema_version: 1.0.0
+```
+
+## **size**
+**Required**: `true`
+**Description**: Specifies the size of the cluster. Size defines CPU, Memory,
+Disk, and Replicas.
+**Options**: `small|medium|large`
+**Default**:
+**Example**:
+
+```yaml
+size: medium
+```
+
+## **kubernetesServerVersion**
+**Required**: `false`
+**Description**: The Kubernetes version of the targeted cluster.
+ This helps to programmatically determine which apiVersions should be used, e.g. for `Ingress`, `networking.k8s.io/v1`
+ must be used with k8s version 1.22+.
+**Options**:
+**Default**: If not provided, it will be pulled during the `generate` and/or `import` phases.
+**Example**:
+
+```yaml
+kubernetesServerVersion: v1.18.10
+```
+
+## **storageClassProvisioner**
+**Required**: `false`
+**Description**: The name of the [storage class
+provisioner](https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner)
+to use when creating the configured storageClassName parameter. Use hostPath
+or local in clusters that do not have a provisioner. For setups where
+Persistent Volumes and Persistent Volume Claims are created manually this
+should be configured as `none`. If this is not configured
+[`storageClassName`](#storageclassname) needs to be configured.
+**Options**: `aws|gke|hostPath|none`
+**Default**:
+**Example**:
+
+```yaml
+storageClassProvisioner: aws
+```
+
+## **apps**
+**Required**: `false`
+**Description**: Specifies the Sysdig Platform components to be installed.
+Combine multiple components by space separating them. Specify at least one
+app, for example, `monitor`.
+**Options**: `monitor|monitor secure|agent|monitor agent|monitor secure agent`
+**Default**: `monitor secure`
+**Example**:
+
+```yaml
+apps: monitor secure
+```
+
+## **airgapped_registry_name**
+**Required**: `false`
+**Description**: The URL of the airgapped (internal) docker registry. This URL
+is used for installations where the Kubernetes cluster can not pull images
+directly from Quay. See [airgap instructions
+multi-homed](../README.md#airgapped-with-multi-homed-installation-machine)
+and [full airgap instructions](../README.md#full-airgap-install) for more
+details.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_name: my-awesome-domain.docker.io
+```
+
+## **airgapped_repository_prefix**
+**Required**: `false`
+**Description**: This defines a custom repository prefix for the airgapped registry.
+Images are tagged and pushed as airgapped_registry_name/airgapped_repository_prefix/image_name:tag.
+**Options**:
+**Default**: sysdig
+**Example**:
+
+```yaml
+# tags and pushes the image to /foo/bar/
+airgapped_repository_prefix: foo/bar
+```
+
+## **airgapped_registry_password**
+**Required**: `false`
+**Description**: The password for the configured
+`airgapped_registry_username`. Ignore this parameter if the registry does not
+require authentication.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_password: my-@w350m3-p@55w0rd
+```
+
+## **airgapped_registry_username**
+**Required**: `false`
+**Description**: The username for the configured `airgapped_registry_name`.
+Ignore this parameter if the registry does not require authentication.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+airgapped_registry_username: bob+alice
+```
+
+## **deployment**
+**Required**: `false`
+**Description**: The name of the Kubernetes installation.
+**Options**: `iks|kubernetes|openshift|goldman`
+**Default**: `kubernetes`
+**Example**:
+
+```yaml
+deployment: kubernetes
+```
+
+## **context**
+**Required**: `false`
+**Description**: Kubernetes context to use for deploying Sysdig Platform.
+If this parameter is not set, or a blank value is specified, the default context will be used.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+context: production
+```
+
+## **namespace**
+**Required**: `false`
+**Description**: Kubernetes namespace to deploy Sysdig Platform to.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+namespace: sysdig
+```
+
+## **scripts**
+**Required**: `false`
+**Description**: Defines which scripts need to be run.
+ `generate`: performs templating and customization.
+ `diff`: generates diff against in-cluster configuration.
+ `deploy`: applies the generated script in Kubernetes environment.
+These options can be combined by space separating them.
+**Options**: `generate|diff|deploy|generate diff|generate deploy|diff deploy|generate diff deploy`
+**Default**: `generate deploy`
+**Example**:
+
+```yaml
+scripts: generate diff
+```
+
+## **storageClassName**
+**Required**: `false`
+**Description**: The name of the preconfigured [storage
+class](https://kubernetes.io/docs/concepts/storage/storage-classes/). If the
+storage class does not exist, Installer will attempt to create it using the
+`storageClassProvisioner` as the provisioner. This has no effect if
+`storageClassProvisioner` is configured to `none`.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+storageClassName: sysdig
+```
+
+## ~~**cloudProvider.create_loadbalancer**~~ (**Deprecated**)
+**Required**: `false`
+**Description**: This is deprecated, prefer
+[`sysdig.ingressNetworking`](#sysdigingressnetworking) instead. When set to
+true a service of type
+[LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)
+is created.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+cloudProvider:
+ create_loadbalancer: true
+```
+
+## **cloudProvider.name**
+**Required**: `false`
+**Description**: The name of the cloud provider Sysdig Platform will run on.
+**Options**: `aws|gcp`
+**Default**:
+**Example**:
+
+```yaml
+cloudProvider:
+ name: aws
+```
+
+## **cloudProvider.isMultiAZ**
+**Required**: `false`
+**Description**: Specifies whether the underlying Kubernetes cluster is
+deployed in multiple availability zones. The parameter requires
+[`cloudProvider.name`](#cloudprovidername) to be configured.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+cloudProvider:
+ isMultiAZ: false
+```
+
+## **cloudProvider.region**
+**Required**: `false`
+**Description**: The cloud provider region the underlying Kubernetes Cluster
+runs on. This parameter is required if
+[`cloudProvider.name`](#cloudprovidername) is configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+cloudProvider:
+ region: us-east-1
+```
+
+## **elasticsearch.hostPathNodes**
+**Required**: `false`
+**Description**: An array of node hostnames printed out by the `kubectl get
+node -o name` command. ElasticSearch hostPath persistent volumes should be
+created on these nodes. The number of nodes must be at minimum whatever the
+value of
+[`sysdig.elasticsearchReplicaCount`](#sysdigelasticsearchreplicacount) is.
+This is required if configured
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+elasticsearch:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+```
+
+
+## **elasticsearch.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Elasticsearch JVM.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+elasticsearch:
+ jvmOptions: -Xms4G -Xmx4G
+```
+
+## **elasticsearch.external**
+**Required**: `false`
+**Description**: If set, the Installer does not create a local Elasticsearch cluster and instead tries to connect to an external Elasticsearch cluster.
+This can be used in conjunction with [`elasticsearch.hostname`](#elasticsearchhostname)
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ external: true
+```
+
+## **elasticsearch.hostname**
+**Required**: `false`
+**Description**: External Elasticsearch hostname can be provided here and certificates for clients can be provided under certs/elasticsearch-tls-certs.
+**Options**:
+**Default**: 'sysdigcloud-elasticsearch'
+**Example**:
+
+```yaml
+elasticsearch:
+ external: true
+ hostname: external.elasticsearch.cluster
+```
+
+## **elasticsearch.useES6**
+**Required**: `false`
+**Description**: Install Elasticsearch 6.8.x along with user authentication and TLS-encrypted data-in-transit
+using Elasticsearch's native TLS encryption.
+If TLS encryption is enabled, Installer does the following in the provided order:
+ 1. Checks for existing Elasticsearch certificates in the provided environment to set up the ES cluster (applicable for upgrades).
+ 2. If they are not present, Installer autogenerates TLS certificates and uses them to set up the ES cluster.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+elasticsearch:
+ useES6: true
+```
+
+## **elasticsearch.enableMetrics**
+**Required**: `false`
+**Description**:
+Allow Elasticsearch to export prometheus metrics.
+
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ enableMetrics: true
+```
+
+## **sysdig.elasticsearchExporterVersion**
+**Required**: `false`
+**Description**: Docker image tag of Elasticsearch Metrics Exporter, relevant when configured
+`elasticsearch.enableMetrics` is `true`.
+**Options**:
+**Default**: v1.2.0
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchExporterVersion: v1.2.0
+```
+
+## **elasticsearch.tlsencryption.adminUser**
+**Required**: `false`
+**Description**: The user bound to the ElasticSearch admin role.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ tlsencryption:
+ adminUser: admin
+```
+
+## ~~**elasticsearch.searchguard.enabled**~~ (**Deprecated**)
+**Required**: `false`
+**Description**: Enables user authentication and TLS-encrypted data-in-transit
+with [Searchguard](https://search-guard.com/).
+If Searchguard is enabled, Installer does the following in the provided order:
+ 1. Checks for user-provided certificates under certs/elasticsearch-tls-certs; if present, uses them to set up the Elasticsearch (ES) cluster.
+ 2. Checks for existing Searchguard certificates in the provided environment to set up the ES cluster (applicable for upgrades).
+ 3. If neither is present, Installer autogenerates Searchguard certificates and uses them to set up the ES cluster.
+
+
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+elasticsearch:
+ searchguard:
+ enabled: false
+```
+
+## ~~**elasticsearch.searchguard.adminUser**~~ (**Deprecated**)
+**Required**: `false`
+**Description**: The user bound to the ElasticSearch Searchguard admin role.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ searchguard:
+ adminUser: admin
+```
+
+## **elasticsearch.snitch.extractCMD**
+**Required**: `false`
+**Description**: The command used to determine [elasticsearch cluster routing
+allocation awareness
+attributes](https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html).
+The command will be passed to the bash eval command and is expected to return
+a single string. For example: `cut -d- -f2 /host/etc/hostname`.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ snitch:
+ extractCMD: cut -d- -f2 /host/etc/hostname
+```
+
+## **elasticsearch.snitch.hostnameFile**
+**Required**: `false`
+**Description**: The name of the location to bind mount the host's
+`/etc/hostname` file to. This can be combined with
+[`elasticsearch.snitch.extractCMD`](#elasticsearchsnitchextractcmd) to
+determine cluster routing allocation associated with the node's hostname.
+**Options**:
+**Default**: `sysdig`
+**Example**:
+
+```yaml
+elasticsearch:
+ snitch:
+ hostnameFile: /host/etc/hostname
+```
+
+## **hostPathCustomPaths.cassandra**
+**Required**: `false`
+**Description**: The directory to bind mount Cassandra pod's
+`/var/lib/cassandra` to on the host. This parameter is relevant only when
+`storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/var/lib/cassandra`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  cassandra: /sysdig/cassandra
+```
+
+## **hostPathCustomPaths.elasticsearch**
+**Required**: `false`
+**Description**: The directory to bind mount elasticsearch pod's
+`/usr/share/elasticsearch` to on the host. This parameter is relevant only when
+`storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/usr/share/elasticsearch`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  elasticsearch: /sysdig/elasticsearch
+```
+
+## **hostPathCustomPaths.mysql**
+**Required**: `false`
+**Description**: The directory to bind mount mysql pod's `/var/lib/mysql` to
+on the host. This is relevant only when `storageClassProvisioner` is
+`hostPath`.
+**Options**:
+**Default**: `/var/lib/mysql`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  mysql: /sysdig/mysql
+```
+
+## **hostPathCustomPaths.postgresql**
+**Required**: `false`
+**Description**: The directory to bind mount PostgreSQL pod's
+`/var/lib/postgresql/data/pgdata` to on the host. This parameter is relevant
+only when `storageClassProvisioner` is `hostPath`.
+**Options**:
+**Default**: `/var/lib/postgresql/data/pgdata`
+**Example**:
+
+```yaml
+hostPathCustomPaths:
+  postgresql: /sysdig/pgdata
+```
+
+## **nodeaffinityLabel.key**
+**Required**: `false`
+**Description**: The key of the label that is used to configure the nodes that the
+Sysdig Platform pods are expected to run on. The nodes are expected to have
+been labeled with the key.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+nodeaffinityLabel:
+ key: instancegroup
+```
+
+## **nodeaffinityLabel.value**
+**Required**: `false`
+**Description**: The value of the label that is used to configure the nodes
+that the Sysdig Platform pods are expected to run on. The nodes are expected
+to have been labeled with the value of
+[`nodeaffinityLabel.key`](#nodeaffinitylabelkey), and is required if
+[`nodeaffinityLabel.key`](#nodeaffinitylabelkey) is configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+nodeaffinityLabel:
+ value: sysdig
+```
+
+## **pvStorageSize.large.cassandra**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 300Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ cassandra: 500Gi
+```
+
+## **pvStorageSize.large.elasticsearch**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch
+in a cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 300Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ elasticsearch: 500Gi
+```
+
+## **pvStorageSize.large.mysql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to MySQL in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 25Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ mysql: 100Gi
+```
+
+## **pvStorageSize.large.postgresql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 60Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.medium.cassandra**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 100Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ cassandra: 300Gi
+```
+
+## **pvStorageSize.medium.elasticsearch**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 100Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ elasticsearch: 300Gi
+```
+
+## **pvStorageSize.medium.mysql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to MySQL in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 25Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ mysql: 100Gi
+```
+
+## **pvStorageSize.medium.postgresql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 60Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.small.cassandra**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Cassandra in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ cassandra: 100Gi
+```
+
+## **pvStorageSize.small.elasticsearch**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to Elasticsearch
+in a cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ elasticsearch: 100Gi
+```
+
+## **pvStorageSize.small.mysql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to MySQL in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 25Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ mysql: 100Gi
+```
+
+## **pvStorageSize.small.postgresql**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to PostgreSQL in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 30Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ postgresql: 100Gi
+```
+
+## **pvStorageSize.large.nats**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS HA in a
+cluster of [`size`](#size) large. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 10Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ large:
+ nats: 10Gi
+```
+
+## **pvStorageSize.medium.nats**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS HA in a
+cluster of [`size`](#size) medium. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 10Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ medium:
+ nats: 10Gi
+```
+
+## **pvStorageSize.small.nats**
+**Required**: `false`
+**Description**: The size of the persistent volume assigned to NATS HA in a
+cluster of [`size`](#size) small. This option is ignored if
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: 10Gi
+**Example**:
+
+```yaml
+pvStorageSize:
+ small:
+ nats: 10Gi
+```
+
+## **sysdig.anchoreVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Anchore Core.
+**Options**:
+**Default**: 0.8.1-51
+**Example**:
+
+```yaml
+sysdig:
+ anchoreVersion: 0.8.1-51
+```
+
+## **sysdig.accessKey**
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) accessKey to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ accessKey: my_awesome_aws_access_key
+```
+
+## **sysdig.awsRegion**
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) region to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ awsRegion: my_aws_region
+```
+
+## **sysdig.secretKey**
+**Required**: `false`
+**Description**: The AWS (or AWS compatible) secretKey to be used by Sysdig
+components to communicate with AWS (or an AWS compatible API).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secretKey: my_super_secret_secret_key
+```
+
+## **sysdig.s3.enabled**
+**Required**: `false`
+**Description**: Specifies if storing Sysdig Captures in S3 or S3-compatible storage is enabled.
+**Options**:`true|false`
+**Default**:false
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ enabled: true
+```
+
+## **sysdig.s3.endpoint**
+**Required**: `false`
+**Description**: S3-compatible endpoint for the bucket, this option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured. This option is not required if using an AWS S3 Bucket for captures.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ endpoint: s3.us-south.cloud-object-storage.appdomain.cloud
+```
+
+## **sysdig.s3.bucketName**
+**Required**: `false`
+**Description**: Name of the S3 bucket to be used for captures, this option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ bucketName: my_awesome_bucket
+```
+
+## **sysdig.s3.capturesFolder**
+**Required**: `false`
+**Description**: Name of the folder in S3 bucket to be used for storing captures, this option is ignored if
+[`sysdig.s3.enabled`](#sysdigs3enabled) is not configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ s3:
+ capturesFolder: my_captures_folder
+```
+
+## **sysdig.cassandraVersion**
+**Required**: `false`
+**Description**: The docker image tag of Cassandra.
+**Options**:
+**Default**: 2.1.22.4
+**Example**:
+
+```yaml
+sysdig:
+ cassandraVersion: 2.1.22.4
+```
+
+## **sysdig.cassandraExporterVersion**
+**Required**: `false`
+**Description**: The docker image tag of Cassandra's Prometheus JMX exporter. Default image: `//promcat-jmx-exporter:latest`
+**Options**:
+**Default**: latest
+**Example**:
+
+```yaml
+sysdig:
+ cassandraExporterVersion: latest
+```
+
+## **sysdig.cassandra.useCassandra3**
+**Required**: `false`
+**Description**: Use Cassandra 3 instead of Cassandra 2. Only available for fresh installs from 4.0.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ useCassandra3: false
+```
+
+## **sysdig.cassandra3Version**
+**Required**: `false`
+**Description**: Specify the image version of Cassandra 3.x. Ignored if `sysdig.cassandra.useCassandra3` is not set to `true`. Only supported in fresh installs from 4.0.
+**Options**:
+**Default**: `3.11.11.1`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra3Version: 3.11.11.1
+```
+
+## **sysdig.cassandra.external**
+**Required**: `false`
+**Description**: If set, the Installer does not create a local Cassandra cluster and instead tries to connect to an external Cassandra cluster.
+This can be used in conjunction with [`sysdig.cassandra.endpoint`](#sysdigcassandraendpoint)
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ external: true
+```
+
+## **sysdig.cassandra.endpoint**
+**Required**: `false`
+**Description**: External Cassandra endpoint can be provided here.
+**Options**:
+**Default**: 'sysdigcloud-cassandra'
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ external: true
+ endpoint: external.cassandra.cluster
+```
+
+## **sysdig.cassandra.secure**
+**Required**: `false`
+**Description**: Enables the Cassandra server and clients to use authentication.
+**Options**: `true|false`
+**Default**:`true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ secure: true
+ ssl: true
+```
+
+## **sysdig.cassandra.ssl**
+**Required**: `false`
+**Description**: Enables the Cassandra server and clients to communicate over SSL. Defaults to `true` for Cassandra 3 installs (available from 4.0).
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ secure: true
+ ssl: true
+```
+
+## **sysdig.cassandra.enableMetrics**
+**Required**: `false`
+**Description**: Enables the Cassandra exporter as a sidecar. Defaults to `false` for all Cassandra installs (available from 4.0).
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ enableMetrics: true
+```
+
+## **sysdig.cassandra.user**
+**Required**: `false`
+**Description**: Sets the Cassandra user. Note that the user name cannot be a substring of `sysdigcloud-cassandra`.
+**Options**:
+**Default**: `sysdigcassandra`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ user: cassandrauser
+```
+
+## **sysdig.cassandra.password**
+**Required**: `false`
+**Description**: Sets cassandra password
+**Options**:
+**Default**: Autogenerated 16 alphanumeric characters
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ user: cassandrauser
+ password: cassandrapassword
+```
+
+## **sysdig.cassandra.workloadName**
+**Required**: `false`
+**Description**: Name assigned to the Cassandra objects (statefulset and service).
+**Options**:
+**Default**: `sysdigcloud-cassandra`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ workloadName: sysdigcloud-cassandra
+```
+
+## **sysdig.cassandra.customOverrides**
+**Required**: `false`
+**Description**: The custom overrides of Cassandra's default configuration. The parameter
+expects a YAML block of key-value pairs as described in the [Cassandra
+documentation](https://docs.datastax.com/en/archived/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html).
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ customOverrides: |
+ hinted_handoff_enabled: false
+ concurrent_compactors: 8
+ read_request_timeout_in_ms: 10000
+ write_request_timeout_in_ms: 10000
+```
+
+## **sysdig.cassandra.datacenterName**
+**Required**: `false`
+**Description**: The datacenter name used for the [Cassandra
+Snitch](http://cassandra.apache.org/doc/latest/operating/snitch.html).
+**Options**:
+**Default**: In AWS the value is ec2Region as determined by the code
+[here](https://github.com/apache/cassandra/blob/a85afbc7a83709da8d96d92fc4154675794ca7fb/src/java/org/apache/cassandra/locator/Ec2Snitch.java#L61-L63),
+elsewhere defaults to an empty string.
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ datacenterName: my-cool-datacenter
+```
+
+## **sysdig.cassandra.jvmOptions**
+**Required**: `false`
+**Description**: The custom configuration for Cassandra JVM.
+**Options**:
+**Default**: `-Xms4g -Xmx4g`
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ jvmOptions: -Xms6G -Xmx6G -XX:+PrintGCDateStamps -XX:+PrintGCDetails
+```
+
+## **sysdig.cassandra.hostPathNodes**
+**Required**: `false`
+**Description**: An array of node hostnames printed out by the `kubectl get node -o
+name` command. These are the nodes on which Cassandra hostPath persistent volumes should be created. The number of nodes must be at minimum whatever the value of
+[`sysdig.cassandraReplicaCount`](#sysdigcassandrareplicacount) is. This is
+required if configured [`storageClassProvisioner`](#storageclassprovisioner)
+is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+ cassandra:
+ hostPathNodes:
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+```
+
+## **sysdig.collectorPort**
+**Required**: `false`
+**Description**: The port to publicly serve Sysdig collector on.
+_**Note**: collectorPort is not configurable in openshift deployments. It is always 443._
+**Options**: `1024-65535`
+**Default**: `6443`
+**Example**:
+
+```yaml
+sysdig:
+ collectorPort: 7000
+```
+
+## **sysdig.certificate.customCA**
+**Required**: `false`
+**Description**:
+The Sysdig platform may sometimes open connections over SSL to certain external services, including:
+ - LDAP over SSL
+ - SAML over SSL
+ - OpenID Connect over SSL
+ - HTTPS Proxies
+If the signing authorities for the certificates presented by these services are not well-known to the Sysdig Platform
+ (e.g., if you maintain your own Certificate Authority), they are not trusted by default.
+
+To allow the Sysdig platform to trust these certificates, use this configuration to upload one or more
+PEM-format CA certificates. You must ensure you've uploaded all certificates in the CA approval chain to the root CA.
+
+This configuration when set expects certificates with .crt, .pem or .p12 extensions under certs/custom-java-certs/
+in the same level as `values.yaml`.
+
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```bash
+# In the example directory structure below, certificate1.crt and certificate2.crt will be added to the trusted list.
+# certificate3.p12 will be loaded to the keystore together with its private key.
+bash-5.0$ find certs values.yaml
+certs
+certs/custom-java-certs
+certs/custom-java-certs/certificate1.crt
+certs/custom-java-certs/certificate2.crt
+certs/custom-java-certs/certificate3.p12
+certs/custom-java-certs/certificate3.p12.passwd
+
+
+values.yaml
+```
+
+```yaml
+sysdig:
+ certificate:
+ customCA: true
+```
+
+## **sysdig.dnsName**
+**Required**: `true`
+**Description**: The domain name the Sysdig APIs will be served on.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ dnsName: my-awesome-domain-name.com
+```
+
+## **sysdig.elasticsearchVersion**
+**Required**: `false`
+**Description**: The docker image tag of Elasticsearch.
+**Options**:
+**Default**: 5.6.16.18
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchVersion: 5.6.16.18
+```
+
+## **sysdig.elasticsearch6Version**
+**Required**: `false`
+**Description**: The docker image tag of Elasticsearch 6.
+**Options**:
+**Default**: 6.8.6.12
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearch6Version: 6.8.6.12
+```
+
+## **sysdig.haproxyVersion**
+**Required**: `false`
+**Description**: The docker image tag of HAProxy ingress controller. The
+parameter is relevant only when configured `deployment` is `kubernetes`.
+**Options**:
+**Default**: v0.7-beta.7.1
+**Example**:
+
+```yaml
+sysdig:
+ haproxyVersion: v0.7-beta.7.1
+```
+
+## **sysdig.ingressNetworking**
+**Required**: `false`
+**Description**: The networking construct used to expose the Sysdig API and collector.
+* hostnetwork, sets the hostnetworking in ingress daemonset and opens host ports for api and collector. This does not create a service.
+* loadbalancer, creates a service of type [`loadbalancer`](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)
+* nodeport, creates a service of type [`nodeport`](https://kubernetes.io/docs/concepts/services-networking/#nodeport). The node ports can be customized with:
+ * [`sysdig.ingressNetworkingInsecureApiNodePort`](#sysdigingressnetworkinginsecureapinodeport)
+ * [`sysdig.ingressNetworkingApiNodePort`](#sysdigingressnetworkingapinodeport)
+ * [`sysdig.ingressNetworkingCollectorNodePort`](#sysdigingressnetworkingcollectornodeport)
+* external, assumes external ingress is used and does not create ingress objects.
+
+**Options**:
+[`hostnetwork`](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces)|[`loadbalancer`](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)|[`nodeport`](https://kubernetes.io/docs/concepts/services-networking/#nodeport)| external
+
+**Default**: `hostnetwork`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworking: loadbalancer
+```
+
+## **sysdig.ingressNetworkingInsecureApiNodePort**
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig non-TLS API endpoint.
+**Options**:
+**Default**: `30000`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingInsecureApiNodePort: 30000
+```
+
+## **sysdig.ingressLoadBalancerAnnotation**
+**Required**: `false`
+**Description**: Annotations that will be added to the
+`haproxy-ingress-service` object. This is useful for setting annotations
+related to creating internal load balancers.
+**Options**:
+**Example**:
+
+```yaml
+sysdig:
+ ingressLoadBalancerAnnotation:
+ cloud.google.com/load-balancer-type: Internal
+```
+
+## **sysdig.ingressNetworkingApiNodePort**
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig TLS API endpoint.
+**Options**:
+**Default**: `30001`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingApiNodePort: 30001
+```
+
+## **sysdig.ingressNetworkingCollectorNodePort**
+**Required**: `false`
+**Description**: When [`sysdig.ingressNetworking`](#sysdigingressnetworking)
+is configured as `nodeport`, this is the NodePort requested by Installer
+from Kubernetes for the Sysdig collector endpoint.
+**Options**:
+**Default**: `30002`
+**Example**:
+
+```yaml
+sysdig:
+ ingressNetworkingCollectorNodePort: 30002
+```
+
+## **sysdig.license**
+**Required**: `true`
+**Description**: Sysdig license provided with the deployment.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ license: replace_with_your_license
+```
+
+## **sysdig.monitorVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Monitor. **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences**
+**Options**:
+**Default**: 5.0.4.11001
+**Example**:
+
+```yaml
+sysdig:
+ monitorVersion: 5.0.4.11001
+```
+
+## **sysdig.secureVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Secure, if this is not
+configured it defaults to `sysdig.monitorVersion` **Do not modify
+this unless you know what you are doing as modifying it could have unintended
+consequences**
+**Options**:
+**Default**: 5.0.4.11001
+**Example**:
+
+```yaml
+sysdig:
+ secureVersion: 5.0.4.11001
+```
+
+## **sysdig.sysdigAPIVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig API components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing, as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 5.0.4.11001
+**Example**:
+
+```yaml
+sysdig:
+ sysdigAPIVersion: 5.0.4.11001
+```
+
+## **sysdig.sysdigCollectorVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Collector components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing, as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 5.0.4.11001
+**Example**:
+
+```yaml
+sysdig:
+ sysdigCollectorVersion: 5.0.4.11001
+```
+
+## **sysdig.sysdigWorkerVersion**
+**Required**: `false`
+**Description**: The docker image tag of the Sysdig Worker components. If
+this is not configured, it defaults to `sysdig.monitorVersion`. **Do not modify
+this unless you know what you are doing, as modifying it could have unintended
+consequences.**
+**Options**:
+**Default**: 5.0.4.11001
+**Example**:
+
+```yaml
+sysdig:
+ sysdigWorkerVersion: 5.0.4.11001
+```
+
+## **sysdig.enableAlerter**
+**Required**: `false`
+**Description**: This creates a separate deployment for Alerters while
+disabling this functionality in workers. **Do not modify this unless you
+know what you are doing, as modifying it could have unintended
+consequences.**
+**Options**:`true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ enableAlerter: true
+```
+
+## **sysdig.alertingSystem.enabled**
+**Required**: `false`
+**Description**: Enable or disable the new alert-manager and alert-notifier deployment
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ enabled: true
+```
+
+## **sysdig.alertingSystem.alertManager.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig Alert Manager jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManager:
+ jvmOptions: -Dsysdig.redismq.watermark.consumer.threads=20
+```
+
+## **sysdig.alertingSystem.alertManager.apiToken**
+**Required**: `false`
+**Description**: API token used by the Alert Manager to communicate with the sysdig API server
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManager:
+ apiToken: A_VALID_TOKEN
+```
+
+## **sysdig.alertingSystem.alertNotifier.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig Alert Notifier jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifier:
+ jvmOptions: -Dsysdig.redismq.watermark.consumer.threads=20
+```
+
+## **sysdig.alertingSystem.alertNotifier.apiToken**
+**Required**: `false`
+**Description**: API token used by the Alert Notifier to communicate with the sysdig API server
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifier:
+ apiToken: A_VALID_TOKEN
+```
+
+## **sysdig.alertingSystem.alertNotifierReplicaCount**
+**Required**: `false`
+**Description**: Number of replicas for the alertNotifier.
+**Options**:
+**Default**: small: 1, medium: 3, large: 5
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertNotifierReplicaCount: 3
+```
+
+## **sysdig.alertingSystem.alertManagerReplicaCount**
+**Required**: `false`
+**Description**: Number of replicas for the alertManager.
+**Options**:
+**Default**: small: 1, medium: 3, large: 5
+**Example**:
+
+```yaml
+sysdig:
+ alertingSystem:
+ alertManagerReplicaCount: 3
+```
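+
+Taken together, the alerting-system parameters above can be set in one block.
+A sketch, with illustrative replica counts:
+
+```yaml
+sysdig:
+  alertingSystem:
+    enabled: true
+    alertManagerReplicaCount: 3
+    alertNotifierReplicaCount: 3
+```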
+
+## **sysdig.mysqlHa**
+**Required**: `false`
+**Description**: Determines if mysql should run in HA mode.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ mysqlHa: false
+```
+
+## **sysdig.useMySQL8**
+**Required**: `false`
+**Description**: Determines if standalone MySQL should run MySQL 8.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ useMySQL8: true
+```
+
+## **sysdig.mysqlHaVersion**
+**Required**: `false`
+**Description**: The docker image tag of MySQL used for HA.
+**Options**:
+**Default**: 8.0.16.4
+**Example**:
+
+```yaml
+sysdig:
+ mysqlHaVersion: 8.0.16.4
+```
+
+## **sysdig.mysqlHaAgentVersion**
+**Required**: `false`
+**Description**: The docker image tag of MySQL Agent used for HA.
+**Options**:
+**Default**: 0.1.1.6
+**Example**:
+
+```yaml
+sysdig:
+ mysqlHaAgentVersion: 0.1.1.6
+```
+
+## **sysdig.mysqlVersion**
+**Required**: `false`
+**Description**: The docker image tag of MySQL.
+**Options**:
+**Default**: 5.6.44.0
+**Example**:
+
+```yaml
+sysdig:
+ mysqlVersion: 5.6.44.0
+```
+
+## **sysdig.mysql8Version**
+**Required**: `false`
+**Description**: The docker image tag of MySQL8.
+**Options**:
+**Default**: 8.0.16.0
+**Example**:
+
+```yaml
+sysdig:
+  mysql8Version: 8.0.16.0
+```
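+
+To run the standalone MySQL 8 image with an explicit tag, the two parameters
+above can be combined. A sketch (the tag shown is the current default):
+
+```yaml
+sysdig:
+  useMySQL8: true
+  mysql8Version: 8.0.16.0
+```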
+
+## **sysdig.mysql.external**
+**Required**: `false`
+**Description**: If set, the installer does not create a local MySQL cluster;
+instead it sets up the Sysdig platform to connect to the configured
+[`sysdig.mysql.hostname`](#sysdigmysqlhostname).
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+ external: true
+```
+
+## **sysdig.mysql.hostname**
+**Required**: `false`
+**Description**: Name of the MySQL host that the Sysdig platform components
+should connect to.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+ hostname: mysql.foo.com
+```
+
+## **sysdig.mysql.hostPathNodes**
+**Required**: `false`
+**Description**: An array of node hostnames, as printed by the `kubectl get
+node -o name` command, on which MySQL hostPath persistent volumes should be
+created. The number of nodes must be at least the value of
+[`sysdig.mysqlReplicaCount`](#sysdigmysqlreplicacount). This
+parameter is required if the configured
+[`storageClassProvisioner`](#storageclassprovisioner) is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+ hostPathNodes:
+ - my-cool-host1.com
+```
+
+## **sysdig.mysql.maxConnections**
+**Required**: `false`
+**Description**: The maximum permitted number of simultaneous client connections.
+**Options**:
+**Default**: `1024`
+
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+ maxConnections: 1024
+```
+
+## **sysdig.mysql.password**
+**Required**: `false`
+**Description**: The password of the MySQL user that the Sysdig Platform backend
+components will use in communicating with MySQL.
+**Options**:
+**Default**: `mysql-admin`
+
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+    password: awesome-password
+```
+
+## **sysdig.mysql.user**
+**Required**: `false`
+**Description**: The username of the MySQL user that the Sysdig Platform backend
+components will use in communicating with MySQL.
+_**Note**: Do NOT use `root` user for this value._
+**Options**:
+**Default**: `mysql-admin`
+
+**Example**:
+
+```yaml
+sysdig:
+ mysql:
+ user: awesome-user
+```
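+
+For an external MySQL setup, the parameters above are typically combined.
+A sketch, where the hostname and credentials are illustrative placeholders:
+
+```yaml
+sysdig:
+  mysql:
+    external: true
+    hostname: mysql.foo.com
+    user: awesome-user
+    password: awesome-password
+```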
+
+## **sysdig.natsExporterVersion**
+**Required**: `false`
+**Description**: Docker image tag of the Prometheus exporter for NATS.
+**Options**:
+**Default**: 0.7.0.1
+**Example**:
+
+```yaml
+sysdig:
+ natsExporterVersion: 0.7.0.1
+```
+
+## **sysdig.natsStreamingVersion**
+**Required**: `false`
+**Description**: Docker image tag of NATS streaming.
+**Options**:
+**Default**: 0.22.0.2
+**Example**:
+
+```yaml
+sysdig:
+ natsStreamingVersion: 0.22.0.2
+```
+
+## **sysdig.natsStreamingInitVersion**
+**Required**: `false`
+**Description**: Docker image tag of NATS streaming init.
+**Options**:
+**Default**: 0.22.0.2
+**Example**:
+
+```yaml
+sysdig:
+ natsStreamingInitVersion: 0.22.0.2
+```
+
+## **sysdig.nats.secure.enabled**
+**Required**: `false`
+**Description**: NATS Streaming TLS enabled.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ secure:
+ enabled: true
+```
+
+## **sysdig.nats.secure.username**
+**Required**: `true` when `sysdig.nats.secure.enabled` is set to true
+**Description**: NATS username
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ secure:
+ enabled: true
+ username: somevalue
+```
+
+## **sysdig.nats.secure.password**
+**Required**: `true` when `sysdig.nats.secure.enabled` is set to true
+**Description**: NATS password
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ secure:
+ enabled: true
+ password: somevalue
+```
+
+## **sysdig.nats.ha.enabled**
+**Required**: `false`
+**Description**: NATS Streaming HA (High Availability) enabled.
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ ha:
+ enabled: false
+```
+
+## **sysdig.nats.urlha**
+**Required**: `false`
+**Description**: NATS Streaming URL for HA deployment.
+**Options**:
+**Default**: nats://sysdigcloud-nats-streaming-cluster-0.sysdigcloud-nats-streaming-cluster:4222,nats://sysdigcloud-nats-streaming-cluster-1.sysdigcloud-nats-streaming-cluster:4222,nats://sysdigcloud-nats-streaming-cluster-2.sysdigcloud-nats-streaming-cluster:4222
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ urlha: nats://sysdigcloud-nats-streaming-cluster-0.sysdigcloud-nats-streaming-cluster:4222,nats://sysdigcloud-nats-streaming-cluster-1.sysdigcloud-nats-streaming-cluster:4222,nats://sysdigcloud-nats-streaming-cluster-2.sysdigcloud-nats-streaming-cluster:4222
+```
+
+## **sysdig.nats.urltls**
+**Required**: `false`
+**Description**: NATS Streaming URL when TLS is enabled.
+**Options**:
+**Default**: nats://sysdigcloud-nats-streaming-tls:4222
+**Example**:
+
+```yaml
+sysdig:
+ nats:
+ urltls: nats://sysdigcloud-nats-streaming-tls:4222
+```
+
+## **sysdig.openshiftUrl**
+**Required**: `false`
+**Description**: OpenShift API URL along with its port number. This is
+required if the configured `deployment` is `openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftUrl: https://api.my-awesome-openshift.com:6443
+```
+
+## **sysdig.openshiftUser**
+**Required**: `false`
+**Description**: Username of the user to access the configured
+`sysdig.openshiftUrl`. This is required if the configured `deployment` is `openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftUser: bob+alice
+```
+
+## **sysdig.openshiftPassword**
+**Required**: `false`
+**Description**: Password of the user (`sysdig.openshiftUser`) to access the
+configured `sysdig.openshiftUrl`. This is required if the configured
+`deployment` is `openshift`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ openshiftPassword: my-@w350m3-p@55w0rd
+```
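+
+For an OpenShift deployment, the three parameters above are usually provided
+together. A sketch with illustrative values:
+
+```yaml
+sysdig:
+  openshiftUrl: https://api.my-awesome-openshift.com:6443
+  openshiftUser: bob+alice
+  openshiftPassword: my-@w350m3-p@55w0rd
+```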
+
+## **sysdig.postgresVersion**
+**Required**: `false`
+**Description**: Docker image tag of PostgreSQL, relevant when the configured
+`apps` is `monitor secure` and `sysdig.postgresql.ha.enabled` is `false`.
+**Options**:
+**Default**: 10.6.11
+**Example**:
+
+```yaml
+sysdig:
+ postgresVersion: 10.6.11
+```
+
+## **sysdig.mysqlToPostgresMigrationVersion**
+**Required**: `false`
+**Description**: The docker image tag for MySQL to PostgreSQL migration.
+**Options**:
+**Default**: 1.2.5-mysql-to-postgres
+**Example**:
+
+```yaml
+sysdig:
+ mysqlToPostgresMigrationVersion: 1.2.5-mysql-to-postgres
+```
+
+## **sysdig.postgresql.rootUser**
+**Required**: `false`
+**Description**: Root user of the in-cluster postgresql instance.
+**Options**:
+**Default**: `postgres`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootUser: postgres
+```
+
+## **sysdig.postgresql.rootDb**
+**Required**: `false`
+**Description**: Root database of the in-cluster postgresql instance.
+**Options**:
+**Default**: `anchore`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootDb: anchore
+```
+
+## **sysdig.postgresql.rootPassword**
+**Required**: `false`
+**Description**: Password for the root user of the in-cluster postgresql instance.
+**Options**:
+**Default**: Autogenerated 16 alphanumeric characters
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ rootPassword: my_root_password
+```
+
+## **sysdig.postgresql.primary**
+**Required**: `false`
+**Description**: If set, the installer starts the MySQL to PostgreSQL migration (if not already performed), and services will start in PostgreSQL mode.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+```
+
+## **sysdig.postgresql.external**
+**Required**: `false`
+**Description**: If set, the installer does not create a local PostgreSQL cluster; instead it sets up the Sysdig platform to connect to the configured `sysdig.postgresDatabases.*.host` databases.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ padvisor:
+ host: my-padvisor-db-external.com
+ sysdig:
+ host: my-sysdig-db-external.com
+```
+
+## **sysdig.postgresql.hostPathNodes**
+**Required**: `false`
+**Description**: An array of node hostnames, as shown in `kubectl get node -o
+name`, on which postgresql hostPath persistent volumes should be created. The
+number of nodes must be at least the value of
+[`sysdig.postgresReplicaCount`](#sysdigpostgresreplicacount). This is
+required if the configured [`storageClassProvisioner`](#storageclassprovisioner)
+is `hostPath`.
+**Options**:
+**Default**: []
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ hostPathNodes:
+ - my-cool-host1.com
+```
+
+## **sysdig.postgresql.pgParameters**
+**Required**: `false`
+**Description**: A dictionary of PostgreSQL parameter names and values to apply to the cluster.
+**Options**:
+**Default**: ``
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ pgParameters:
+ max_connections: '1024'
+ shared_buffers: '110MB'
+```
+
+
+## **sysdig.postgresql.ha.enabled**
+**Required**: `false`
+**Description**: Set to `true` to deploy PostgreSQL in HA mode.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ enabled: true
+```
+
+## **sysdig.postgresql.ha.spiloVersion**
+**Required**: `false`
+**Description**: Docker image tag of the postgreSQL node in HA mode.
+**Options**:
+**Default**: `2.0-p7`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ spiloVersion: 2.0-p7
+```
+
+## **sysdig.postgresql.ha.operatorVersion**
+**Required**: `false`
+**Description**: Docker image tag of the PostgreSQL operator pod that orchestrates PostgreSQL nodes in HA mode.
+**Options**:
+**Default**: `v1.6.3`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ operatorVersion: v1.6.3
+```
+
+## **sysdig.postgresql.ha.exporterVersion**
+**Required**: `false`
+**Description**: Docker image tag of the prometheus exporter for postgreSQL in HA mode.
+**Options**:
+**Default**: `latest`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ exporterVersion: v0.3
+```
+
+## **sysdig.postgresql.ha.clusterDomain**
+**Required**: `false`
+**Description**: DNS domain inside the cluster. Needed by the postgres operator to select the correct Kubernetes API endpoint.
+**Options**:
+**Default**: `cluster.local`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ clusterDomain: cluster.local
+```
+
+## **sysdig.postgresql.ha.replicas**
+**Required**: `false`
+**Description**: Number of replicas for PostgreSQL nodes in HA mode.
+**Options**:
+**Default**: `3`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ replicas: 3
+```
+
+## **sysdig.postgresql.ha.checkCRDs**
+**Required**: `false`
+**Description**: Check if the Zalando postgres operator CRDs are already present and, if so, stop the installation. If disabled, the installation will continue even if the CRDs are present.
+**Options**:
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+      checkCRDs: true
+```
+
+## **sysdig.postgresql.ha.enableExporter**
+**Required**: `false`
+**Description**: If `true`, a sidecar prometheus exporter for PostgreSQL in HA mode is created.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ enableExporter: true
+```
+
+## **sysdig.postgresql.ha.migrate.retryCount**
+**Required**: `false`
+**Description**: Number of retries while waiting for the migration job from PostgreSQL in single-node mode to HA mode to complete.
+**Options**:
+**Default**: `3600`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retryCount: 3600
+```
+
+## **sysdig.postgresql.ha.migrate.retrySleepSeconds**
+**Required**: `false`
+**Description**: Wait time (in seconds) between checks for the migration job from PostgreSQL in single-node mode to HA mode.
+**Options**:
+**Default**: `10`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retrySleepSeconds: 10
+```
+
+## **sysdig.postgresql.ha.migrate.retainBackup**
+**Required**: `false`
+**Description**: If `true`, the statefulset and PVC of the single-node PostgreSQL instance are not deleted after the migration to HA mode.
+**Options**: `true|false`
+**Default**: `true`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ retainBackup: true
+```
+
+## **sysdig.postgresql.ha.migrate.migrationJobImageVersion**
+**Required**: `false`
+**Description**: Docker image tag of the migration job from postgres single node to HA mode.
+**Options**:
+**Default**: `postgres-to-postgres-ha-0.0.4`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ migrate:
+ migrationJobImageVersion: v0.1
+```
+
+## **sysdig.postgresql.ha.customTls.enabled**
+**Required**: `false`
+**Description**: If set to `true`, the option to add custom certificates and a
+custom CA is passed to the target postgres CRD.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+```
+
+## **sysdig.postgresql.ha.customTls.crtSecretName**
+**Required**: `false`
+**Description**: When customTls is enabled, this is the name of the k8s secret
+that contains the certificate and key that will be used in postgres HA for SSL.
+NOTE: the certificate and key files must be called `tls.crt` and `tls.key`.
+**Options**: `secret-name`
+**Default**: `nil`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+ crtSecretName: sysdigcloud-postgres-tls-crt
+```
+
+## **sysdig.postgresql.ha.customTls.caSecretName**
+**Required**: `false`
+**Description**: When customTls is enabled, this is the name of the k8s secret
+that contains the CA certificate that will be used in postgres HA for SSL.
+NOTE: the CA certificate file must be called `ca.crt`.
+**Options**: `secret-name`
+**Default**: `nil`
+
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ ha:
+ customTls:
+ enabled: true
+ crtSecretName: sysdigcloud-postgres-tls-crt
+ caSecretName: sysdigcloud-postgres-tls-ca
+```
+
+## **sysdig.postgresDatabases.useNonAdminUsers**
+**Required**: `false`
+**Description**: If set, the services will connect to `anchore` and `profiling` databases in non-root mode: this also means that `anchore` and `profiling` connection details and credentials will be fetched from `sysdigcloud-postgres-config` configmap and `sysdigcloud-postgres-secret` secret, instead of `sysdigcloud-config` configmap and `sysdigcloud-anchore` secret. It only works if `sysdig.postgresql.external` is set.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ anchore:
+ host: my-anchore-db-external.com
+ profiling:
+ host: my-profiling-db-external.com
+```
+
+## **sysdig.postgresDatabases.anchore**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `anchore` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresDatabases.useNonAdminUsers` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ anchore:
+ host: my-anchore-db-external.com
+ port: 5432
+ db: anchore_db
+ username: anchore_user
+ password: my_anchore_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.profiling**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `profiling` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresDatabases.useNonAdminUsers` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ useNonAdminUsers: true
+ profiling:
+ host: my-profiling-db-external.com
+ port: 5432
+      db: profiling_db
+ username: profiling_user
+ password: my_profiling_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.policies**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `policies` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ policies:
+ host: my-policies-db-external.com
+ port: 5432
+ db: policies_db
+ username: policies_user
+ password: my_policies_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.scanning**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `scanning` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ scanning:
+ host: my-scanning-db-external.com
+ port: 5432
+ db: scanning_db
+ username: scanning_user
+ password: my_scanning_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.reporting**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `reporting` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ reporting:
+ host: my-reporting-db-external.com
+ port: 5432
+ db: reporting_db
+ username: reporting_user
+ password: my_reporting_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.padvisor**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `padvisor` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ padvisor:
+ host: my-padvisor-db-external.com
+ port: 5432
+ db: padvisor_db
+ username: padvisor_user
+ password: my_padvisor_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.sysdig**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `sysdig` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ sysdig:
+ host: my-sysdig-db-external.com
+ port: 5432
+ db: sysdig_db
+ username: sysdig_user
+ password: my_sysdig_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.serviceOwnerManagement**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `serviceOwnerManagement` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ serviceOwnerManagement:
+ host: my-som-db-external.com
+ port: 5432
+ db: som_db
+ username: som_user
+ password: my_som_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.beacon**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `beacon` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured and Beacon for IBM PlatformMetrics is enabled.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ beacon:
+ host: my-beacon-db-external.com
+ port: 5432
+ db: beacon_db
+ username: beacon_user
+ password: my_beacon_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.promBeacon**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `promBeacon` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured and Generalized Beacon is enabled.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ promBeacon:
+ host: my-prom-beacon-db-external.com
+ port: 5432
+ db: prom_beacon_db
+ username: prom_beacon_user
+ password: my_prom_beacon_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.quartz**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `quartz` database. To use in conjunction with `sysdig.postgresql.external`. Only relevant if `sysdig.postgresql.primary` is configured.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ primary: true
+ external: true
+ postgresDatabases:
+ quartz:
+ host: my-quartz-db-external.com
+ port: 5432
+ db: quartz_db
+ username: quartz_user
+ password: my_quartz_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.compliance**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `compliance` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ compliance:
+ host: my-compliance-db-external.com
+ port: 5432
+ db: compliance_db
+ username: compliance_user
+ password: my_compliance_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.admissionController**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `admissionController` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ admissionController:
+ host: my-admission-controller-db-external.com
+ port: 5432
+ db: admission_controller_db
+ username: admission_controller_user
+ password: my_admission_controller_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.postgresDatabases.rapidResponse**
+**Required**: `false`
+**Description**: A map containing database connection details for external postgresql instance used as `rapidResponse` database. To use in conjunction with `sysdig.postgresql.external`.
+**Example**:
+
+```yaml
+sysdig:
+ postgresql:
+ external: true
+ postgresDatabases:
+ rapidResponse:
+ host: my-rapid-response-db-external.com
+ port: 5432
+ db: rapid_response_db
+ username: rapid_response_user
+ password: my_rapid_response_user_password
+ sslmode: disable
+ admindb: root_db
+ adminusername: root_user
+ adminpassword: my_root_user_password
+```
+
+## **sysdig.proxy.defaultNoProxy**
+**Required**: `false`
+**Description**: Default comma separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.proxy.enable`](#sysdigproxyenable) is configured, and
+should only be used if there is an intent to override the defaults provided by
+Installer; otherwise consider [`sysdig.proxy.noProxy`](#sysdigproxynoproxy)
+instead.
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ defaultNoProxy: 127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api
+```
+
+## **sysdig.proxy.enable**
+**Required**: `false`
+**Description**: Determines if a [web
+proxy](https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers) should be
+used by Anchore for fetching CVE feed from
+[https://api.sysdigcloud.com/api/scanning-feeds/v1/feeds](https://api.sysdigcloud.com/api/scanning-feeds/v1/feeds) and by the events forwarder to forward to HTTP based targets.
+**Options**:
+**Default**: `false`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+```
+
+## **sysdig.proxy.host**
+**Required**: `false`
+**Description**: The address of the web proxy; this could be a domain name or
+an IP address. This is required if [`sysdig.proxy.enable`](#sysdigproxyenable)
+is configured.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ host: my-awesome-proxy.my-awesome-domain.com
+```
+
+## **sysdig.proxy.noProxy**
+**Required**: `false`
+**Description**: Comma separated list of addresses or domain names
+that can be reached without going through the configured web proxy. This is
+only relevant if [`sysdig.proxy.enable`](#sysdigproxyenable) is configured, and
+is appended to the list in
+[`sysdig.proxy.defaultNoProxy`](#sysdigproxydefaultnoproxy).
+**Options**:
+**Default**: `127.0.0.1, localhost, sysdigcloud-anchore-core, sysdigcloud-anchore-api`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ noProxy: my-awesome.domain.com, 192.168.0.0/16
+```
+
+## **sysdig.proxy.password**
+**Required**: `false`
+**Description**: The password used to access the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ password: F00B@r!
+```
+
+## **sysdig.proxy.port**
+**Required**: `false`
+**Description**: The port the configured
+[`sysdig.proxy.host`](#sysdigproxyhost) is listening on. If this is not
+configured it defaults to 80.
+**Options**:
+**Default**: `80`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ port: 3128
+```
+
+## **sysdig.proxy.protocol**
+**Required**: `false`
+**Description**: The protocol to use to communicate with the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**: `http|https`
+**Default**: `http`
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ protocol: https
+```
+
+## **sysdig.proxy.user**
+**Required**: `false`
+**Description**: The user used to access the configured
+[`sysdig.proxy.host`](#sysdigproxyhost).
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ proxy:
+ enable: true
+ user: alice
+```
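+
+Putting the proxy parameters together, an authenticated proxy configuration
+might look like the following sketch (host, port, and credentials are
+illustrative placeholders):
+
+```yaml
+sysdig:
+  proxy:
+    enable: true
+    protocol: https
+    host: my-awesome-proxy.my-awesome-domain.com
+    port: 3128
+    user: alice
+    password: F00B@r!
+    noProxy: my-awesome.domain.com, 192.168.0.0/16
+```
+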
+## **sysdig.slack.client.id**
+**Required**: `false`
+**Description**: Your Slack application client_id, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `awesomeclientid`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ id: 2255883163.123123123534
+```
+
+## **sysdig.slack.client.secret**
+**Required**: `false`
+**Description**: Your Slack application client_secret, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `awesomeclientsecret`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ secret: 8a8af18123128acd312d12d12da
+```
+
+## **sysdig.slack.client.scope**
+**Required**: `false`
+**Description**: Your Slack application scope, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `incoming-webhook`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ scope: incoming-webhook
+```
+
+## **sysdig.slack.client.endpoint**
+**Required**: `false`
+**Description**: Your Slack application authorization endpoint, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `https://slack.com/oauth/v2/authorize`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ endpoint: https://slack.com/oauth/v2/authorize
+```
+
+## **sysdig.slack.client.oauth.endpoint**
+**Required**: `false`
+**Description**: Your Slack application oauth endpoint, needed for Sysdig Platform to send Slack notifications
+**Options**:
+**Default**: `https://slack.com/api/oauth.v2.access`
+
+**Example**:
+
+```yaml
+sysdig:
+ slack:
+ client:
+ oauth:
+ endpoint: https://slack.com/api/oauth.v2.access
+```
+
+## **sysdig.saml.certificate.name**
+**Required**: `false`
+**Description**: The filename of the certificate that will be used for signing SAML requests.
+The certificate file needs to be passed via `sysdig.certificate.customCA` and the filename should match
+the certificate name used when creating the certificate.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ saml:
+ certificate:
+ name: saml-cert.p12
+```
+
+## **sysdig.saml.certificate.password**
+**Required**: `false`
+**Description**: The password required to read the certificate that will be used for signing SAML requests.
+If `sysdig.saml.certificate.name` is set, this parameter needs to be set as well.
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ saml:
+ certificate:
+ name: saml-cert.p12
+ password: changeit
+```
+
+## **sysdig.inactivitySettings.trackerEnabled**
+**Required**: `false`
+**Description**: Enables the inactivity tracker. If the user performs no actions within the configured timeout, they will be logged out automatically.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+```yaml
+sysdig:
+ inactivitySettings:
+ trackerEnabled: true
+```
+
+## **sysdig.inactivitySettings.trackerTimeout**
+**Required**: `false`
+**Description**: Sets the timeout value (in seconds) for the inactivity tracker.
+**Options**: `60-1209600`
+**Default**: `1800`
+
+**Example**:
+```yaml
+sysdig:
+ inactivitySettings:
+ trackerTimeout: 900
+```
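+
+Enabled together, a 15-minute inactivity logout would look like this sketch:
+
+```yaml
+sysdig:
+  inactivitySettings:
+    trackerEnabled: true
+    trackerTimeout: 900
+```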
+
+
+## **sysdig.secure.anchore.customCerts**
+**Required**: `false`
+**Description**:
+To allow Anchore to trust these certificates, use this configuration to upload one or more PEM-format CA certificates. You must ensure you've uploaded all certificates in the CA approval chain to the root CA.
+
+When set, this configuration expects certificates with a .crt or .pem extension under certs/anchore-custom-certs/ at the same level as `values.yaml`.
+**Options**: `true|false`
+**Default**: false
+**Example**:
+
+```bash
+#In the example directory structure below, certificate1.crt and certificate2.crt will be added to the trusted list.
+bash-5.0$ find certs values.yaml
+certs
+certs/anchore-custom-certs
+certs/anchore-custom-certs/certificate1.crt
+certs/anchore-custom-certs/certificate2.crt
+values.yaml
+```
+
+```yaml
+sysdig:
+ secure:
+ anchore:
+ customCerts: true
+```
+
+## **sysdig.secure.anchore.enableMetrics**
+**Required**: `false`
+**Description**:
+Allow Anchore to export prometheus metrics.
+
+**Options**: `true|false`
+**Default**: false
+**Example**:
+```yaml
+sysdig:
+ secure:
+ anchore:
+ enableMetrics: true
+```
+
+## **sysdig.redisVersion**
+**Required**: `false`
+**Description**: Docker image tag of Redis.
+**Options**:
+**Default**: 4.0.12.7
+**Example**:
+
+```yaml
+sysdig:
+ redisVersion: 4.0.12.7
+```
+
+## **sysdig.redisHaVersion**
+**Required**: `false`
+**Description**: Docker image tag of HA Redis, relevant when configured
+`sysdig.redisHa` is `true`.
+**Options**:
+**Default**: 4.0.12-1.0.1
+**Example**:
+
+```yaml
+sysdig:
+ redisHaVersion: 4.0.12-1.0.1
+```
+
+## **sysdig.redisHa**
+**Required**: `false`
+**Description**: Determines if redis should run in HA mode
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ redisHa: false
+```
+
+## **sysdig.useRedis6**
+**Required**: `false`
+**Description**: Determines if redis should be installed with version 6.x
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ useRedis6: false
+```
+
+## **sysdig.redis6Version**
+**Required**: `false`
+**Description**: Docker image tag of Redis 6, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 6.0.10.1
+**Example**:
+
+```yaml
+sysdig:
+ redis6Version: 6.0.10.1
+```
+
+## **sysdig.redis6SentinelVersion**
+**Required**: `false`
+**Description**: Docker image tag of Redis Sentinel, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 6.0.10.1
+**Example**:
+
+```yaml
+sysdig:
+ redis6SentinelVersion: 6.0.10.1
+```
+
+## **sysdig.redis6ExporterVersion**
+**Required**: `false`
+**Description**: Docker image tag of Redis Metrics Exporter, relevant when configured
+`sysdig.useRedis6` is `true`.
+**Options**:
+**Default**: 1.15.1.1
+**Example**:
+
+```yaml
+sysdig:
+ redis6ExporterVersion: 1.15.1.1
+```
+
+
+## **sysdig.resources.cassandra.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.cassandra.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.cassandra.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.cassandra.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule cassandra pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ cassandra:
+ requests:
+ memory: 8Gi
+```
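+
+Requests and limits for a component can be set in a single block. For example,
+overriding all four cassandra values at once (a sketch; the numbers are
+illustrative, and the same pattern applies to the other components below):
+
+```yaml
+sysdig:
+  resources:
+    cassandra:
+      requests:
+        cpu: 2
+        memory: 8Gi
+      limits:
+        cpu: 4
+        memory: 8Gi
+```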
+
+## **sysdig.resources.elasticsearch.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.elasticsearch.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.elasticsearch.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.elasticsearch.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule elasticsearch pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ elasticsearch:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.mysql-router.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to mysql-router pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql-router:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.mysql-router.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to mysql-router pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql-router:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.mysql-router.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule mysql-router pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql-router:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.mysql-router.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule mysql-router pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql-router:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.mysql.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to mysql pods
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.mysql.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to mysql pods
+**Options**:
+**Default**:
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.mysql.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule mysql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.mysql.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule mysql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ mysql:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.postgresql.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.postgresql.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 8Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.postgresql.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.postgresql.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule postgresql pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500Mi |
+| medium | 1Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ postgresql:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.redis.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to redis pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.redis.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to redis pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.redis.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule redis pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.redis.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule redis pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.redis-sentinel.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 300m |
+| medium | 300m |
+| large | 300m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.redis-sentinel.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 20Mi |
+| medium | 20Mi |
+| large | 20Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.redis-sentinel.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 50m |
+| large | 50m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.redis-sentinel.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule redis-sentinel pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 5Mi |
+| medium | 5Mi |
+| large | 5Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ redis-sentinel:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.timescale-adapter.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.timescale-adapter.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.timescale-adapter.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.timescale-adapter.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule timescale-adapter containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ timescale-adapter:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.ingressControllerHaProxy.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerHaProxy.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ limits:
+ memory: 2Gi
+```
+
+## **sysdig.resources.ingressControllerHaProxy.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerHaProxy.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule haproxy-ingress containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerHaProxy:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.resources.ingressControllerRsyslog.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 125m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.ingressControllerRsyslog.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 50Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.ingressControllerRsyslog.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50m |
+| medium | 50m |
+| large | 50m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.ingressControllerRsyslog.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule rsyslog-server containers in haproxy-ingress daemon set
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 20Mi |
+| medium | 20Mi |
+| large | 20Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ ingressControllerRsyslog:
+ requests:
+ memory: 500Mi
+```
+
+## **sysdig.resources.api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule api containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ api:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.apiNginx.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.apiNginx.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.apiNginx.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.apiNginx.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule nginx containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiNginx:
+ requests:
+ memory: 100Mi
+```
+
+## **sysdig.resources.apiEmailRenderer.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to email-renderer containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiEmailRenderer:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.apiEmailRenderer.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to email-renderer containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiEmailRenderer:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.apiEmailRenderer.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule email-renderer containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiEmailRenderer:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.apiEmailRenderer.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule email-renderer containers in api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100Mi |
+| medium | 100Mi |
+| large | 100Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ apiEmailRenderer:
+ requests:
+ memory: 100Mi
+```
+
+## **sysdig.resources.worker.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.worker.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.worker.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.worker.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ worker:
+ requests:
+ memory: 200Mi
+```
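+
+As with the api containers, the worker requests and limits can be set together. An illustrative sketch that mirrors the documented `large` cluster defaults:
+
+```yaml
+sysdig:
+  resources:
+    worker:
+      limits:
+        cpu: 16       # large-cluster default limit
+        memory: 16Gi
+      requests:
+        cpu: 4        # large-cluster default request
+        memory: 4Gi
+```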
+
+## **sysdig.resources.alerter.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to alerter pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ alerter:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.alerter.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to alerter pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ alerter:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.alerter.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule alerter pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ alerter:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.alerter.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule alerter pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ alerter:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.collector.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.collector.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to collector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.collector.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.collector.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule collector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ collector:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-core.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-catalog.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-policy-engine.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.anchore-core.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ limits:
+ memory: 10Mi
+```
+
+
+## **sysdig.resources.anchore-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ limits:
+ memory: 10Mi
+```
+
+
+## **sysdig.resources.anchore-catalog.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ limits:
+ memory: 10Mi
+```
+
+
+## **sysdig.resources.anchore-policy-engine.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 3Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-core.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-catalog.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-policy-engine.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-core.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-core pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-core:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-api:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-catalog.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-catalog pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-catalog:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.anchore-policy-engine.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-policy-engine pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-policy-engine:
+ requests:
+ memory: 200Mi
+```
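+
+Several anchore components can be overridden in the same `resources` block. An illustrative sketch combining the anchore-core and anchore-policy-engine parameters above (the values restate the documented defaults):
+
+```yaml
+sysdig:
+  resources:
+    anchore-core:
+      limits:
+        cpu: 1
+        memory: 1Gi
+      requests:
+        cpu: 500m
+        memory: 256Mi
+    anchore-policy-engine:
+      limits:
+        cpu: 1
+        memory: 2Gi      # small/medium default; large is 3Gi
+      requests:
+        cpu: 500m
+        memory: 1Gi
+```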
+
+## **sysdig.resources.anchore-worker.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-worker.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.anchore-worker.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.anchore-worker.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule anchore-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ anchore-worker:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanning-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-api:
+ requests:
+ memory: 200Mi
+```
+
+
+## **sysdig.resources.scanningalertmgr.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanningalertmgr.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanningalertmgr.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanningalertmgr.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanningalertmgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningalertmgr:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanning-retention-mgr.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-retention-mgr.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-retention-mgr.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-retention-mgr.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning retention-mgr pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-retention-mgr:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.secure.scanning.retentionMgr.cronjob**
+**Required**: `false`
+**Description**: The cron schedule for the retention manager job
+**Options**:
+**Default**: `0 3 * * *`
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ cronjob: 0 3 * * *
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyMaxExecutionDuration**
+**Required**: `false`
+**Description**: Max execution duration for the retention policy
+**Options**:
+**Default**: 23h
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyMaxExecutionDuration: 23h
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyGracePeriodDuration**
+**Required**: `false`
+**Description**: Grace period for the retention policy
+**Options**:
+**Default**: 168h
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyGracePeriodDuration: 168h
+```
+
+## **sysdig.secure.scanning.retentionMgr.retentionPolicyArtificialDelayAfterDelete**
+**Required**: `false`
+**Description**: Artificial delay after each image deletion
+**Options**:
+**Default**: 1s
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ retentionPolicyArtificialDelayAfterDelete: 1s
+```
+
+## **sysdig.secure.scanning.retentionMgr.scanningGRPCEndpoint**
+**Required**: `false`
+**Description**: Scanning GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-api:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ scanningGRPCEndpoint: sysdigcloud-scanning-api:6000
+```
+
+## **sysdig.secure.scanning.retentionMgr.scanningDBEngine**
+**Required**: `false`
+**Description**: Scanning DB engine
+**Options**:
+**Default**: mysql
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ scanningDBEngine: mysql
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.datePolicy**
+**Required**: `false`
+**Description**: Default value for the date policy
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ datePolicy: 90
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.tagsPolicy**
+**Required**: `false`
+**Description**: Default value for the tags policy
+**Options**:
+**Default**: 5
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ tagsPolicy: 5
+```
+
+## **sysdig.secure.scanning.retentionMgr.defaultValues.digestsPolicy**
+**Required**: `false`
+**Description**: Default value for the digests policy
+**Options**:
+**Default**: 5
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ retentionMgr:
+ defaultValues:
+ digestsPolicy: 5
+```
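+
+The retention manager settings above are typically tuned together. An illustrative sketch that restates the documented defaults in a single block:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      retentionMgr:
+        cronjob: "0 3 * * *"                          # run daily at 03:00
+        retentionPolicyMaxExecutionDuration: 23h
+        retentionPolicyGracePeriodDuration: 168h
+        retentionPolicyArtificialDelayAfterDelete: 1s
+        defaultValues:
+          datePolicy: 90
+          tagsPolicy: 5
+          digestsPolicy: 5
+```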
+
+## **sysdig.resources.scanning-ve-janitor.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-ve-janitor.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 256Mi |
+| medium | 2Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.scanning-ve-janitor.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.scanning-ve-janitor.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule scanning-ve-janitor cronjob
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanning-ve-janitor:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to admission-controller-api containers
+**Options**:
+**Default**:
+
+|cluster-size|limits |
+|------------|--------|
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to admission-controller-api containers
+**Options**:
+**Default**:
+
+|cluster-size|limits |
+|------------|--------|
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule admission-controller-api containers
+**Options**:
+**Default**:
+
+|cluster-size|requests|
+|------------|--------|
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApi:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.scanningAdmissionControllerApi.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule admission-controller-api containers
+**Options**:
+**Default**:
+
+|cluster-size|requests|
+|------------|--------|
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+    scanningAdmissionControllerApi:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApiPgMigrate.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to admission-controller-api PG
+migrate containers
+**Options**:
+**Default**:
+
+|cluster-size|limits |
+|------------|--------|
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApiPgMigrate:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.scanningAdmissionControllerApiPgMigrate.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to admission-controller-api PG
+migrate containers
+**Options**:
+**Default**:
+
+|cluster-size|limits |
+|------------|--------|
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApiPgMigrate:
+ limits:
+ memory: 256Mi
+```
+
+## **sysdig.resources.scanningAdmissionControllerApiPgMigrate.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule admission-controller-api
+PG migrate containers
+**Options**:
+**Default**:
+
+|cluster-size|requests|
+|------------|--------|
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ scanningAdmissionControllerApiPgMigrate:
+ requests:
+ cpu: 100m
+```
+
+## **sysdig.resources.scanningAdmissionControllerApiPgMigrate.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule admission-controller-api
+PG migrate containers
+**Options**:
+**Default**:
+
+|cluster-size|requests|
+|------------|--------|
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+    scanningAdmissionControllerApiPgMigrate:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.reporting-init.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.reporting-init.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ limits:
+ memory: 256Mi
+```
+
+## **sysdig.resources.reporting-init.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 100m |
+| medium | 100m |
+| large | 100m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ requests:
+ cpu: 100m
+```
+
+## **sysdig.resources.reporting-init.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-init pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-init:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.reporting-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1500m |
+| medium | 1500m |
+| large | 1500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ limits:
+ cpu: 1500m
+```
+
+## **sysdig.resources.reporting-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1536Mi |
+| medium | 1536Mi |
+| large | 1536Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ limits:
+ memory: 1536Mi
+```
+
+## **sysdig.resources.reporting-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 200m |
+| medium | 200m |
+| large | 200m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ requests:
+ cpu: 200m
+```
+
+## **sysdig.resources.reporting-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 256Mi |
+| medium | 256Mi |
+| large | 256Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-api:
+ requests:
+ memory: 256Mi
+```
+
+## **sysdig.resources.reporting-worker.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.reporting-worker.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 16Gi |
+| medium | 16Gi |
+| large | 16Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ limits:
+ memory: 16Gi
+```
+
+## **sysdig.resources.reporting-worker.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 200m |
+| medium | 200m |
+| large | 200m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ requests:
+ cpu: 200m
+```
+
+## **sysdig.resources.reporting-worker.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule reporting-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 10Gi |
+| medium | 10Gi |
+| large | 10Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ reporting-worker:
+ requests:
+ memory: 10Gi
+```
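+
+The reporting-worker requests and limits do not vary by cluster size and can be set together. An illustrative sketch using the documented defaults:
+
+```yaml
+sysdig:
+  resources:
+    reporting-worker:
+      limits:
+        cpu: 2
+        memory: 16Gi
+      requests:
+        cpu: 200m
+        memory: 10Gi
+```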
+
+## **sysdig.secure.scanning.reporting.debug**
+**Required**: `false`
+**Description**: Enable logging at debug level
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ debug: false
+```
+
+## **sysdig.secure.scanning.reporting.apiGRPCEndpoint**
+**Required**: `false`
+**Description**: Reporting GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-reporting-api-grpc:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ apiGRPCEndpoint: sysdigcloud-scanning-reporting-api-grpc:6000
+```
+
+## **sysdig.secure.scanning.reporting.scanningGRPCEndpoint**
+**Required**: `false`
+**Description**: Scanning GRPC endpoint
+**Options**:
+**Default**: sysdigcloud-scanning-api:6000
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ scanningGRPCEndpoint: sysdigcloud-scanning-api:6000
+```
+
+## **sysdig.secure.scanning.reporting.storageDriver**
+**Required**: `false`
+**Description**: Storage kind for generated reports
+**Options**: postgres, fs, s3
+**Default**: postgres
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageDriver: postgres
+```
+
+## **sysdig.secure.scanning.reporting.storageCompression**
+**Required**: `false`
+**Description**: Compression format for generated reports
+**Options**: zip, gzip, none
+**Default**: zip
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageCompression: zip
+```
+
+## **sysdig.secure.scanning.reporting.storageFsDir**
+**Required**: `false`
+**Description**: The directory where reports will be saved (required when using the `fs` driver)
+**Options**:
+**Default**: .
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageFsDir: /reports
+```
+
+## **sysdig.secure.scanning.reporting.storagePostgresRetentionDays**
+**Required**: `false`
+**Description**: The number of days generated reports are kept available for download (applies when using the `postgres` driver)
+**Options**:
+**Default**: 1
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storagePostgresRetentionDays: 1
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Bucket**
+**Required**: `false`
+**Description**: The bucket name where reports will be saved (required when using the `s3` driver)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Bucket: secure-scanning-reporting
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Prefix**
+**Required**: `false`
+**Description**: The object name prefix (directory) used when saving reports in an S3 bucket
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Prefix: reports
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Endpoint**
+**Required**: `false`
+**Description**: The service endpoint of an S3-compatible storage service (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Endpoint: s3.example.com
+```
+
+## **sysdig.secure.scanning.reporting.storageS3Region**
+**Required**: `false`
+**Description**: The AWS region where the S3 bucket is created (required when using the `s3` driver in an AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3Region: us-east-1
+```
+
+## **sysdig.secure.scanning.reporting.storageS3AccessKeyID**
+**Required**: `false`
+**Description**: The Access Key ID used to authenticate with an S3-compatible storage service (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3AccessKeyID: AKIAIOSFODNN7EXAMPLE
+```
+
+## **sysdig.secure.scanning.reporting.storageS3SecretAccessKey**
+**Required**: `false`
+**Description**: The Secret Access Key used to authenticate with an S3-compatible storage service (required when using the `s3` driver in a non-AWS deployment)
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ storageS3SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```
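+
+When the `s3` storage driver is selected, the related parameters are normally configured together. An illustrative sketch for an AWS deployment; the bucket name, prefix, and region below are placeholders, not defaults:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      reporting:
+        storageDriver: s3
+        storageS3Bucket: secure-scanning-reporting   # placeholder bucket name
+        storageS3Prefix: reports                     # placeholder prefix
+        storageS3Region: us-east-1                   # placeholder region
+```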
+
+## **sysdig.secure.scanning.reporting.onDemandGenerationEnabled**
+**Required**: `true`
+**Description**: The flag to enable on-demand generation of reports globally
+**Options**: false, true
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ onDemandGenerationEnabled: true
+```
+
+## **sysdig.secure.scanning.reporting.onDemandGenerationCustomers**
+**Required**: `false`
+**Description**: The list of customer IDs for which on-demand report generation is enabled when it is not enabled globally
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ onDemandGenerationCustomers: "1,12,123"
+```
+
+## **sysdig.secure.scanning.reporting.workerSleepTime**
+**Required**: `false`
+**Description**: The sleep interval between two runs of the reporting worker
+**Options**:
+**Default**: 120s
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ workerSleepTime: 120s
+```
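+
+The reporting behavior settings can likewise be grouped in one block. An illustrative sketch that keeps the documented defaults but enables on-demand generation globally:
+
+```yaml
+sysdig:
+  secure:
+    scanning:
+      reporting:
+        debug: false
+        storageDriver: postgres
+        storageCompression: zip
+        onDemandGenerationEnabled: true
+        workerSleepTime: 120s
+```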
+
+## **sysdig.resources.policy-advisor.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 4 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.policy-advisor.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 4Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.policy-advisor.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.policy-advisor.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule policy-advisor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ policy-advisor:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.resources.netsec-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ requests:
+ cpu: 300m
+```
+
+## **sysdig.resources.netsec-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-api:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-ingest.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-ingest.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 6Gi |
+| large | 8Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ limits:
+ memory: 4Gi
+```
+
+## **sysdig.resources.netsec-ingest.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.netsec-ingest.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-ingest pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-ingest:
+      requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.netsec-janitor.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-janitor.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.netsec-janitor.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 300m |
+| medium | 500m |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ requests:
+ cpu: 1
+```
+
+## **sysdig.resources.netsec-janitor.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule netsec-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ netsec-janitor:
+ requests:
+ memory: 1Gi
+```
+
+## **sysdig.resources.nats-streaming.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to nats-streaming pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ nats-streaming:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.nats-streaming.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to nats-streaming pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ nats-streaming:
+ limits:
+ memory: 2Gi
+```
+
+## **sysdig.resources.nats-streaming.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule nats-streaming pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ nats-streaming:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.nats-streaming.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule nats-streaming pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ nats-streaming:
+ requests:
+ memory: 1Gi
+```
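+
+The nats-streaming requests and limits are the same for all cluster sizes and can be set in one block. An illustrative sketch restating the documented defaults:
+
+```yaml
+sysdig:
+  resources:
+    nats-streaming:
+      limits:
+        cpu: 2
+        memory: 2Gi
+      requests:
+        cpu: 250m
+        memory: 1Gi
+```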
+
+## **sysdig.resources.activity-audit-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.activity-audit-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.activity-audit-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.activity-audit-worker.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.activity-audit-worker.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.activity-audit-worker.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-worker.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-worker:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.activity-audit-janitor.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ limits:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-janitor.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 200Mi |
+| medium | 200Mi |
+| large | 200Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ limits:
+ memory: 200Mi
+```
+
+## **sysdig.resources.activity-audit-janitor.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.activity-audit-janitor.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule activity-audit-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ activity-audit-janitor:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.profiling-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.profiling-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.profiling-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.profiling-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule profiling-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.profiling-worker.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.profiling-worker.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.profiling-worker.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.profiling-worker.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule profiling-worker pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ profiling-worker:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.secure-overview-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to secure-overview-api containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-overview-api:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.secure-overview-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to secure-overview-api containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-overview-api:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.secure-overview-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule secure-overview-api containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-overview-api:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.secure-overview-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule secure-overview-api containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512Mi |
+| medium | 512Mi |
+| large | 512Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-overview-api:
+ requests:
+ memory: 512Mi
+```
+
+## **sysdig.resources.secure-prometheus.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.secure-prometheus.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 8Gi |
+| medium | 8Gi |
+| large | 8Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ limits:
+ memory: 8Gi
+```
+
+## **sysdig.resources.secure-prometheus.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 500m |
+| medium | 500m |
+| large | 500m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ requests:
+ cpu: 500m
+```
+
+## **sysdig.resources.secure-prometheus.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule secure-prometheus containers
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 2Gi |
+| medium | 2Gi |
+| large | 2Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ secure-prometheus:
+ requests:
+ memory: 2Gi
+```
+
+## **sysdig.resources.events-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-gatherer.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 2 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.events-gatherer.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1Gi |
+| medium | 1Gi |
+| large | 1Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ limits:
+ memory: 1Gi
+```
+
+## **sysdig.resources.events-gatherer.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-gatherer.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-gatherer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250Mi |
+| medium | 250Mi |
+| large | 250Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-gatherer:
+ requests:
+ memory: 250Mi
+```
+
+## **sysdig.resources.events-dispatcher.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-dispatcher.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 250Mi |
+| medium | 250Mi |
+| large | 250Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ limits:
+ memory: 250Mi
+```
+
+## **sysdig.resources.events-dispatcher.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-dispatcher.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-dispatcher pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-dispatcher:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-forwarder-api.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-forwarder-api.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-forwarder-api.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-forwarder-api.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-forwarder-api pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder-api:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-forwarder.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-forwarder.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.events-forwarder.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-forwarder.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-forwarder pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-forwarder:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.resources.events-janitor.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.events-janitor.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 200Mi |
+| medium | 200Mi |
+| large | 200Mi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ limits:
+ memory: 200Mi
+```
+
+## **sysdig.resources.events-janitor.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.events-janitor.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule events-janitor pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ events-janitor:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.restrictPasswordLogin**
+**Required**: `false`
+**Description**: Restricts password login to only the super admin user, forcing all
+non-default users to log in using the configured
+[IdP](https://en.wikipedia.org/wiki/Identity_provider).
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ restrictPasswordLogin: true
+```
+
+## **sysdig.rsyslogVersion**
+**Required**: `false`
+**Description**: Docker image tag of rsyslog, relevant only when the configured
+`deployment` is `kubernetes`.
+**Options**:
+**Default**: 8.34.0.7
+**Example**:
+
+```yaml
+sysdig:
+ rsyslogVersion: 8.34.0.7
+```
+
+## **sysdig.smtpFromAddress**
+**Required**: `false`
+**Description**: Email address to use for the FROM field of sent emails.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpFromAddress: from-address@my-company.com
+```
+
+## **sysdig.smtpPassword**
+**Required**: `false`
+**Description**: Password for the configured `sysdig.smtpUser`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpPassword: my-@w350m3-p@55w0rd
+```
+
+## **sysdig.smtpProtocolSSL**
+**Required**: `false`
+**Description**: Specifies if SSL should be used when sending emails via SMTP.
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpProtocolSSL: true
+```
+
+## **sysdig.smtpProtocolTLS**
+**Required**: `false`
+**Description**: Specifies if TLS should be used when sending emails via SMTP
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpProtocolTLS: true
+```
+
+## **sysdig.smtpServer**
+**Required**: `false`
+**Description**: SMTP server to use to send emails
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpServer: smtp.gmail.com
+```
+
+## **sysdig.smtpServerPort**
+**Required**: `false`
+**Description**: Port of the configured `sysdig.smtpServer`
+**Options**: `1-65535`
+**Default**: `25`
+**Example**:
+
+```yaml
+sysdig:
+ smtpServerPort: 587
+```
+
+## **sysdig.smtpUser**
+**Required**: `false`
+**Description**: User for the configured `sysdig.smtpServer`
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ smtpUser: bob+alice@gmail.com
+```
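+
+The SMTP parameters above are typically configured as a group. A minimal sketch, with illustrative server, port, and credential values (replace them with your own):
+
+```yaml
+sysdig:
+ smtpServer: smtp.my-company.com
+ smtpServerPort: 587
+ smtpProtocolTLS: true
+ smtpUser: notifications@my-company.com
+ smtpPassword: my-@w350m3-p@55w0rd
+ smtpFromAddress: from-address@my-company.com
+```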
+
+## **sysdig.tolerations**
+**Required**: `false`
+**Description**:
+[Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
+that will be created on Sysdig Platform pods. This can be combined with
+[nodeaffinityLabel.key](#nodeaffinityLabelkey) and
+[nodeaffinityLabel.value](#nodeaffinityLabelvalue) to ensure only Sysdig
+Platform pods run on particular nodes.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ tolerations:
+ - key: "dedicated"
+ operator: "Equal"
+ value: sysdig
+ effect: "NoSchedule"
+```
+
+## **sysdig.anchoreCoreReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Core replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreCoreReplicaCount: 5
+```
+
+## **sysdig.anchoreAPIReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig Anchore API replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreAPIReplicaCount: 4
+```
+
+## **sysdig.anchoreCatalogReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Catalog replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreCatalogReplicaCount: 4
+```
+
+## **sysdig.anchorePolicyEngineReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Policy Engine replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchorePolicyEngineReplicaCount: 4
+```
+
+## **sysdig.anchoreWorkerReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig Anchore Worker replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ anchoreWorkerReplicaCount: 5
+```
+
+## **sysdig.apiReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig API replicas. This is a no-op for clusters of
+`size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ apiReplicaCount: 5
+```
+
+## **sysdig.cassandraReplicaCount**
+**Required**: `false`
+**Description**: Number of Cassandra replicas. This is a no-op for clusters of
+`size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 6 |
+
+**Example**:
+
+```yaml
+sysdig:
+ cassandraReplicaCount: 20
+```
+
+## **sysdig.collectorReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig collector replicas. This is a no-op for
+clusters of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ collectorReplicaCount: 7
+```
+
+## **sysdig.activityAuditWorkerReplicaCount**
+**Required**: `false`
+**Description**: Number of Activity Audit Worker replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ activityAuditWorkerReplicaCount: 20
+```
+
+## **sysdig.activityAuditApiReplicaCount**
+**Required**: `false`
+**Description**: Number of Activity Audit API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ activityAuditApiReplicaCount: 20
+```
+
+## **sysdig.policyAdvisorReplicaCount**
+**Required**: `false`
+**Description**: Number of Policy Advisor replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ policyAdvisorReplicaCount: 20
+```
+
+## **sysdig.scanningAdmissionControllerAPIReplicaCount**
+**Required**: `false`
+**Description**: Number of scanning Admission Controller API replicas. This is
+a no-op for clusters of `size` `small`.
+**Options**:
+**Default**:
+
+|cluster-size|count|
+|------------|-----|
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ scanningAdmissionControllerAPIReplicaCount: 1
+```
+
+## **sysdig.netsecApiReplicaCount**
+**Required**: `false`
+**Description**: Number of Netsec API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecApiReplicaCount: 1
+```
+
+## **sysdig.netsecIngestReplicaCount**
+**Required**: `false`
+**Description**: Number of Netsec Ingest replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecIngestReplicaCount: 1
+```
+## **sysdig.netsecCommunicationShards**
+**Required**: `false`
+**Description**: Number of Netsec communications index shards.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 3 |
+| medium | 9 |
+| large | 15 |
+
+**Example**:
+
+```yaml
+sysdig:
+ netsecCommunicationShards: 5
+```
+
+## **sysdig.scanningApiReplicaCount**
+**Required**: `false`
+**Description**: Number of Scanning API replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ scanningApiReplicaCount: 3
+```
+
+## **sysdig.elasticsearchReplicaCount**
+**Required**: `false`
+**Description**: Number of ElasticSearch replicas. This is a no-op for clusters of
+`size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 6 |
+
+**Example**:
+
+```yaml
+sysdig:
+ elasticsearchReplicaCount: 20
+```
+
+## **sysdig.workerReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig worker replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ workerReplicaCount: 7
+```
+
+## **sysdig.alerterReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig alerter replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+sysdig:
+ alerterReplicaCount: 7
+```
+
+## **sysdig.eventsGathererReplicaCount**
+**Required**: `false`
+**Description**: Number of events gatherer replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsGathererReplicaCount: 2
+```
+
+## **sysdig.eventsAPIReplicaCount**
+**Required**: `false`
+**Description**: Number of events API replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsAPIReplicaCount: 1
+```
+
+## **sysdig.eventsDispatcherReplicaCount**
+**Required**: `false`
+**Description**: Number of events dispatcher replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsDispatcherReplicaCount: 1
+```
+
+## **sysdig.eventsForwarderReplicaCount**
+**Required**: `false`
+**Description**: Number of events forwarder replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 2 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsForwarderReplicaCount: 2
+```
+
+## **sysdig.eventsForwarderAPIReplicaCount**
+**Required**: `false`
+**Description**: Number of events forwarder API replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ eventsForwarderAPIReplicaCount: 1
+```
+
+## **sysdig.admin.username**
+**Required**: `true`
+**Description**: Sysdig Platform super admin user. This will be used for
+initial login to the web interface. Make sure this is a valid email address
+that you can receive emails at.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ admin:
+ username: my-awesome-email@my-awesome-domain-name.com
+```
+
+## **sysdig.admin.password**
+**Required**: `false`
+**Description**: Sysdig Platform super admin password. This along with
+`sysdig.admin.username` will be used for initial login to the web interface.
+It is auto-generated when not explicitly configured.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ admin:
+ password: my-@w350m3-p@55w0rd
+```
+
+## **sysdig.api.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig API jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ api:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+ -XX:-UseContainerSupport -Ddraios.metrics-push.query.enabled=true
+```
+
+## **sysdig.certificate.generate**
+**Required**: `false`
+**Description**: Determines if Installer should generate self-signed
+certificates for the domain configured in `sysdig.dnsName`.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ generate: true
+```
+
+## **sysdig.certificate.crt**
+**Required**: `false`
+**Description**: Path (the path must be in the same directory as the `values.yaml` file
+and must be relative to `values.yaml`) to a user-provided certificate that will
+be used to serve the Sysdig API. If `sysdig.certificate.generate` is set to
+`false`, this has to be configured. The certificate common name or subject
+alternative name must match the configured `sysdig.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ crt: certs/server.crt
+```
+
+## **sysdig.certificate.key**
+**Required**: `false`
+**Description**: Path (the path must be in the same directory as the `values.yaml` file
+and must be relative to `values.yaml`) to a user-provided key that will be used
+to serve the Sysdig API. If `sysdig.certificate.generate` is set to `false`,
+this has to be configured. The key must match the certificate in
+`sysdig.certificate.crt`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ certificate:
+ key: certs/server.key
+```
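+
+When bringing your own certificate, the three parameters above work together: disable generation and point `crt` and `key` at files that sit next to `values.yaml`. A minimal sketch (file names are placeholders taken from the examples above):
+
+```yaml
+sysdig:
+ certificate:
+  generate: false
+  crt: certs/server.crt
+  key: certs/server.key
+```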
+
+## **sysdig.collector.dnsName**
+**Required**: `false`
+**Description**: Domain name the Sysdig collector will be served on. When not
+configured, it defaults to whatever is configured for `sysdig.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ dnsName: collector.my-awesome-domain-name.com
+```
+
+## **sysdig.collector.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig collector jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+ -XX:-UseContainerSupport
+```
+
+## **sysdig.collector.certificate.generate**
+**Required**: `false`
+**Description**: This determines if Installer should generate self-signed
+certificates for the domain configured in `sysdig.collector.dnsName`.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ generate: true
+```
+
+## **sysdig.collector.certificate.crt**
+**Required**: `false`
+**Description**: Path (the path must be in the same directory as the `values.yaml` file
+and must be relative to `values.yaml`) to a user-provided certificate that will
+be used to serve the Sysdig collector. If
+`sysdig.collector.certificate.generate` is set to `false`, this has to be
+configured. The certificate common name or subject alternative name must match
+the configured `sysdig.collector.dnsName`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ crt: certs/collector.crt
+```
+
+## **sysdig.collector.certificate.key**
+**Required**: `false`
+**Description**: Path (the path must be in the same directory as the `values.yaml` file
+and must be relative to `values.yaml`) to a user-provided key that will be used
+to serve the Sysdig collector. If `sysdig.collector.certificate.generate` is
+set to `false`, this has to be configured. The key must match the certificate
+in `sysdig.collector.certificate.crt`.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ collector:
+ certificate:
+ key: certs/collector.key
+```
+## **sysdig.worker.enabled**
+**Required**: `false`
+**Description**: Enables the Sysdig Worker component.
+**Options**:`true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+sysdig:
+ worker:
+ enabled: true
+```
+
+## **sysdig.worker.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig worker jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ worker:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+ -XX:-UseContainerSupport
+```
+
+## **sysdig.alerter.jvmOptions**
+**Required**: `false`
+**Description**: Custom configuration for Sysdig Alerter jvm.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+sysdig:
+ alerter:
+ jvmOptions: -Xms4G -Xmx4G -Ddraios.jvm-monitoring.ticker.enabled=true
+ -XX:-UseContainerSupport
+```
+
+## **agent.apiKey**
+**Required**: `false`
+**Description**: Sysdig Agent api key for running agents. Instructions for retrieving the api key can be found [here](https://docs.sysdig.com/en/agent-installation--overview-and-key.html).
+_**Note**: Required for agent setup. If setting up Monitor and Agent at the same time, you can leave this blank._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ apiKey: replace_with_your_monitor_access_key
+```
+
+## **agent.appChecks.settings.limit**
+**Required**: `false`
+**Description**: The maximum number of app checks metrics that will be reported to Sysdig Monitor.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ settings:
+ limit: 1500
+```
+
+## **agent.collectorEndpoint**
+**Required**: `false`
+**Description**: Sysdig Collector Address. Defaults to [`sysdig.collector.dnsName`](#sysdig.collector.dnsName) if monitor is included in apps.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ collectorEndpoint: my-awesome-collector-domain-name.com
+```
+
+## **agent.collectorPort**
+**Required**: `false`
+**Description**: Sysdig Collector TCP Port.
+**Options**: `1024-65535`
+**Default**: `6443`
+**Example**:
+
+```yaml
+agent:
+ collectorPort: 6443
+```
+
+## **agent.namespace**
+**Required**: `false`
+**Description**: A kubernetes namespace for setting up the agent in.
+**Options**:
+**Default**: `agent`
+**Example**:
+
+```yaml
+agent:
+ namespace: sysdig-agent
+```
+
+## **agent.useSlim**
+**Required**: `false`
+**Description**: Whether to use the slim version of agent or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ useSlim: true
+```
+
+## **agent.version**
+**Required**: `false`
+**Description**: Version of agent to install.
+_**Note**: You can lookup all the available versions of agent [here](https://hub.docker.com/r/sysdig/agent/tags)_
+**Options**:
+**Default**: `latest`
+**Example**:
+
+```yaml
+agent:
+ version: 1.10.1
+```
+
+## **agent.useSSL**
+**Required**: `false`
+**Description**: Whether Sysdig Collector accepts SSL connections or not.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ useSSL: false
+```
+
+## **agent.verifySSL**
+**Required**: `false`
+**Description**: Whether to validate Sysdig Collector SSL certificate or not.
+_**Note**: This should be set to `false` if a self-signed certificate or a private CA-signed cert is used._
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ verifySSL: false
+```
+
+## **agent.clusterName**
+**Required**: `false`
+**Description**: Setting a cluster name here allows you to view, scope, and segment metrics in the Sysdig Monitor UI by Kubernetes cluster.
+**Options**:
+**Default**: `production`
+**Example**:
+
+```yaml
+agent:
+ clusterName: my-cluster
+```
+
+## **agent.tags**
+**Required**: `false`
+**Description**: List of user-provided metadata at agent level.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ tags: environment:production linux:ubuntu
+```
+
+## **agent.capturesEnabled**
+**Required**: `false`
+**Description**: Whether to enable Sysdig captures or not.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ capturesEnabled: false
+```
+
+## **agent.feature_mode**
+**Required**: `false`
+**Description**: The feature mode the agent runs in.
+**Options**: `monitor|monitor_light|essentials|troubleshooting|secure`
+**Default**: `monitor`
+**Example**:
+
+```yaml
+agent:
+ feature_mode: troubleshooting
+```
+
+## **agent.timezone**
+**Required**: `false`
+**Description**: Set daemonset timezone.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ timezone: America/New_York
+```
+
+## **agent.proxy.httpProxy**
+**Required**: `false`
+**Description**: The URL to use as a proxy for http requests. If the proxy requires authentication, you need to specify this information as part of the URL.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ proxy:
+ httpProxy: http://username:password@your-awesome-http-proxy.com
+```
+
+## **agent.proxy.httpsProxy**
+**Required**: `false`
+**Description**: The URL to use as a proxy for https requests. If the proxy requires authentication, you need to specify this information as part of the URL.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ proxy:
+ httpsProxy: https://username:password@your-awesome-https-proxy.com
+```
+
+## **agent.proxy.noProxy**
+**Required**: `false`
+**Description**: A space-separated list of URLs for which no proxy should be used.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ proxy:
+ noProxy: your-awesome-no-proxy.com
+```
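+
+The three proxy parameters are usually configured together. A sketch with placeholder URLs (include credentials only if your proxy requires them):
+
+```yaml
+agent:
+ proxy:
+  httpProxy: http://username:password@your-awesome-http-proxy.com
+  httpsProxy: https://username:password@your-awesome-https-proxy.com
+  noProxy: localhost 127.0.0.1 your-awesome-no-proxy.com
+```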
+
+## **agent.snaplenPortRange.start**
+**Required**: `false`
+**Description**: Starting port in the range of ports to enable a larger snaplen on.
+_**Note**: This should only be set if you push a lot of statsd metrics._
+**Options**:
+**Default**: `0`
+**Example**:
+
+```yaml
+agent:
+ snaplenPortRange:
+ start: "8125"
+```
+
+## **agent.snaplenPortRange.end**
+**Required**: `false`
+**Description**: Ending port in the range of ports to enable a larger snaplen on.
+_**Note**: This should only be set if you push a lot of statsd metrics._
+**Options**:
+**Default**: `0`
+**Example**:
+
+```yaml
+agent:
+ snaplenPortRange:
+ start: "8125"
+```
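+
+`start` and `end` define a single port range, so they are normally set together. A sketch covering just the default statsd port (the value is illustrative):
+
+```yaml
+agent:
+ snaplenPortRange:
+  start: "8125"
+  end: "8125"
+```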
+
+## **agent.customKernelModules.enabled**
+**Required**: `false`
+**Description**: Whether to pick up custom kernel modules from /root or not. This setting only applies to the non-slim agent.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ customKernelModules:
+ enabled: true
+```
+
+## **agent.secure.enabled**
+**Required**: `false`
+**Description**: Whether your Sysdig platform has Sysdig Secure enabled or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ secure:
+ enabled: true
+```
+
+## **agent.secure.commandLineCapturesEnabled**
+**Required**: `false`
+**Description**: Whether you want to enable Command Line Captures or not.
+_**Note**: This setting is dependent on `agent.secure.enabled` being set to `true`._
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ secure:
+ commandLineCapturesEnabled: true
+```
+
+## **agent.secure.memoryDumpEnabled**
+**Required**: `false`
+**Description**: Whether you want to enable Memory Dump or not.
+_**Note**: This setting is dependent on `agent.secure.enabled` being set to `true`._
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ secure:
+ memoryDumpEnabled: true
+```
+
+## **agent.secure.settings.k8sAuditServerURL**
+**Required**: `false`
+**Description**: Kubernetes Audit Server URL.
+_**Note**: This setting is dependent on `agent.secure.enabled` being set to `true`._
+**Options**:
+**Default**: `0.0.0.0`
+**Example**:
+
+```yaml
+agent:
+ secure:
+ settings:
+ k8sAuditServerURL: 127.0.0.1
+```
+
+## **agent.secure.settings.k8sAuditServerPort**
+**Required**: `false`
+**Description**: Kubernetes Audit Server Port.
+_**Note**: This setting is dependent on `agent.secure.enabled` being set to `true`._
+**Options**: `1024-65535`
+**Default**: `7765`
+**Example**:
+
+```yaml
+agent:
+ secure:
+ settings:
+ k8sAuditServerPort: 7765
+```
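+
+Because the Secure-related agent settings above all depend on `agent.secure.enabled`, they are typically configured as one block. A sketch using the defaults shown above (the audit server address is a placeholder):
+
+```yaml
+agent:
+ secure:
+  enabled: true
+  commandLineCapturesEnabled: true
+  memoryDumpEnabled: true
+  settings:
+   k8sAuditServerURL: 0.0.0.0
+   k8sAuditServerPort: 7765
+```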
+
+## **agent.prometheus.enabled**
+**Required**: `false`
+**Description**: Whether to enable ingestion of prometheus metrics or not.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ enabled: true
+```
+
+## **agent.prometheus.settings.interval**
+**Required**: `false`
+**Description**: How often (in seconds) the agent will scrape a port for prometheus metrics.
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**:
+**Default**: `10`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+ interval: 30
+```
+
+## **agent.prometheus.settings.logErrors**
+**Required**: `false`
+**Description**: Whether the Agent should log details on failed attempts to scrape eligible targets or not.
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+ logErrors: true
+```
+
+## **agent.prometheus.settings.maxMetrics**
+**Required**: `false`
+**Description**: The maximum number of total prometheus metrics that will be scraped across all targets. This value is the maximum per-Agent, and is a separate limit from other Custom Metrics (e.g. statsd, JMX, and other Application Checks).
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**:
+**Default**: `3000`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+ maxMetrics: 1000
+```
+
+## **agent.prometheus.settings.maxMetricsPerProcess**
+**Required**: `false`
+**Description**: The maximum number of prometheus metrics that the agent will save from a single scraped target.
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**:
+**Default**: `3000`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+ maxMetricsPerProcess: 1000
+```
+
+## **agent.prometheus.settings.maxTagsPerMetric**
+**Required**: `false`
+**Description**: The maximum number of tags per prometheus metric that the Agent will save from a scraped target.
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**:
+**Default**: `40`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+ maxTagsPerMetric: 20
+```
+
+## **agent.prometheus.settings.histograms**
+**Required**: `false`
+**Description**: Whether the Agent should scrape and report histogram metrics.
+_**Note**: This setting is dependent on `agent.prometheus.enabled` being set to true._
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ prometheus:
+ settings:
+   histograms: true
+```
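+
+The Prometheus scraping settings above only take effect when `agent.prometheus.enabled` is `true`. A combined sketch with illustrative limits:
+
+```yaml
+agent:
+ prometheus:
+  enabled: true
+  settings:
+   interval: 30
+   logErrors: true
+   maxMetrics: 1000
+   maxMetricsPerProcess: 1000
+   maxTagsPerMetric: 20
+   histograms: true
+```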
+
+## **agent.statsd.enabled**
+**Required**: `false`
+**Description**: Whether to enable ingestion of statsd metrics or not.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ statsd:
+ enabled: true
+```
+
+## **agent.statsd.settings.limit**
+**Required**: `false`
+**Description**: The maximum number of statsd metrics that will be reported to Sysdig Monitor.
+**Options**:
+**Default**: `100`
+**Example**:
+
+```yaml
+agent:
+ statsd:
+ settings:
+ limit: 1000
+```
+
+## **agent.jmx.enabled**
+**Required**: `false`
+**Description**: Whether to enable ingestion of JVM metrics via the JMX protocol or not. If enabled, the agent will discover Java virtual machines and poll them for basic JVM metrics like heap and GC, as well as a few application-specific metrics.
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ jmx:
+ enabled: true
+```
+
+## **agent.jmx.settings.limit**
+**Required**: `false`
+**Description**: The total number of JMX metrics polled per host.
+**Options**:
+**Default**: `3000`
+**Example**:
+
+```yaml
+agent:
+ jmx:
+ settings:
+ limit: 1000
+```
+
+## **agent.ebpf.enabled**
+**Required**: `false`
+**Description**: Enable eBPF support for Sysdig instead of sysdig-probe kernel module.
+_**Note**: This should be enabled for GKE COS, as the installation of the sysdig-probe kernel module is not allowed._
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ ebpf:
+ enabled: true
+```
+
+## **agent.ebpf.settings.mountEtcVolume**
+**Required**: `false`
+**Description**: Needed to detect which kernel version is running on Google COS.
+_**Note**: This should be configured appropriately for GKE COS, as the installation of the sysdig-probe kernel module is not allowed._
+**Options**: `true|false`
+**Default**: `true`
+**Example**:
+
+```yaml
+agent:
+ ebpf:
+ settings:
+   mountEtcVolume: true
+```
+
+## **agent.appChecks.elasticsearch.authEnabled**
+**Required**: `false`
+**Description**: Whether elasticsearch has auth enabled or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ authEnabled: true
+```
+
+## **agent.appChecks.elasticsearch.url**
+**Required**: `false`
+**Description**: Elasticsearch Endpoint.
+_**Note**: This should be configured if `agent.appChecks.elasticsearch.authEnabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ url: https://sysdigcloud-elasticsearch
+```
+
+## **agent.appChecks.elasticsearch.port**
+**Required**: `false`
+**Description**: Elasticsearch Port.
+_**Note**: This should be configured if `agent.appChecks.elasticsearch.authEnabled` is set to `true`._
+**Options**: `1024-65535`
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ port: 9200
+```
+
+## **agent.appChecks.elasticsearch.username**
+**Required**: `false`
+**Description**: Username to use for authentication to elasticsearch.
+_**Note**: This should be configured if `agent.appChecks.elasticsearch.authEnabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ username: readonly
+```
+
+## **agent.appChecks.elasticsearch.password**
+**Required**: `false`
+**Description**: Password to use for authentication to elasticsearch.
+_**Note**: This should be configured if `agent.appChecks.elasticsearch.authEnabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ password: some_password
+```
+
+## **agent.appChecks.elasticsearch.verifySSL**
+**Required**: `false`
+**Description**: Whether to validate Elasticsearch SSL certificate or not.
+_**Note**: This should be configured if `agent.appChecks.elasticsearch.authEnabled` is set to `true`._
+**Options**: `true|false`
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ elasticsearch:
+ verifySSL: false
+```
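+
+When `authEnabled` is `true`, the remaining Elasticsearch app-check parameters are expected alongside it. A sketch reusing the placeholder endpoint and credentials from the examples above:
+
+```yaml
+agent:
+ appChecks:
+  elasticsearch:
+   authEnabled: true
+   url: https://sysdigcloud-elasticsearch
+   port: 9200
+   username: readonly
+   password: some_password
+   verifySSL: false
+```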
+
+## **agent.appChecks.kafka.enabled**
+**Required**: `false`
+**Description**: Whether to enable collection of metrics for kafka using JMX polling or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ enabled: true
+```
+
+## **agent.appChecks.kafka.arg**
+**Required**: `false`
+**Description**: Process arguments to match for Kafka
+_**Note**: This should be configured if `agent.appChecks.kafka.enabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ arg: Kafka.kafka
+```
+
+## **agent.appChecks.kafka.url**
+**Required**: `false`
+**Description**: Kafka Endpoint.
+_**Note**: This should be configured if `agent.appChecks.kafka.enabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ url: localhost
+```
+
+## **agent.appChecks.kafka.port**
+**Required**: `false`
+**Description**: Kafka Port.
+_**Note**: This should be configured if `agent.appChecks.kafka.enabled` is set to `true`._
+**Options**: `1024-65535`
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ port: 9200
+```
+
+## **agent.appChecks.kafka.zk.url**
+**Required**: `false`
+**Description**: Kafka Zookeeper Endpoint.
+_**Note**: This should be configured if `agent.appChecks.kafka.enabled` is set to `true`._
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ zk:
+ url: localhost
+```
+
+## **agent.appChecks.kafka.zk.port**
+**Required**: `false`
+**Description**: Kafka Zookeeper Port.
+_**Note**: This should be configured if `agent.appChecks.kafka.enabled` is set to `true`._
+**Options**: `1024-65535`
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ zk:
+ port: 2181
+```
+
+## **agent.appChecks.kafka.enableConsumerOffsets**
+**Required**: `false`
+**Description**: Whether to store consumer group config info inside Kafka itself or not. Enabling this will provide better performance.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ enableConsumerOffsets: true
+```
+
+## **agent.appChecks.kafka.enableAggregationPartitions**
+**Required**: `false`
+**Description**: Whether to enable aggregation of partitions at the topic level or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ kafka:
+ enableAggregationPartitions: true
+```
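+
+The Kafka app-check parameters above are only read when `agent.appChecks.kafka.enabled` is `true`. A combined sketch reusing the placeholder values from the examples above:
+
+```yaml
+agent:
+ appChecks:
+  kafka:
+   enabled: true
+   arg: Kafka.kafka
+   url: localhost
+   zk:
+    url: localhost
+    port: 2181
+   enableConsumerOffsets: true
+   enableAggregationPartitions: true
+```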
+
+## **agent.appChecks.mysql.enabled**
+**Required**: `false`
+**Description**: Whether to enable the MySQL app check or not.
+**Options**: `true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ mysql:
+ enabled: true
+```
+
+## **agent.appChecks.mysql.hostname**
+**Required**: `false`
+**Description**: Name of the MySQL host that the agent should connect to.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ mysql:
+ hostname: mysql-service-url
+```
+
+## **agent.appChecks.mysql.user**
+**Required**: `false`
+**Description**: The username of the MySQL user that the agent will use when communicating with MySQL.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ mysql:
+ user: mysql-user
+```
+
+## **agent.appChecks.mysql.password**
+**Required**: `false`
+**Description**: The password of the MySQL user that the agent will use when communicating with MySQL.
+**Options**:
+**Default**:
+**Example**:
+
+```yaml
+agent:
+ appChecks:
+ mysql:
+ password: mysql-password
+```
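+
+The MySQL app-check settings above are used together once `enabled` is `true`. A sketch with the placeholder host and credentials from the examples above:
+
+```yaml
+agent:
+ appChecks:
+  mysql:
+   enabled: true
+   hostname: mysql-service-url
+   user: mysql-user
+   password: mysql-password
+```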
+
+## **agent.resources.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to agent pods.
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 3 |
+| medium | 5 |
+| large | 8 |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ limits:
+ cpu: 2
+```
+
+## **agent.resources.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to agent pods.
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 3Gi |
+| medium | 6Gi |
+| large | 10Gi |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ limits:
+   memory: 2Gi
+```
+
+## **agent.resources.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule agent pods.
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 3 |
+| large | 5 |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ requests:
+ cpu: 2
+```
+
+## **agent.resources.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule agent pods.
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 3Gi |
+| large | 6Gi |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ requests:
+   memory: 2Gi
+```
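+
+Agent requests and limits are normally tuned as a pair. A sketch with illustrative values (memory values must carry a unit such as `Gi`):
+
+```yaml
+agent:
+ resources:
+  requests:
+   cpu: 1
+   memory: 1Gi
+  limits:
+   cpu: 2
+   memory: 2Gi
+```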
+
+## **agent.resources.watchdog.max_memory_usage_mb**
+**Required**: `false`
+**Description**: The maximum amount of memory the dragent process can use. Units for this value are megabytes (MB).
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512 |
+| medium | 1024 |
+| large | 2048 |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ watchdog:
+ max_memory_usage_mb: 1024
+```
+
+## **agent.resources.watchdog.cointerface**
+**Required**: `false`
+**Description**: The maximum amount of memory cointerface is allowed to consume. Units for this value are megabytes (MB). Cointerface is responsible for fetching Kubernetes events from the API server and building the relationship graph for all Kubernetes objects; this can take a lot of memory during startup and in large clusters.
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 512 |
+| medium | 2048 |
+| large | 4096 |
+
+**Example**:
+
+```yaml
+agent:
+ resources:
+ watchdog:
+ cointerface: 1024
+```
+
+## **sysdig.eventsForwarderEnabledIntegrations**
+**Required**: `false`
+**Description**: List of enabled integrations, e.g. "MCM,QRADAR"
+**Options**:
+**Default**: ""
+**Example**:
+
+```yaml
+sysdig:
+ eventsForwarderEnabledIntegrations: "MCM,QRADAR"
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.maxDurationBeforeDisconnection**
+**Required**: `false`
+**Description**: Max duration after the last ping from an AC before it is considered
+disconnected. It cannot be greater than 30m. See also `pingTTLDuration`.
+**Options**:
+**Default**: 10m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ maxDurationBeforeDisconnection: 20m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.confTTLDuration**
+**Required**: `false`
+**Description**: TTL of the cache for the cluster configuration. It should be
+used by the AC as a polling interval to retrieve the updated cluster configuration
+from the API. It cannot be greater than 30m.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ confTTLDuration: 10m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.pingTTLDuration**
+**Required**: `false`
+**Description**: TTL of an AC ping. It should be used by the AC as a polling
+interval to perform a HEAD request on the ping endpoint to notify that it is
+still alive and connected. It cannot be greater than 30m, and it cannot be
+greater than `maxDurationBeforeDisconnection`.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ pingTTLDuration: 8m
+```
+
+## **sysdig.secure.scanning.admissionControllerAPI.clusterConfCacheMaxDuration**
+**Required**: `false`
+**Description**: Max duration of the cluster configuration cache. The API returns
+this value as `max-age` in seconds, and the FE uses it for caching the cluster
+configuration. The FE also asks for a new cluster configuration using this value
+as a time interval. It cannot be greater than 30m.
+**Options**:
+**Default**: 5m
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ admissionControllerAPI:
+ clusterConfCacheMaxDuration: 9m
+```
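+
+The four admission controller durations above are related: `pingTTLDuration` must not exceed `maxDurationBeforeDisconnection`, and none of them may exceed 30m. A consistent sketch combining the example values above:
+
+```yaml
+sysdig:
+ secure:
+  scanning:
+   admissionControllerAPI:
+    maxDurationBeforeDisconnection: 20m
+    confTTLDuration: 10m
+    pingTTLDuration: 8m
+    clusterConfCacheMaxDuration: 9m
+```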
+
+## **sysdig.scanningAnalysiscollectorConcurrentUploads**
+**Required**: `false`
+**Description**: Number of concurrent uploads for Scanning Analysis Collector
+**Options**:
+**Default**: "5"
+**Example**:
+
+```yaml
+sysdig:
+ scanningAnalysiscollectorConcurrentUploads: 5
+```
+
+## **sysdig.scanningAlertMgrForceAutoScan**
+**Required**: `false`
+**Description**: Enable the runtime image autoscan feature. Note that for adopting a more distributed way of scanning runtime images, the Node Image Analyzer (NIA) is preferable.
+**Options**:
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ scanningAlertMgrForceAutoScan: false
+```
+
+## **sysdig.secure.scanning.veJanitor.cronjob**
+**Required**: `false`
+**Description**: Cronjob schedule
+**Options**:
+**Default**: "0 0 * * *"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+  scanning:
+   veJanitor:
+    cronjob: "5 0 * * *"
+```
+
+## **sysdig.secure.scanning.veJanitor.anchoreDBsslmode**
+**Required**: `false`
+**Description**: Anchore db ssl mode. More info: https://www.postgresql.org/docs/9.1/libpq-ssl.html
+**Options**:
+**Default**: "disable"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+  scanning:
+   veJanitor:
+    anchoreDBsslmode: "disable"
+```
+
+## **sysdig.secure.scanning.veJanitor.scanningDbEngine**
+**Required**: `false`
+**Description**: Which scanning database engine to use.
+**Options**: mysql
+**Default**: "mysql"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+  scanning:
+   veJanitor:
+    scanningDbEngine: "mysql"
+```
+
+
+## **sysdig.metadataService.enabled**
+**Required**: `false`
+**Description**: Whether to enable metadata-service or not.
+**Do not modify this unless you know what you are doing, as modifying it could
+have unintended consequences.**
+**Options**:`true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ metadataService:
+ enabled: true
+```
+
+## **sysdig.resources.metadataService.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.metadataService.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.metadataService.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.metadataService.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule metadataService pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ metadataService:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.metadataServiceReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig metadataService replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 2 |
+| medium | 6 |
+| large | 10 |
+
+**Example**:
+
+```yaml
+sysdig:
+ metadataServiceReplicaCount: 4
+```
+
+## **sysdig.metadataServiceVersion**
+**Required**: `false`
+**Description**: Docker image tag of metadataService, relevant when `sysdig.metadataService.enabled` is `true`.
+**Options**:
+**Default**: 1.0.1.1
+**Example**:
+
+```yaml
+sysdig:
+ metadataServiceVersion: 1.0.1.12
+```
+
+## **sysdig.helmRenderer.enabled**
+**Required**: `false`
+**Description**: Whether to enable helm-renderer or not.
+**Do not modify this unless you know what you are doing, as modifying it could
+have unintended consequences.**
+**Options**:`true|false`
+**Default**: `false`
+**Example**:
+
+```yaml
+sysdig:
+ helmRenderer:
+ enabled: true
+```
+
+## **sysdig.resources.helmRenderer.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4 |
+| medium | 8 |
+| large | 16 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ limits:
+ cpu: 2
+```
+
+## **sysdig.resources.helmRenderer.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 4Gi |
+| medium | 8Gi |
+| large | 16Gi |
+
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ limits:
+ memory: 10Mi
+```
+
+## **sysdig.resources.helmRenderer.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1 |
+| medium | 2 |
+| large | 4 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ requests:
+ cpu: 2
+```
+
+## **sysdig.resources.helmRenderer.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule helmRenderer pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 1Gi |
+| medium | 2Gi |
+| large | 4Gi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ helmRenderer:
+ requests:
+ memory: 200Mi
+```
+
+## **sysdig.helmRendererReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig helmRenderer replicas. This is a no-op for clusters
+of `size` `small`.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 2 |
+| medium | 6 |
+| large | 10 |
+
+**Example**:
+
+```yaml
+sysdig:
+ helmRendererReplicaCount: 4
+```
+
+## **sysdig.helmRendererVersion**
+**Required**: `false`
+**Description**: Docker image tag of helmRenderer, relevant when `sysdig.helmRenderer.enabled` is `true`.
+**Options**:
+**Default**: 0.1.32
+**Example**:
+
+```yaml
+sysdig:
+ helmRendererVersion: 0.1.32
+```
+
+## **sysdig.secure.activityAudit.enabled**
+**Required**: `false`
+**Description**: Enable Activity Audit for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ activityAudit:
+ enabled: true
+```
+
+## **sysdig.secure.activityAudit.janitor.retentionDays**
+**Required**: `false`
+**Description**: Retention period for Activity Audit data.
+**Options**:
+**Default**: 90
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ activityAudit:
+ janitor:
+ retentionDays: 90
+```
+
+## **sysdig.secure.anchore.enabled**
+**Required**: `false`
+**Description**: Enable anchore for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ anchore:
+ enabled: true
+```
+
+## **sysdig.secure.compliance.enabled**
+**Required**: `false`
+**Description**: Enable compliance for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ compliance:
+ enabled: true
+```
+
+## **sysdig.secure.netsec.enabled**
+**Required**: `false`
+**Description**: Enable netsec for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ netsec:
+ enabled: true
+```
+
+## **sysdig.secure.overview.enabled**
+**Required**: `false`
+**Description**: Enable overview for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ overview:
+ enabled: true
+```
+
+## **sysdig.secure.padvisor.enabled**
+**Required**: `false`
+**Description**: Enable policy advisor for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ padvisor:
+ enabled: true
+```
+
+## **sysdig.secure.profiling.enabled**
+**Required**: `false`
+**Description**: Enable profiling for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ profiling:
+ enabled: true
+```
+
+## **sysdig.secure.scanning.reporting.enabled**
+**Required**: `false`
+**Description**: Enable reporting for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ reporting:
+ enabled: true
+```
+
+## **sysdig.secure.scanning.enabled**
+**Required**: `false`
+**Description**: Enable scanning for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ scanning:
+ enabled: true
+```
+
+## **sysdig.secure.events.enabled**
+**Required**: `false`
+**Description**: Enable events for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ events:
+ enabled: true
+```
+
+## **sysdig.secure.eventsForwarder.enabled**
+**Required**: `false`
+**Description**: Enable events forwarder for Sysdig Secure.
+**Options**:
+**Default**: true
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ eventsForwarder:
+ enabled: true
+```
+
+## **sysdig.secure.falcoRulesUpdater.enabled**
+**Required**: `false`
+**Description**: Enable the falcoRulesUpdater CronJob. It runs an automated update of the Falco rules. For airgap installs, it expects to find the image in the same registry used for all other services.
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ falcoRulesUpdater:
+ enabled: true
+```
+
+## **sysdig.secure.falcoRulesUpdater.schedule**
+**Required**: `false`
+**Description**: Sets the `.spec.schedule` for the falcoRulesUpdater CronJob
+**Options**:
+**Default**: "0 1 * * *"
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ falcoRulesUpdater:
+ schedule: "*/10 * * * *"
+```
+
+## **sysdig.resources.rapid-response-connector.limits.cpu**
+**Required**: `false`
+**Description**: The amount of cpu assigned to rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ limits:
+ cpu: 1
+```
+
+## **sysdig.resources.rapid-response-connector.limits.memory**
+**Required**: `false`
+**Description**: The amount of memory assigned to rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | limits |
+| ------------ | ------ |
+| small | 500Mi |
+| medium | 500Mi |
+| large | 500Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ limits:
+ memory: 500Mi
+```
+
+## **sysdig.resources.rapid-response-connector.requests.cpu**
+**Required**: `false`
+**Description**: The amount of cpu required to schedule rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 250m |
+| medium | 250m |
+| large | 250m |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ requests:
+ cpu: 250m
+```
+
+## **sysdig.resources.rapid-response-connector.requests.memory**
+**Required**: `false`
+**Description**: The amount of memory required to schedule rapid-response-connector pods
+**Options**:
+**Default**:
+
+| cluster-size | requests |
+| ------------ | -------- |
+| small | 50Mi |
+| medium | 50Mi |
+| large | 50Mi |
+
+**Example**:
+
+```yaml
+sysdig:
+ resources:
+ rapid-response-connector:
+ requests:
+ memory: 50Mi
+```
+
+## **sysdig.rapidResponseConnectorReplicaCount**
+**Required**: `false`
+**Description**: Number of Sysdig rapid-response-connector replicas.
+**Options**:
+**Default**:
+
+| cluster-size | count |
+| ------------ | ----- |
+| small | 1 |
+| medium | 1 |
+| large | 1 |
+
+**Example**:
+
+```yaml
+sysdig:
+ rapidResponseConnectorReplicaCount: 1
+```
+
+## **sysdig.secure.rapidResponse.enabled**
+**Required**: `false`
+**Description**: Whether to deploy rapid response or not.
+**Options**:
+**Default**: false
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ enabled: false
+```
+
+## **sysdig.secure.rapidResponse.validationCodeLength**
+**Required**: `false`
+**Description**: Length of mfa validation code sent via e-mail.
+**Options**:
+**Default**: 6
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ validationCodeLength: 8
+```
+
+## **sysdig.secure.rapidResponse.validationCodeSecondsDuration**
+**Required**: `false`
+**Description**: Duration in seconds of mfa validation code sent via e-mail.
+**Options**:
+**Default**: 180
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+      validationCodeSecondsDuration: 300
+```
+
+## **sysdig.secure.rapidResponse.sessionTotalSecondsTTL**
+**Required**: `false`
+**Description**: Global duration of session in seconds.
+**Options**:
+**Default**: 7200
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ sessionTotalSecondsTTL: 7200
+```
+
+
+## **sysdig.secure.rapidResponse.sessionIdleSecondsTTL**
+**Required**: `false`
+**Description**: Idle duration of session in seconds.
+**Options**:
+**Default**: 300
+**Example**:
+
+```yaml
+sysdig:
+ secure:
+ rapidResponse:
+ sessionIdleSecondsTTL: 300
+```
+
+
+## **sysdig.secure.scanning.feedsEnabled**
+**Required**: `false`
+**Description**: Deploys a local Sysdig Secure feeds API and DB for airgapped installs that cannot reach Sysdig's SaaS products.
+**Options**: `true|false`
+**Default**: `false`
+
+**Example**:
+```yaml
+sysdig:
+ secure:
+ scanning:
+ feedsEnabled: true
+```
+
+## **sysdig.feedsAPIVersion**
+**Required**: `false`
+**Description**: Sets feeds API version
+**Options**:
+**Default**: `latest`
+
+**Example**:
+```yaml
+sysdig:
+ feedsAPIVersion: 0.5.0
+```
+
+## **sysdig.feedsDBVersion**
+**Required**: `false`
+**Description**: Sets feeds database version
+**Options**:
+**Default**: `latest`
+
+**Example**:
+```yaml
+sysdig:
+ feedsDBVersion: 0.5.0-2020-03-11
+```
diff --git a/installer/docs/upgrade.md b/installer/docs/upgrade.md
new file mode 100644
index 00000000..17fd8ca7
--- /dev/null
+++ b/installer/docs/upgrade.md
@@ -0,0 +1,93 @@
+# Upgrade
+
+## Overview
+
+The Installer can be used to upgrade a Sysdig implementation. As in an
+install, you must meet the prerequisites, download the values.yaml, edit the
+values as indicated, and run the Installer. The main difference is that you
+run it twice: once to discover the differences between the old and new
+versions, and the second time to deploy the new version.
+
+As with installs, it can be used in airgapped or non-airgapped environments.
+
+Review the [Prerequisites](../README.md#prerequisites) and [Installation
+Options](../README.md#quickstart-install) for more context.
+
+## Upgrade Steps
+
+To upgrade:
+
+1. Copy the current version of sysdig-chart/values.yaml to your working directory:
+ ```bash
+ wget https://raw.githubusercontent.com/draios/sysdigcloud-kubernetes/installer/installer/values.yaml
+ ```
+2. Edit the following values:
+ - [`scripts`](docs/configuration_parameters.md#scripts): Set this to
+ `generate diff`. This setting will generate the differences between the
+ installed environment and the upgrade version. The changes will be displayed
+ in your terminal.
+ - [`size`](docs/configuration_parameters.md#size): Specifies the size of the
+ cluster. Size defines CPU, Memory, Disk, and Replicas. Valid options are:
+ small, medium and large.
+ - [`quaypullsecret`](docs/configuration_parameters.md#quaypullsecret):
+ quay.io credentials provided with your Sysdig purchase confirmation mail.
+  - [`storageClassProvisioner`](docs/configuration_parameters.md#storageClassProvisioner):
+    The name of the storage class provisioner to use when creating the
+    configured storageClassName parameter. Valid options: aws, gke, hostPath.
+    If you do not use one of the dynamic storage provisioners (aws or gke),
+    enter hostPath and refer to the Advanced examples for how to configure
+    static storage provisioning with this option.
+ - [`sysdig.license`](docs/configuration_parameters.md#sysdiglicense): Sysdig license key
+ provided with your Sysdig purchase confirmation mail
+ - [`sysdig.dnsName`](docs/configuration_parameters.md#sysdigdnsName): The domain name
+ the Sysdig APIs will be served on.
+ - [`sysdig.collector.dnsName`](docs/configuration_parameters.md#sysdigcollectordnsName):
+ (OpenShift installs only) Domain name the Sysdig collector will be served on.
+ When not configured it defaults to whatever is configured for sysdig.dnsName.
+ - [`sysdig.ingressNetworking`](docs/configuration_parameters.md#sysdigingressnetworking):
+ The networking construct used to expose the Sysdig API and collector. Options
+ are:
+ - hostnetwork: sets the hostnetworking in the ingress daemonset and opens
+ host ports for api and collector. This does not create a Kubernetes service.
+ - loadbalancer: creates a service of type loadbalancer and expects that
+ your Kubernetes cluster can provision a load balancer with your cloud provider.
+ - nodeport: creates a service of type nodeport. The node ports can be
+ customized with:
+
+ - sysdig.ingressNetworkingInsecureApiNodePort
+ - sysdig.ingressNetworkingApiNodePort
+ - sysdig.ingressNetworkingCollectorNodePort
+
+ **NOTE**: If doing an airgapped install (see Airgapped Installation Options), you
+ would also edit the following values:
+
+ - [`airgapped_registry_name`](docs/configuration_parameters.md#airgapped_registry_name):
+ The URL of the airgapped (internal) docker registry. This URL is used for
+ installations where the Kubernetes cluster can not pull images directly from
+ Quay.
+ - [`airgapped_registry_password`](docs/configuration_parameters.md#airgapped_registry_password):
+ The password for the configured airgapped_registry_username. Ignore this
+ parameter if the registry does not require authentication.
+ - [`airgapped_registry_username`](docs/configuration_parameters.md#airgapped_registry_username):
+ The username for the configured airgapped_registry_name. Ignore this
+ parameter if the registry does not require authentication.
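+
+   To sanity-check the air-gap registry settings before running the Installer,
+   a quick manual login against the registry works. This is only an
+   illustrative sketch; the registry hostname and username are placeholders:
+
+   ```bash
+   # Use the same values you set for airgapped_registry_name and
+   # airgapped_registry_username in values.yaml; you will be prompted for
+   # the airgapped_registry_password. A successful login confirms the
+   # registry is reachable and the credentials are valid.
+   docker login my-registry.example.com -u my-registry-user
+   ```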
+
+3. Run the Installer. (If you are in an airgapped environment, make sure you
+   follow the installation instructions on how to get the images into your
+   airgapped registry.)
+ ```bash
+ ./installer diff
+ ```
+4. If you are fine with the differences displayed, then run:
+ ```bash
+ ./installer deploy
+ ```
+   If you find differences that you want to preserve, look in the
+   [Configuration Parameters](docs/configuration_parameters.md) documentation
+   for the configuration parameter that matches the difference you intend to
+   preserve, update your values.yaml accordingly, and repeat step 3 until you
+   are satisfied with the differences. Then set `scripts` to `deploy` and run
+   the Installer a final time.
+
+5. The Cassandra and ElasticSearch datastores use the `OnDelete` update
+   strategy, so their pods must be restarted manually to complete the upgrade.
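+
+   A minimal sketch of that restart, assuming the default `sysdigcloud`
+   namespace and default StatefulSet pod names (verify the actual names in
+   your cluster first):
+
+   ```bash
+   # Confirm the datastore StatefulSets and their pod names
+   kubectl -n sysdigcloud get statefulsets
+   kubectl -n sysdigcloud get pods
+
+   # Delete the pods one at a time; because the update strategy is OnDelete,
+   # the controller recreates each pod with the upgraded spec. Wait for the
+   # replacement to become Ready before moving on to the next pod.
+   kubectl -n sysdigcloud delete pod sysdigcloud-cassandra-0
+   sleep 30
+   kubectl -n sysdigcloud wait --for=condition=Ready pod/sysdigcloud-cassandra-0 --timeout=15m
+   ```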
diff --git a/installer/examples/elasticsearch-init-vmmaxmapcount/overlays/patch.yaml b/installer/examples/elasticsearch-init-vmmaxmapcount/overlays/patch.yaml
new file mode 100644
index 00000000..5862d399
--- /dev/null
+++ b/installer/examples/elasticsearch-init-vmmaxmapcount/overlays/patch.yaml
@@ -0,0 +1,32 @@
+# This patch file adds an init container to ElasticSearch and sets vm.max_map_count on the ES hosts
+#
+# WARNING: This patch is no longer necessary. Instead, you can add this option to the installer values:
+#
+# elasticsearch:
+# ...
+# setVmMaxMapCount: true
+#
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: sysdigcloud-elasticsearch
+spec:
+ template:
+ spec:
+ initContainers:
+ - name: elasticsearch-init-vmmaxmapcount
+ image: quay.io/sysdig/opensearch-1:
+ securityContext:
+ capabilities:
+ drop:
+ - ALL
+ privileged: true
+ readOnlyRootFilesystem: true
+ runAsNonRoot: false
+ runAsUser: 0
+ command:
+ - sysctl
+ - -w
+ args:
+ - vm.max_map_count=262144
diff --git a/installer/examples/elasticsearch-init-vmmaxmapcount/values.yaml b/installer/examples/elasticsearch-init-vmmaxmapcount/values.yaml
new file mode 100644
index 00000000..9955e422
--- /dev/null
+++ b/installer/examples/elasticsearch-init-vmmaxmapcount/values.yaml
@@ -0,0 +1,13 @@
+apps: monitor
+schema_version: 1.0.0
+size: small
+quaypullsecret:
+storageClassProvisioner: aws
+sysdig:
+ ingressNetworking: loadbalancer
+ admin:
+ username: foo@bar.com
+ license:
+ dnsName: foo.bar
+elasticsearch:
+ setVmMaxMapCount: true
diff --git a/installer/examples/node-labels-and-taints/values.yaml b/installer/examples/node-labels-and-taints/values.yaml
new file mode 100644
index 00000000..5b06e9b5
--- /dev/null
+++ b/installer/examples/node-labels-and-taints/values.yaml
@@ -0,0 +1,52 @@
+# Node labels and node taints can be combined to ensure only Sysdig platform pods run on a particular node. The example below, starting from the `tolerations` section, shows how to configure the installer to take advantage of labels and tolerations.
+size: medium
+# Replace with quay.io pull secrets provided by the sales team.
+quaypullsecret:
+# Acceptable values here are aws|gke|none|hostPath. Change this to none and configure storageClassName if you want to use an existing storageClass
+storageClassProvisioner: hostPath
+# Uncomment the below to specify an existing storageClass, if not configured a storageClass is created with the configured storageClassProvisioner
+# storageClassName: sysdig
+elasticsearch:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-awesome-node01
+ - my-awesome-node02
+ - my-awesome-node03
+sysdig:
+ mysql:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-awesome-node01
+ postgresql:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-awesome-node01
+ cassandra:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-awesome-node01
+ - my-awesome-node02
+ - my-awesome-node03
+ # Replace with domain name the api should be served on.
+ dnsName:
+ admin:
+ username: pov@sysdig.com
+ # Replace with license provided by the sales team.
+ license:
+
+ # Everything below here is the core piece of this configuration.
+
+  # Nodes need to have been assigned the taint dedicated=sysdig:NoSchedule for
+  # the below to work, e.g.:
+  # kubectl taint nodes my-awesome-node01 dedicated=sysdig:NoSchedule
+ tolerations:
+ - key: "dedicated"
+ operator: "Equal"
+ value: sysdig
+ effect: "NoSchedule"
+# Nodes need to have been assigned the label role=sysdig for the below to work,
+# e.g.: kubectl label nodes my-awesome-node01 role=sysdig
+nodeaffinityLabel:
+ key: role
+ value: sysdig
diff --git a/installer/examples/openshift-with-hostpath/values.yaml b/installer/examples/openshift-with-hostpath/values.yaml
new file mode 100644
index 00000000..63133d72
--- /dev/null
+++ b/installer/examples/openshift-with-hostpath/values.yaml
@@ -0,0 +1,58 @@
+size: medium
+# The below can be ignored for non-openshift clusters.
+deployment: openshift
+# Replace with quay.io pull secrets provided by the sales team.
+quaypullsecret:
+# Acceptable values here are aws|gke|none|hostPath. Change this to none and configure storageClassName if you want to use an existing storageClass
+storageClassProvisioner: hostPath
+# Uncomment the below to specify an existing storageClass, if not configured a storageClass is created with the configured storageClassProvisioner
+# storageClassName: sysdig
+elasticsearch:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+sysdig:
+ # Openshift API url along with its port number
+ openshiftUrl:
+ # Username of the user to access the configured openshift url
+ openshiftUser:
+ # Password of the user to access the configured openshift url
+ openshiftPassword:
+ collector:
+ dnsName:
+ # Replace with domain name the api should be served on.
+ dnsName:
+ admin:
+ username:
+ # Replace with license provided by the sales team.
+ license:
+ cassandra:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ - my-cool-host4.com
+ - my-cool-host5.com
+ - my-cool-host6.com
+ postgresql:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-cool-host1.com
+ kafka:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
+ zookeeper:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - my-cool-host1.com
+ - my-cool-host2.com
+ - my-cool-host3.com
diff --git a/installer/examples/rbac/README.md b/installer/examples/rbac/README.md
new file mode 100644
index 00000000..cd8ae585
--- /dev/null
+++ b/installer/examples/rbac/README.md
@@ -0,0 +1,39 @@
+# RBAC for Installer
+
+- RBAC resources required to run the `installer`
+
+- each of the directories below contains YAMLs for a specific case:
+
+[readonly](readonly)
+- readonly access to the namespace and minimal resources necessary for the installer to
+ `generate` and `secure-diff` the existing install (or for a new install)
+
+[external-ingress](external-ingress)
+- more restrictive RBAC access rights by using an external `ingress` object
+- TBD
+
+[fullaccess](fullaccess)
+- allows the execution of `installer` as-is, including rights for `StorageClass` and `IngressController`
+
+[openshift](openshift)
+- same base as `fullaccess` with some OCP-specific bindings: the SCC ones that give the installer the equivalent of running `oc adm policy add-scc-to-user `. Be aware that this example will not work with OpenShift 3.11; in that case you need to create the SCC ClusterRoles first (with `use` as the verb), as in the sketch after this list
+
+[openshift-pgha](openshift-pgha)
+- same as `openshift`, but the installer ServiceAccount has more grants since it needs to create ClusterRoles for the Zalando Postgres operator service account.
+
+[openshift-nopgha-noagent](openshift-nopgha-noagent)
+- OpenShift case where RBAC for deploying the agent is not needed (it is done outside the installer) and a Zalando Postgres operator is already installed, so we just need to use it.
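+
+For OpenShift 3.11 (see the `openshift` note above), a minimal sketch of creating the SCC ClusterRoles yourself before applying the bindings; the role and SCC names below are the ones referenced by the clusterrolebinding YAMLs and may need adjusting for your environment:
+
+```bash
+# Sketch only: on OCP 3.11 the system:openshift:scc:* ClusterRoles do not exist
+# yet, so create them with the "use" verb before applying clusterrolebinding.yaml.
+oc create clusterrole "system:openshift:scc:anyuid" --verb=use \
+  --resource=securitycontextconstraints --resource-name=anyuid
+oc create clusterrole "system:openshift:scc:privileged" --verb=use \
+  --resource=securitycontextconstraints --resource-name=privileged
+```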
+
+## Instructions
+
+- for each use case we provide YAMLs to create the necessary RBAC resources
+
+- this example assumes that Sysdig will be installed in the `sysdigcloud` namespace
+
+- apply these YAMLs to your cluster from an `admin` level account
+
+- create a `kubeconfig` for the `installer` ServiceAccount (see the sketch below)
+
+- use the `kubeconfig` to execute the installer
+
+- pro tip: if you have the OpenShift binary installed, you can just use `oc serviceaccounts create-kubeconfig installer`, which creates the ServiceAccount kubeconfig for you
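+
+A minimal end-to-end sketch for plain Kubernetes, assuming the `fullaccess`
+profile and the ServiceAccount token Secret model used by the Kubernetes
+releases this installer targets (the file names and paths are illustrative):
+
+```bash
+# Apply the RBAC YAMLs from an admin-level account
+kubectl apply -f fullaccess/
+
+# Build a kubeconfig for the installer ServiceAccount from its token secret
+SECRET=$(kubectl -n sysdigcloud get sa installer -o jsonpath='{.secrets[0].name}')
+TOKEN=$(kubectl -n sysdigcloud get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
+SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
+
+KCFG=installer.kubeconfig
+kubectl config set-cluster installer --server="$SERVER" --insecure-skip-tls-verify=true --kubeconfig="$KCFG"
+kubectl config set-credentials installer --token="$TOKEN" --kubeconfig="$KCFG"
+kubectl config set-context installer --cluster=installer --user=installer --namespace=sysdigcloud --kubeconfig="$KCFG"
+kubectl config use-context installer --kubeconfig="$KCFG"
+
+# Use the kubeconfig to execute the installer
+KUBECONFIG="$PWD/$KCFG" ./installer deploy
+```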
diff --git a/installer/examples/rbac/fullaccess/clusterrole.yaml b/installer/examples/rbac/fullaccess/clusterrole.yaml
new file mode 100644
index 00000000..4c011a3d
--- /dev/null
+++ b/installer/examples/rbac/fullaccess/clusterrole.yaml
@@ -0,0 +1,53 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: installer
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - endpoints
+ - nodes
+ - persistentvolumes
+ - pods
+ - secrets
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - create
+ - update
+- apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - clusterrolebindings
+ - clusterroles
+ verbs:
+ - get
+ - list
+ - create
+ - update
+- apiGroups:
+ - extensions
+ resources:
+ - ingresses
+ verbs:
+ - list
+ - get
+ - watch
+- apiGroups:
+ - extensions
+ resources:
+ - ingresses/status
+ verbs:
+ - update
diff --git a/installer/examples/rbac/fullaccess/clusterrolebinding.yaml b/installer/examples/rbac/fullaccess/clusterrolebinding.yaml
new file mode 100644
index 00000000..3697f88c
--- /dev/null
+++ b/installer/examples/rbac/fullaccess/clusterrolebinding.yaml
@@ -0,0 +1,13 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
diff --git a/installer/examples/rbac/fullaccess/role.yaml b/installer/examples/rbac/fullaccess/role.yaml
new file mode 100644
index 00000000..1ee55027
--- /dev/null
+++ b/installer/examples/rbac/fullaccess/role.yaml
@@ -0,0 +1,103 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: sysdigcloud
+ name: installer
+rules:
+ - apiGroups:
+ - 'extensions'
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - create
+ - list
+ - patch
+ - update
+ - delete
+ - apiGroups:
+ - 'policy'
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - update
+ - get
+ - list
+ - patch
+ - apiGroups:
+ - '*'
+ resources:
+ - networkpolicies
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - cronjobs
+ - configmaps
+ - deployments
+ - deployments/scale
+ - daemonsets
+ - endpoints
+ - events
+ - jobs
+ - namespaces
+ - podtemplates
+ - podsecuritypolicies
+ - pods
+ - pods/log
+ - pods/exec
+ - pod/delete
+ - pod/status
+ - podpreset
+ - persistentvolumeclaims
+ - replicationcontrollers
+ - replicasets
+ - secrets
+ - services
+ - serviceaccounts
+ - statefulsets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - apiGroups:
+ - '*'
+ resources:
+ - namespace
+ verbs:
+ - create
+ - get
+ - list
+ - update
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - roles
+ - rolebindings
+ verbs:
+ - create
+ - update
+ - delete
+ - get
+ - list
+
diff --git a/installer/examples/rbac/fullaccess/rolebinding.yaml b/installer/examples/rbac/fullaccess/rolebinding.yaml
new file mode 100644
index 00000000..19845dc6
--- /dev/null
+++ b/installer/examples/rbac/fullaccess/rolebinding.yaml
@@ -0,0 +1,14 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: installer
+ namespace: sysdigcloud
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
diff --git a/installer/examples/rbac/fullaccess/sa.yaml b/installer/examples/rbac/fullaccess/sa.yaml
new file mode 100644
index 00000000..a8c086bd
--- /dev/null
+++ b/installer/examples/rbac/fullaccess/sa.yaml
@@ -0,0 +1,6 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: installer
+ namespace: sysdigcloud
diff --git a/installer/examples/rbac/openshift-nopgha-noagent/clusterrole.yaml b/installer/examples/rbac/openshift-nopgha-noagent/clusterrole.yaml
new file mode 100644
index 00000000..a3b97b33
--- /dev/null
+++ b/installer/examples/rbac/openshift-nopgha-noagent/clusterrole.yaml
@@ -0,0 +1,39 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: installer
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - namespaces
+ - nodes
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions
+ verbs:
+ - get
+ - list
+ - watch
+...
diff --git a/installer/examples/rbac/openshift-nopgha-noagent/clusterrolebinding.yaml b/installer/examples/rbac/openshift-nopgha-noagent/clusterrolebinding.yaml
new file mode 100644
index 00000000..44151781
--- /dev/null
+++ b/installer/examples/rbac/openshift-nopgha-noagent/clusterrolebinding.yaml
@@ -0,0 +1,57 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+# We need the scc clusterrole to be able to
+# grants scc to sysdig service-accounts in
+# sysdig namespace.
+#
+# Starting from OCP 4.6 we already have
+# all the built-in clusteroles:
+#
+# system:openshift:scc:anyuid
+# system:openshift:scc:hostaccess
+# system:openshift:scc:hostmount
+# system:openshift:scc:hostnetwork
+# system:openshift:scc:nonroot
+# system:openshift:scc:privileged
+# system:openshift:scc:restricted
+#
+# According to:
+# https://github.com/draios/installer/blob/4d7b1886c4c91796a17c706eb85a20e6e25ba041/installer/pkg/installer/deploy.go#L1298-L1306
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-anyuid
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:anyuid
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-privileged
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:privileged
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift-nopgha-noagent/role.yaml b/installer/examples/rbac/openshift-nopgha-noagent/role.yaml
new file mode 100644
index 00000000..6825dcdd
--- /dev/null
+++ b/installer/examples/rbac/openshift-nopgha-noagent/role.yaml
@@ -0,0 +1,117 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: sysdigcloud
+ name: installer
+rules:
+ - apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - create
+ - list
+ - patch
+ - update
+ - delete
+ - apiGroups:
+ - 'policy'
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - update
+ - get
+ - list
+ - patch
+ - apiGroups:
+ - '*'
+ resources:
+ - networkpolicies
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - cronjobs
+ - configmaps
+ - deployments
+ - deployments/scale
+ - daemonsets
+ - endpoints
+ - events
+ - jobs
+ - namespaces
+ - podtemplates
+ - podsecuritypolicies
+ - pods
+ - pods/log
+ - pods/exec
+ - pod/delete
+ - pod/status
+ - podpreset
+ - persistentvolumeclaims
+ - replicationcontrollers
+ - replicasets
+ - secrets
+ - services
+ - serviceaccounts
+ - statefulsets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - roles
+ - rolebindings
+ verbs:
+ - create
+ - update
+ - delete
+ - get
+ - list
+ - apiGroups:
+ - acid.zalan.do
+ resources:
+ - postgresqls
+ - postgresqls/status
+ verbs:
+ - create
+ - delete
+ - deletecollection
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - acid.zalan.do
+ resources:
+ - postgresteams
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - acid.zalan.do
+ resources:
+ - operatorconfigurations
+ verbs:
+ - get
+ - list
+ - watch
+...
diff --git a/installer/examples/rbac/openshift-nopgha-noagent/rolebinding.yaml b/installer/examples/rbac/openshift-nopgha-noagent/rolebinding.yaml
new file mode 100644
index 00000000..6ccd2581
--- /dev/null
+++ b/installer/examples/rbac/openshift-nopgha-noagent/rolebinding.yaml
@@ -0,0 +1,15 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: installer
+ namespace: sysdigcloud
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift-nopgha-noagent/sa.yaml b/installer/examples/rbac/openshift-nopgha-noagent/sa.yaml
new file mode 100644
index 00000000..a59bb243
--- /dev/null
+++ b/installer/examples/rbac/openshift-nopgha-noagent/sa.yaml
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift-pgha/clusterrole.yaml b/installer/examples/rbac/openshift-pgha/clusterrole.yaml
new file mode 100644
index 00000000..e402d2f8
--- /dev/null
+++ b/installer/examples/rbac/openshift-pgha/clusterrole.yaml
@@ -0,0 +1,229 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: installer
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - endpoints
+ - nodes
+ - persistentvolumes
+ - pods
+ - secrets
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - create
+ - update
+- apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - clusterrolebindings
+ - clusterroles
+ verbs:
+ - get
+ - list
+ - create
+ - update
+ - patch
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs:
+ - list
+ - watch
+ - get
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses/status
+ verbs:
+ - update
+# -----> PG HA
+# PG Ha notes: Even if we are going to repeat some apigroup/resources this is how we can
+# grants all the rbac we need and at the same time use the less-privileges method
+# -----> Not d-r-y but better than wide grants
+- apiGroups:
+ - ""
+ resources:
+ - configmaps
+ verbs:
+ - delete
+ - create
+ - deletecollection
+ - get
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - delete
+ - deletecollection
+ - update
+- apiGroups:
+ - ""
+ resources:
+ - persistentvolumeclaims
+ verbs:
+ - delete
+ - get
+ - list
+ - patch
+ - update
+- apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+ - patch
+ - create
+- apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs:
+ - create
+ - delete
+ - update
+- apiGroups:
+ - ""
+ resources:
+ - serviceaccounts
+ verbs:
+ - get
+ - create
+- apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - create
+ - delete
+ - update
+ - patch
+# We need to have the grants to have the power to create grants at cluster level to the target sa
+- apiGroups:
+ - acid.zalan.do
+ resources:
+ - postgresqls
+ - postgresqls/status
+ - operatorconfigurations
+ verbs:
+ - create
+ - delete
+ - deletecollection
+ - get
+ - list
+ - patch
+ - update
+ - watch
+# operator only reads PostgresTeams
+- apiGroups:
+ - acid.zalan.do
+ resources:
+ - postgresteams
+ verbs:
+ - get
+ - list
+ - watch
+# to create or get/update CRDs when starting up
+- apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions
+ verbs:
+ - create
+ - get
+ - patch
+ - update
+# to watch Spilo pods and do rolling updates. Creation via StatefulSet
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ verbs:
+ - delete
+ - update
+ - patch
+# to resize the filesystem in Spilo pods when increasing volume size
+- apiGroups:
+ - ""
+ resources:
+ - pods/exec
+ verbs:
+ - create
+# to get namespaces operator resources can run in
+- apiGroups:
+ - ""
+ resources:
+ - namespaces
+ verbs:
+ - get
+# to create sts/cronjob/pdb
+- apiGroups:
+ - apps
+ resources:
+ - deployments
+ - statefulsets
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+- apiGroups:
+ - batch
+ resources:
+ - cronjobs
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+- apiGroups:
+ - policy
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - delete
+ - get
+- apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - rolebindings
+ verbs:
+ - create
+ - delete
+ - get
+...
diff --git a/installer/examples/rbac/openshift-pgha/clusterrolebinding.yaml b/installer/examples/rbac/openshift-pgha/clusterrolebinding.yaml
new file mode 100644
index 00000000..44151781
--- /dev/null
+++ b/installer/examples/rbac/openshift-pgha/clusterrolebinding.yaml
@@ -0,0 +1,57 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+# We need the scc clusterrole to be able to
+# grants scc to sysdig service-accounts in
+# sysdig namespace.
+#
+# Starting from OCP 4.6 we already have
+# all the built-in clusteroles:
+#
+# system:openshift:scc:anyuid
+# system:openshift:scc:hostaccess
+# system:openshift:scc:hostmount
+# system:openshift:scc:hostnetwork
+# system:openshift:scc:nonroot
+# system:openshift:scc:privileged
+# system:openshift:scc:restricted
+#
+# According to:
+# https://github.com/draios/installer/blob/4d7b1886c4c91796a17c706eb85a20e6e25ba041/installer/pkg/installer/deploy.go#L1298-L1306
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-anyuid
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:anyuid
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-privileged
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:privileged
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift-pgha/role.yaml b/installer/examples/rbac/openshift-pgha/role.yaml
new file mode 100644
index 00000000..6b8912bf
--- /dev/null
+++ b/installer/examples/rbac/openshift-pgha/role.yaml
@@ -0,0 +1,96 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: sysdigcloud
+ name: installer
+rules:
+ - apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - create
+ - list
+ - patch
+ - update
+ - delete
+ - apiGroups:
+ - 'policy'
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - update
+ - get
+ - list
+ - patch
+ - apiGroups:
+ - '*'
+ resources:
+ - networkpolicies
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - cronjobs
+ - configmaps
+ - deployments
+ - deployments/scale
+ - daemonsets
+ - endpoints
+ - events
+ - jobs
+ - namespaces
+ - podtemplates
+ - podsecuritypolicies
+ - pods
+ - pods/log
+ - pods/exec
+ - pod/delete
+ - pod/status
+ - podpreset
+ - persistentvolumeclaims
+ - replicationcontrollers
+ - replicasets
+ - secrets
+ - services
+ - serviceaccounts
+ - statefulsets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - namespace
+ verbs:
+ - create
+ - get
+ - list
+ - update
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - roles
+ - rolebindings
+ verbs:
+ - create
+ - update
+ - delete
+ - get
+ - list
+...
diff --git a/installer/examples/rbac/openshift-pgha/rolebinding.yaml b/installer/examples/rbac/openshift-pgha/rolebinding.yaml
new file mode 100644
index 00000000..6ccd2581
--- /dev/null
+++ b/installer/examples/rbac/openshift-pgha/rolebinding.yaml
@@ -0,0 +1,15 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: installer
+ namespace: sysdigcloud
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift-pgha/sa.yaml b/installer/examples/rbac/openshift-pgha/sa.yaml
new file mode 100644
index 00000000..a59bb243
--- /dev/null
+++ b/installer/examples/rbac/openshift-pgha/sa.yaml
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift/clusterrole.yaml b/installer/examples/rbac/openshift/clusterrole.yaml
new file mode 100644
index 00000000..65dcbf14
--- /dev/null
+++ b/installer/examples/rbac/openshift/clusterrole.yaml
@@ -0,0 +1,55 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: installer
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - endpoints
+ - nodes
+ - persistentvolumes
+ - pods
+ - secrets
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - create
+ - update
+- apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - clusterrolebindings
+ - clusterroles
+ verbs:
+ - get
+ - list
+ - patch
+ - create
+ - update
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs:
+ - list
+ - watch
+ - get
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses/status
+ verbs:
+ - update
+...
diff --git a/installer/examples/rbac/openshift/clusterrolebinding.yaml b/installer/examples/rbac/openshift/clusterrolebinding.yaml
new file mode 100644
index 00000000..44151781
--- /dev/null
+++ b/installer/examples/rbac/openshift/clusterrolebinding.yaml
@@ -0,0 +1,57 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+# We need the scc clusterrole to be able to
+# grants scc to sysdig service-accounts in
+# sysdig namespace.
+#
+# Starting from OCP 4.6 we already have
+# all the built-in clusteroles:
+#
+# system:openshift:scc:anyuid
+# system:openshift:scc:hostaccess
+# system:openshift:scc:hostmount
+# system:openshift:scc:hostnetwork
+# system:openshift:scc:nonroot
+# system:openshift:scc:privileged
+# system:openshift:scc:restricted
+#
+# According to:
+# https://github.com/draios/installer/blob/4d7b1886c4c91796a17c706eb85a20e6e25ba041/installer/pkg/installer/deploy.go#L1298-L1306
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-anyuid
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:anyuid
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-scc-privileged
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:openshift:scc:privileged
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift/role.yaml b/installer/examples/rbac/openshift/role.yaml
new file mode 100644
index 00000000..6b8912bf
--- /dev/null
+++ b/installer/examples/rbac/openshift/role.yaml
@@ -0,0 +1,96 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: sysdigcloud
+ name: installer
+rules:
+ - apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - create
+ - list
+ - patch
+ - update
+ - delete
+ - apiGroups:
+ - 'policy'
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - update
+ - get
+ - list
+ - patch
+ - apiGroups:
+ - '*'
+ resources:
+ - networkpolicies
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - cronjobs
+ - configmaps
+ - deployments
+ - deployments/scale
+ - daemonsets
+ - endpoints
+ - events
+ - jobs
+ - namespaces
+ - podtemplates
+ - podsecuritypolicies
+ - pods
+ - pods/log
+ - pods/exec
+ - pod/delete
+ - pod/status
+ - podpreset
+ - persistentvolumeclaims
+ - replicationcontrollers
+ - replicasets
+ - secrets
+ - services
+ - serviceaccounts
+ - statefulsets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - patch
+ - delete
+ - apiGroups:
+ - '*'
+ resources:
+ - namespace
+ verbs:
+ - create
+ - get
+ - list
+ - update
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - roles
+ - rolebindings
+ verbs:
+ - create
+ - update
+ - delete
+ - get
+ - list
+...
diff --git a/installer/examples/rbac/openshift/rolebinding.yaml b/installer/examples/rbac/openshift/rolebinding.yaml
new file mode 100644
index 00000000..6ccd2581
--- /dev/null
+++ b/installer/examples/rbac/openshift/rolebinding.yaml
@@ -0,0 +1,15 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: installer
+ namespace: sysdigcloud
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: installer
+subjects:
+- kind: ServiceAccount
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/openshift/sa.yaml b/installer/examples/rbac/openshift/sa.yaml
new file mode 100644
index 00000000..a59bb243
--- /dev/null
+++ b/installer/examples/rbac/openshift/sa.yaml
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: installer
+ namespace: sysdigcloud
+...
diff --git a/installer/examples/rbac/readonly/clusterrole.yaml b/installer/examples/rbac/readonly/clusterrole.yaml
new file mode 100644
index 00000000..5107d58f
--- /dev/null
+++ b/installer/examples/rbac/readonly/clusterrole.yaml
@@ -0,0 +1,35 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: installer-readonly
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - clusterrolebindings
+ - clusterroles
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - extensions
+ resources:
+ - ingresses
+ verbs:
+ - list
+ - get
diff --git a/installer/examples/rbac/readonly/clusterrolebinding.yaml b/installer/examples/rbac/readonly/clusterrolebinding.yaml
new file mode 100644
index 00000000..518a8a42
--- /dev/null
+++ b/installer/examples/rbac/readonly/clusterrolebinding.yaml
@@ -0,0 +1,13 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: installer-readonly
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: installer-readonly
+subjects:
+- kind: ServiceAccount
+ name: installer-readonly
+ namespace: sysdigcloud
diff --git a/installer/examples/rbac/readonly/role.yaml b/installer/examples/rbac/readonly/role.yaml
new file mode 100644
index 00000000..4d195152
--- /dev/null
+++ b/installer/examples/rbac/readonly/role.yaml
@@ -0,0 +1,54 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: sysdigcloud
+ name: installer-readonly
+rules:
+ - apiGroups:
+ - 'extensions'
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - list
+ - apiGroups:
+ - '*'
+ resources:
+ - cronjobs
+ - configmaps
+ - deployments
+ - daemonsets
+ - jobs
+ - namespaces
+ - pods
+ - persistentvolumeclaims
+ - secrets
+ - services
+ - serviceaccounts
+ - statefulsets
+ verbs:
+ - get
+ - list
+# - apiGroups:
+# - '*'
+# resources:
+# - storageclasses
+# verbs:
+# - get
+# - list
+ - apiGroups:
+ - '*'
+ resources:
+ - namespace
+ verbs:
+ - get
+ - list
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - roles
+ - rolebindings
+ verbs:
+ - get
+ - list
diff --git a/installer/examples/rbac/readonly/rolebinding.yaml b/installer/examples/rbac/readonly/rolebinding.yaml
new file mode 100644
index 00000000..ce55ef7a
--- /dev/null
+++ b/installer/examples/rbac/readonly/rolebinding.yaml
@@ -0,0 +1,14 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: installer-readonly
+ namespace: sysdigcloud
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: installer-readonly
+subjects:
+- kind: ServiceAccount
+ name: installer-readonly
+ namespace: sysdigcloud
diff --git a/installer/examples/rbac/readonly/sa.yaml b/installer/examples/rbac/readonly/sa.yaml
new file mode 100644
index 00000000..e14c7c13
--- /dev/null
+++ b/installer/examples/rbac/readonly/sa.yaml
@@ -0,0 +1,6 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: installer-readonly
+ namespace: sysdigcloud
diff --git a/installer/examples/single-node/values.yaml b/installer/examples/single-node/values.yaml
new file mode 100644
index 00000000..a30e3477
--- /dev/null
+++ b/installer/examples/single-node/values.yaml
@@ -0,0 +1,53 @@
+# The instructions here should create the Sysdig Platform on a single node with 8 cores and 16 GB of RAM.
+size: small
+# Replace with quay.io pull secrets provided by the sales team.
+quaypullsecret:
+# Acceptable values here are aws|gke|none|hostPath. Change this to none and configure storageClassName if you want to use an existing storageClass
+storageClassProvisioner: hostPath
+# Uncomment the below to specify an existing storageClass, if not configured a storageClass is created with the configured storageClassProvisioner
+# storageClassName: sysdig
+elasticsearch:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - minikube
+sysdig:
+ mysql:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - minikube
+ postgresql:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - minikube
+ cassandra:
+ hostPathNodes:
+ # replace with the name section of kubectl get nodes
+ - minikube
+ # Replace with domain name the api should be served on.
+ dnsName:
+ admin:
+ username: pov@sysdig.com
+ # Replace with license provided by the sales team.
+ license:
+ # For PoC do not change the below
+ resources:
+ api:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ cassandra:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ collector:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ elasticsearch:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ worker:
+ requests:
+ cpu: 500m
+ memory: 1Gi
diff --git a/installer/install.sh b/installer/install.sh
new file mode 100755
index 00000000..4acfceca
--- /dev/null
+++ b/installer/install.sh
@@ -0,0 +1,386 @@
+#!/usr/bin/env bash
+
+set -euo pipefail
+
+# globals
+MINIMUM_CPUS=16
+MINIMUM_MEMORY_KB=31000000
+MINIMUM_DISK_IN_GB=59
+
+function logError() { echo "$@" 1>&2; }
+
+#log to file and stdout
+log_file="/var/log/sysdig-installer.log"
+exec &>> >(tee -a "$log_file")
+
+if [[ "$EUID" -ne 0 ]]; then
+ logError "This script needs to be run as root"
+ logError "Usage: sudo ./$0"
+ exit 1
+fi
+
+MINIKUBE_VERSION=v1.6.2
+KUBERNETES_VERSION=v1.16.0
+DOCKER_VERSION=18.06.3
+ROOT_LOCAL_PATH="/usr/bin"
+QUAYPULLSECRET="PLACEHOLDER"
+LICENSE="PLACEHOLDER"
+DNSNAME="PLACEHOLDER"
+AIRGAP_BUILD="false"
+AIRGAP_INSTALL="false"
+INSTALLER_IMAGE="quay.io/sysdig/installer:3.2.0-2"
+
+function writeValuesYaml() {
+ cat << EOM > values.yaml
+size: small
+quaypullsecret: $QUAYPULLSECRET
+apps: monitor secure agent
+storageClassProvisioner: hostPath
+elasticsearch:
+ hostPathNodes:
+ - minikube
+sysdig:
+ mysql:
+ hostPathNodes:
+ - minikube
+ postgresql:
+ hostPathNodes:
+ - minikube
+ cassandra:
+ hostPathNodes:
+ - minikube
+ dnsName: $DNSNAME
+ admin:
+ username: pov@sysdig.com
+ license: $LICENSE
+ resources:
+ api:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ cassandra:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ collector:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ elasticsearch:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ worker:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+EOM
+}
+
+function checkCPU() {
+ local -r cpus=$(grep -c processor /proc/cpuinfo)
+
+ if [[ $cpus -lt $MINIMUM_CPUS ]]; then
+ logError "The number of cpus '$cpus' is less than the required number of cpus: '$MINIMUM_CPUS'"
+ exit 1
+ fi
+
+ echo "Enough cpu ✓"
+}
+
+function checkMemory() {
+ local -r memory=$(grep MemTotal /proc/meminfo | awk '{print $2}')
+
+ if [[ $memory -lt $MINIMUM_MEMORY_KB ]]; then
+ logError "The amount of memory '$memory' is less than the required amount of memory in kilobytes '$MINIMUM_MEMORY_KB'"
+ exit 1
+ fi
+
+ echo "Enough memory ✓"
+}
+
+function checkDisk() {
+ local -r diskSizeHumanReadable=$(df -h /var | tail -n1 | awk '{print $2}')
+ local -r diskSize=${diskSizeHumanReadable%G}
+
+ if [[ $diskSize -lt $MINIMUM_DISK_IN_GB ]]; then
+ logError "The volume that holds the var directory needs a minimum of '$MINIMUM_DISK_IN_GB' but currently has '$diskSize'"
+ exit 1
+ fi
+
+ echo "Enough disk ✓"
+}
+
+function preFlight() {
+ echo "Running preFlight checks"
+ checkCPU
+ checkMemory
+ checkDisk
+}
+
+function askQuestions() {
+ if [[ "${AIRGAP_BUILD}" != "true" ]]; then
+ read -rp $'Provide quay pull secret: \n' QUAYPULLSECRET
+ printf "\n"
+ read -rp $'Provide sysdig license: \n' LICENSE
+ printf "\n"
+ read -rp $'Provide domain name, this domain name should resolve to this node on port 443 and 6443: \n' DNSNAME
+ printf "\n"
+ else
+ local -r quayPullSecret="${QUAYPULLSECRET}"
+ if [[ "$quayPullSecret" == "PLACEHOLDER" ]]; then
+ logError "-q|--quaypullsecret is needed for airgap build"
+ exit 1
+ fi
+ fi
+}
+
+function dockerLogin() {
+ local -r quayPullSecret=$QUAYPULLSECRET
+ if [[ "$quayPullSecret" != "PLACEHOLDER" ]]; then
+ local -r auth=$(echo "$quayPullSecret" | base64 --decode | jq -r '.auths."quay.io".auth' | base64 --decode)
+ local -r quay_username=${auth%:*}
+ local -r quay_password=${auth#*:}
+ docker login -u "$quay_username" -p "$quay_password" quay.io
+ else
+ logError "Please rerun the script and configure quay pull secret"
+ exit 1
+ fi
+}
+
+function installUbuntuDeps() {
+ apt-get remove -y docker docker-engine docker.io containerd runc > /dev/null 2>&1
+ apt-get update -qq
+ apt-get install -y apt-transport-https ca-certificates curl software-properties-common "linux-headers-$(uname -r)"
+ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+ apt-get update -qq
+ apt-get install -y --allow-unauthenticated docker-ce=${DOCKER_VERSION}~ce~3-0~ubuntu
+}
+
+function installDebianDeps() {
+ apt-get remove -y docker docker-engine docker.io containerd runc > /dev/null 2>&1
+ apt-get update -qq
+ apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common "linux-headers-$(uname -r)"
+ curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
+ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
+ apt-get update -qq
+ apt-get install -y --allow-unauthenticated docker-ce=${DOCKER_VERSION}~ce~3-0~debian
+}
+
+function installCentOSDeps() {
+ local -r version=$1
+ yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
+ yum -y update
+ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+ if [[ $version == 8 ]]; then
+ yum install -y yum-utils device-mapper-persistent-data lvm2 curl
+ else
+ yum install -y yum-utils device-mapper-persistent-data lvm2 curl
+ fi
+  # Copied from https://github.com/kubernetes/kops/blob/b92babeda277df27b05236d852b5c0dc0803ce5d/nodeup/pkg/model/docker.go#L758-L764
+ yum install -y http://vault.centos.org/7.6.1810/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm
+ yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.3.ce-3.el7.x86_64.rpm
+ yum install -y "kernel-devel-$(uname -r)"
+ systemctl enable docker
+ systemctl start docker
+}
+
+function disableFirewalld() {
+  echo "Disabling firewalld...."
+ systemctl stop firewalld
+ systemctl disable firewalld
+}
+
+function installMiniKube() {
+ curl -s -Lo minikube "https://storage.googleapis.com/minikube/releases/${MINIKUBE_VERSION}/minikube-linux-amd64"
+ chmod +x minikube
+ mv minikube "${ROOT_LOCAL_PATH}"
+}
+
+function installKubectl() {
+ curl -s -Lo kubectl "https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
+ chmod +x kubectl
+ mv kubectl "${ROOT_LOCAL_PATH}"
+}
+
+function installJq() {
+  curl -o jq -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
+ chmod +x jq
+ mv jq "${ROOT_LOCAL_PATH}"
+}
+
+function installDeps() {
+ set +e
+
+ cat << EOF > /etc/sysctl.d/k8s.conf
+ net.bridge.bridge-nf-call-ip6tables = 1
+ net.bridge.bridge-nf-call-iptables = 1
+EOF
+ modprobe br_netfilter
+ sysctl --system
+
+ source /etc/os-release
+ case $ID in
+ ubuntu)
+ installUbuntuDeps
+ if [[ ! $VERSION_CODENAME =~ ^(bionic|xenial)$ ]]; then
+ logError "ubuntu version: $VERSION_CODENAME is not supported"
+ exit 1
+ fi
+ ;;
+ debian)
+ installDebianDeps
+ if [[ ! $VERSION_CODENAME =~ ^(stretch|buster)$ ]]; then
+ logError "debian version: $VERSION_CODENAME is not supported"
+ exit 1
+ fi
+ ;;
+ centos | amzn)
+ if [[ $ID =~ ^(centos)$ ]] &&
+ [[ ! "$VERSION_ID" =~ ^(7|8) ]]; then
+ logError "$ID version: $VERSION_ID is not supported"
+ exit 1
+ fi
+ disableFirewalld
+ installCentOSDeps "$VERSION_ID"
+ ;;
+ *)
+ logError "unsupported platform $ID"
+ exit 1
+ ;;
+ esac
+ installJq
+ installMiniKube
+ installKubectl
+ set -e
+}
+
+function startDocker() {
+ systemctl enable docker
+ systemctl start docker
+ ip link set docker0 promisc on
+}
+
+function startMinikube() {
+ export MINIKUBE_HOME="/root"
+ export KUBECONFIG="/root/.kube/config"
+ minikube start --vm-driver=none --kubernetes-version=${KUBERNETES_VERSION}
+ systemctl enable kubelet
+ kubectl config use-context minikube
+ minikube update-context
+}
+
+function fixIptables() {
+ echo "Fixing iptables ..."
+ ### Install iptables rules because minikube locks out external access
+ iptables -I INPUT -t filter -p tcp --dport 443 -j ACCEPT
+ iptables -I INPUT -t filter -p tcp --dport 6443 -j ACCEPT
+ iptables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+}
+
+function pullImagesSysdigImages(){
+ #copy tests/resources to local
+ getSysdigImagesFromInstaller
+ #find images in resources
+ mapfile -t non_job_images < <(jq -r '.spec.template.spec.containers[]? | .image' \
+ resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ mapfile -t job_images < <(jq -r '.spec.jobTemplate.spec.template.spec.containers[]? | .image' \
+ resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ mapfile -t init_container_images < <(jq -r '.spec.template.spec.initContainers[]? | .image' \
+ resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ #collected images to images obj
+ local -a images=("${non_job_images[@]}")
+ images+=("${job_images[@]}")
+ images+=("${init_container_images[@]}")
+ #iterate and pull image if not present
+ for image in "${images[@]}"; do
+ if [[ -z $(docker images -q "$image") ]]; then
+      echo "Pulling $image"
+ docker pull "$image"
+ else
+ echo "$image is present"
+ fi
+ done
+ #clean up resources
+ rm -rf resources
+}
+
+function getSysdigImagesFromInstaller(){
+ #get resources from sysdig-chart/tests
+ docker create --name installer_image ${INSTALLER_IMAGE}
+ docker cp installer_image:/sysdig-chart/tests/resources .
+ docker rm installer_image
+}
+
+function runInstaller() {
+ if [[ "${AIRGAP_INSTALL}" != "true" ]]; then
+ dockerLogin
+ fi
+ if [[ "${AIRGAP_BUILD}" == "true" ]]; then
+ docker pull "${INSTALLER_IMAGE}"
+ pullImagesSysdigImages
+ else
+ writeValuesYaml
+ docker run --net=host \
+ -e KUBECONFIG=/root/.kube/config \
+ -v /root/.kube:/root/.kube:Z \
+ -v /root/.minikube:/root/.minikube:Z \
+ -v "$(pwd)":/manifests:Z \
+ "${INSTALLER_IMAGE}"
+ fi
+}
+
+function __main() {
+ preFlight
+ askQuestions
+ if [[ "${AIRGAP_INSTALL}" != "true" ]]; then
+ installDeps
+ startDocker
+ fi
+ #minikube needs to run to set the correct context & ip during airgap run
+ startMinikube
+ if [[ "${AIRGAP_INSTALL}" != "true" ]]; then
+ fixIptables
+ fi
+ runInstaller
+}
+
+while [[ $# -gt 0 ]]
+do
+arguments="$1"
+
+case "${arguments}" in
+ -a|--airgap-build)
+ AIRGAP_BUILD="true"
+ LICENSE="installer.airgap.license"
+ DNSNAME="installer.airgap.dnsname"
+ shift # past argument
+ ;;
+ -i|--airgap-install)
+ AIRGAP_INSTALL="true"
+ LICENSE="installer.airgap.license"
+ DNSNAME="installer.airgap.dnsname"
+ shift # past argument
+ ;;
+ -q|--quaypullsecret)
+ QUAYPULLSECRET="$2"
+ shift # past argument
+ shift # past value
+ ;;
+ -h|--help)
+ echo "Help..."
+    echo "-a|--airgap-build  build the airgap artifacts (pre-pulls all images)"
+    echo "-i|--airgap-install  run the install in airgap mode"
+    echo "-q|--quaypullsecret <secret>  quay pull secret (required for airgap build)"
+ shift # past argument
+ exit 0
+ ;;
+  *) # unknown option
+    logError "unknown arg $1"
+    exit 1
+ ;;
+esac
+done
+
+__main
diff --git a/installer/single-node/README.md b/installer/single-node/README.md
new file mode 100644
index 00000000..87770056
--- /dev/null
+++ b/installer/single-node/README.md
@@ -0,0 +1,78 @@
+# Single node POV installer
+
+This script installs docker, minikube, jq, curl, and the other dependencies
+required to run the Sysdig Platform. After installing all dependencies, the
+script creates a values.yaml and runs the installer using that file.
+
+## Download Installer
+The single-node script is integrated into the installer. Download or copy the installer binary to get the single-node installer script.
+
+Running `installer single-node` creates an install.sh file in the current working directory.
+
+```bash
+sudo su
+#add execute permissions to the installer binary
+chmod u+x installer-linux-amd64
+#installer needs to be in PATH
+cp installer-linux-amd64 /usr/bin/installer
+#get single node installer script
+installer single-node
+```
+
+## Usage
+
+```bash
+sudo ./install.sh
+```
+
+## Help
+
+```bash
+sudo ./install.sh -h
+#prints help
+Help...
+-a | --airgap-builder to specify airgap builder
+-i | --airgap-install to run as airgap install mode
+-r | --run-installer to run the installer alone
+-q | --quaypullsecret followed by quaysecret to specify airgap builder
+-d | --delete-sysdig deletes sysdig namespace, persistent volumes and data from disk
+```
+
+This will prompt for the quay pull secret, Sysdig license, and domain name (in EC2
+this is the public hostname of the instance). It will install the dependencies,
+run the installer, and create a Sysdig platform. It also logs everything you
+see in your terminal to `/var/log/sysdig-installer.log`, so this can be used
+for debugging a failed install.
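+
+For air-gapped hosts the flags above are typically combined into a two-phase
+flow; a hedged sketch (flag behaviour as described by the help text, and the
+secret value is a placeholder):
+
+```bash
+# Phase 1: on a host with internet access, pre-pull the installer and Sysdig
+# images so they can be transferred to the air-gapped machine.
+sudo ./install.sh -a -q '<your-quay-pull-secret>'
+
+# Phase 2: on the air-gapped host, run the install itself.
+sudo ./install.sh -i
+```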
+
+## Requirements
+
+- An instance with at least 16 CPU cores, 32GB of RAM and 300GB of disk space.
+- Ports 443 and 6443 open to inbound traffic (in AWS this is done with security
+groups; see the example below)
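+
+A hedged example of opening those ports with the AWS CLI, assuming an AWS deployment; the security group ID below is a placeholder for your instance's security group:
+
+```bash
+# allow inbound HTTPS (UI/API) and the kubernetes API server port
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6443 --cidr 0.0.0.0/0
+```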
+
+## Status
+
+Tested on:
+- ubuntu bionic
+
+Should work fine on:
+- amazon linux
+- centos 7
+- debian buster
+- debian stretch
+- ubuntu xenial
+
+The script will not work on any OS not in the lists above.
+
+## Note
+
+You need to run `kubectl` as root on the host.
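+
+For example, to check the platform pods after the install completes (the `sysdig` namespace is the one the installer creates):
+
+```bash
+sudo kubectl get pods --namespace sysdig
+```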
+
+## Future improvements
+
+- Host the script in a public location so it can be fetched with `curl` and piped to `sudo bash`.
+
+# Airgapped pov installer (VMDK images)
+
+The VMDK image distribution was retired in May 2022.
diff --git a/installer/single-node/install.sh b/installer/single-node/install.sh
new file mode 100755
index 00000000..d82429da
--- /dev/null
+++ b/installer/single-node/install.sh
@@ -0,0 +1,587 @@
+#!/usr/bin/env bash
+
+set -euox pipefail
+
+# globals
+MINIMUM_CPUS=16
+MINIMUM_MEMORY_KB=31000000
+MINIMUM_DISK_IN_GB=59
+ADDITIONAL_IMAGES=(
+ "sysdig/falco_rules_installer:latest"
+)
+
+function logError() { echo "$@" 1>&2; }
+
+#log to file and stdout
+log_file="/var/log/sysdig-installer.log"
+exec &>> >(tee -a "$log_file")
+
+if [[ "$EUID" -ne 0 ]]; then
+ logError "This script needs to be run as root"
+ logError "Usage: sudo ./$0"
+ exit 1
+fi
+
+MINIKUBE_VERSION=v1.6.2
+KUBERNETES_VERSION=v1.16.0
+DOCKER_VERSION=18.06.3
+ROOT_LOCAL_PATH="/usr/bin"
+QUAYPULLSECRET="PLACEHOLDER"
+LICENSE="PLACEHOLDER"
+DNSNAME="PLACEHOLDER"
+AIRGAP_BUILD="false"
+AIRGAP_INSTALL="false"
+RUN_INSTALLER="false"
+IP_ADDRESS="PLACEHOLDER"
+GATEWAY="PLACEHOLDER"
+DELETE_SYSDIG="false"
+INSTALLER_BINARY="installer"
+
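+#write a values.yaml sized for a single node: hostPath storage on the minikube node and reduced resource requests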
+function writeValuesYaml() {
+ cat << EOM > values.yaml
+size: small
+quaypullsecret: $QUAYPULLSECRET
+apps: monitor secure agent
+storageClassProvisioner: hostPath
+elasticsearch:
+ hostPathNodes:
+ - minikube
+hostPathCustomPaths:
+ cassandra: /var/lib/cassandra
+ elasticsearch: /var/lib/elasticsearch
+ postgresql: /var/lib/postgresql/data/pgdata
+sysdig:
+ postgresql:
+ hostPathNodes:
+ - minikube
+ cassandra:
+ jvmOptions: -Xmx500m -Xms500m
+ hostPathNodes:
+ - minikube
+ dnsName: $DNSNAME
+ admin:
+ username: pov@sysdig.com
+ license: $LICENSE
+ resources:
+ api:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ apiNginx:
+ requests:
+ cpu: 50m
+ memory: 100Mi
+ apiEmailRenderer:
+ requests:
+ cpu: 50m
+ memory: 100Mi
+ cassandra:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ collector:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ elasticsearch:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ worker:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ anchore-catalog:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ anchore-policy-engine:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ anchore-worker:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ scanning-api:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ scanningalertmgr:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ scanning-retention-mgr:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ secure-prometheus:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ netsec-api:
+ requests:
+ cpu: 300m
+ memory: 500Mi
+ netsec-ingest:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ policy-advisor:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ scanning-reporting-worker:
+ requests:
+ cpu: 500m
+ memory: 500Mi
+EOM
+
+#append airgap config to values.yaml - sets up feeds db & slim agent
+if [[ "$AIRGAP_INSTALL" == "true" ]]; then
+ cat << EOM >> values.yaml
+ secure:
+ scanning:
+ feedsEnabled: true
+agent:
+ useSlim: true
+EOM
+fi
+
+}
+
+function checkCPU() {
+ local -r cpus=$(grep -c processor /proc/cpuinfo)
+
+ if [[ $cpus -lt $MINIMUM_CPUS ]]; then
+ logError "The number of cpus '$cpus' is less than the required number of cpus: '$MINIMUM_CPUS'"
+ exit 1
+ fi
+
+ echo "Enough cpu ✓"
+}
+
+function checkMemory() {
+ local -r memory=$(grep MemTotal /proc/meminfo | awk '{print $2}')
+
+ if [[ $memory -lt $MINIMUM_MEMORY_KB ]]; then
+ logError "The amount of memory '$memory' is less than the required amount of memory in kilobytes '$MINIMUM_MEMORY_KB'"
+ exit 1
+ fi
+
+ echo "Enough memory ✓"
+}
+
+function checkDisk() {
+ local -r diskSizeHumanReadable=$(df -h /var | tail -n1 | awk '{print $2}')
+ local -r diskSize=${diskSizeHumanReadable%G}
+
+ if [[ $diskSize -lt $MINIMUM_DISK_IN_GB ]]; then
+ logError "The volume that holds the var directory needs a minimum of '$MINIMUM_DISK_IN_GB' but currently has '$diskSize'"
+ exit 1
+ fi
+
+ echo "Enough disk ✓"
+}
+
+function preFlight() {
+ echo "Running preFlight checks"
+ checkCPU
+ checkMemory
+ checkDisk
+}
+
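+#prompt for the quay pull secret, license and domain name; airgap installs also ask for a static ip and gateway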
+function askQuestions() {
+ if [[ "${AIRGAP_BUILD}" != "true" ]]; then
+ read -rp $'Provide quay pull secret: \n' QUAYPULLSECRET
+ printf "\n"
+ read -rp $'Provide sysdig license: \n' LICENSE
+ printf "\n"
+ read -rp $'Provide domain name, this domain name should resolve to this node on port 443 and 6443: \n' DNSNAME
+ printf "\n"
+ if [[ "${AIRGAP_INSTALL}" == "true" ]]; then
+ if systemctl is-active --quiet sysdig-networking; then
+ echo "skipping static ip section. sysdig-networking service is active"
+ else
+ read -rp $'Provide a static ip with mask (eg. 192.168.100.10/24) for this instance: \n' IP_ADDRESS
+ printf "\n"
+ read -rp $'Provide gateway address (eg. 192.168.100.254): \n' GATEWAY
+ printf "\n"
+ fi
+ fi
+ else
+ local -r quayPullSecret="${QUAYPULLSECRET}"
+ if [[ "$quayPullSecret" == "PLACEHOLDER" ]]; then
+ logError "-q|--quaypullsecret is needed for airgap build"
+ exit 1
+ fi
+ fi
+}
+
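+#decode the quay pull secret (a base64 dockerconfigjson) into a username/password and log docker in to quay.io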
+function dockerLogin() {
+ local -r quayPullSecret=$QUAYPULLSECRET
+ if [[ "$quayPullSecret" != "PLACEHOLDER" ]]; then
+ local -r auth=$(echo "$quayPullSecret" | base64 --decode | jq -r '.auths."quay.io".auth' | base64 --decode)
+ local -r quay_username=${auth%:*}
+ local -r quay_password=${auth#*:}
+ docker login -u "$quay_username" -p "$quay_password" quay.io
+ else
+ logError "Please rerun the script and configure quay pull secret"
+ exit 1
+ fi
+}
+
+function installUbuntuDeps() {
+ apt-get remove -y docker docker-engine docker.io containerd runc > /dev/null 2>&1
+ apt-get update -qq
+ apt-get install -y apt-transport-https ca-certificates curl software-properties-common "linux-headers-$(uname -r)"
+ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+ apt-get update -qq
+ apt-get install -y --allow-unauthenticated docker-ce=${DOCKER_VERSION}~ce~3-0~ubuntu
+}
+
+function installDebianDeps() {
+ apt-get remove -y docker docker-engine docker.io containerd runc > /dev/null 2>&1
+ apt-get update -qq
+ apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common "linux-headers-$(uname -r)"
+ curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
+ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
+ apt-get update -qq
+ apt-get install -y --allow-unauthenticated docker-ce=${DOCKER_VERSION}~ce~3-0~debian
+}
+
+function installCentOSDeps() {
+ yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
+ yum -y update
+ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+ yum install -y yum-utils device-mapper-persistent-data lvm2 curl
+ # Copied from https://github.com/kubernetes/kops/blob/b92babeda277df27b05236d852b5c0dc0803ce5d/nodeup/pkg/model/docker.go#L758-L764
+ yum install -y http://vault.centos.org/7.6.1810/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm
+ yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.3.ce-3.el7.x86_64.rpm
+ yum install -y kernel-devel kernel-headers
+
+}
+
+function installRhelOSDeps() {
+ yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
+ yum -y update
+ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+ yum install -y yum-utils device-mapper-persistent-data lvm2 curl
+ # Copied from https://github.com/kubernetes/kops/blob/b92babeda277df27b05236d852b5c0dc0803ce5d/nodeup/pkg/model/docker.go#L758-L764
+ yum install -y http://vault.centos.org/7.6.1810/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm
+ yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.3.ce-3.el7.x86_64.rpm
+}
+
+function disableFirewalld() {
+ echo "Disabling firewald...."
+ systemctl stop firewalld
+ systemctl disable firewalld
+}
+
+function installMiniKube() {
+ curl -s -Lo minikube "https://storage.googleapis.com/minikube/releases/${MINIKUBE_VERSION}/minikube-linux-amd64"
+ chmod +x minikube
+ mv minikube "${ROOT_LOCAL_PATH}"
+}
+
+function installKubectl() {
+ curl -s -Lo kubectl "https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
+ chmod +x kubectl
+ mv kubectl "${ROOT_LOCAL_PATH}"
+}
+
+function installJq() {
+ curl -o jq -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
+ chmod +x jq
+ mv jq "${ROOT_LOCAL_PATH}"
+}
+
+function installDeps() {
+ set +e
+
+ cat << EOF > /etc/sysctl.d/k8s.conf
+ net.bridge.bridge-nf-call-ip6tables = 1
+ net.bridge.bridge-nf-call-iptables = 1
+ net.ipv4.ip_forward = 1
+EOF
+ modprobe br_netfilter
+ swapoff -a
+ systemctl mask '*.swap'
+ sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
+ sysctl --system
+
+ source /etc/os-release
+ case $ID in
+ ubuntu)
+ installUbuntuDeps
+ if [[ ! $VERSION_CODENAME =~ ^(bionic|xenial)$ ]]; then
+ logError "ubuntu version: $VERSION_CODENAME is not supported"
+ exit 1
+ fi
+ ;;
+ debian)
+ installDebianDeps
+ if [[ ! $VERSION_CODENAME =~ ^(stretch|buster)$ ]]; then
+ logError "debian version: $VERSION_CODENAME is not supported"
+ exit 1
+ fi
+ ;;
+ centos | amzn)
+ if [[ $ID =~ ^(centos)$ ]] &&
+ [[ ! "$VERSION_ID" =~ ^(7|8) ]]; then
+ logError "$ID version: $VERSION_ID is not supported"
+ exit 1
+ fi
+ disableFirewalld
+ installCentOSDeps "$VERSION_ID"
+ ;;
+ rhel)
+ if [[ $ID =~ ^(rhel)$ ]] &&
+ [[ ! "$VERSION_ID" =~ ^(7) ]]; then
+ echo "$ID version: $VERSION_ID is not supported"
+ exit 1
+ fi
+ disableFirewalld
+ installRhelOSDeps "$VERSION_ID"
+ ;;
+ *)
+ logError "unsupported platform $ID"
+ exit 1
+ ;;
+ esac
+ startDocker
+ installJq
+ installMiniKube
+ installKubectl
+ setSystemctlVmMaxMapCount
+ writeEtcHosts
+
+ set -e
+}
+
+function writeEtcHosts() {
+ if ! grep -q "127.0.0.1 ${DNSNAME}" /etc/hosts; then
+ #for sni agents to connect to collector via 127.0.0.1
+ echo -e "\n#setting hostname for agents to connect" >> /etc/hosts
+ echo -e "127.0.0.1 ${DNSNAME}" >> /etc/hosts
+ fi
+}
+
+function setSystemctlVmMaxMapCount() {
+ #set for running ElasticSearch as non-root
+ VM_MAX_MAP_COUNT=${VM_MAX_MAP_COUNT:-262144}
+ readonly VM_MAX_MAP_COUNT
+ sysctl -w vm.max_map_count="${VM_MAX_MAP_COUNT}" | tee -a /etc/sysctl.conf
+}
+
+function startDocker() {
+ systemctl enable docker
+ systemctl start docker
+}
+
+#Workaround for a minikube bug: keep docker0 in promiscuous mode via a systemd oneshot unit
+function setDocker0Promisc() {
+ mkdir -p /usr/lib/systemd/system/
+ cat << EOF > /usr/lib/systemd/system/docker0-promisc.service
+[Unit]
+Description=Setup promisc on docker0 interface
+Wants=docker.service
+After=docker.service
+[Service]
+Type=oneshot
+ExecStart=/sbin/ip link set docker0 promisc on
+RemainAfterExit=true
+StandardOutput=journal
+[Install]
+WantedBy=multi-user.target
+EOF
+ systemctl enable docker0-promisc
+ systemctl start docker0-promisc
+}
+
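+#write a helper script that brings the interface up with the user-provided static ip and default gateway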
+function writeStaticIpScript(){
+ local -r interface=$1
+ cat << EOF > /usr/bin/setup-sysdig-networking.sh
+#!/bin/bash
+
+/sbin/ip link set ${interface} up
+/sbin/ip address add ${IP_ADDRESS} dev ${interface}
+/sbin/route add default gw ${GATEWAY} ${interface}
+EOF
+ chmod 755 /usr/bin/setup-sysdig-networking.sh
+}
+
+function setupSystemdUnit(){
+ local -r interface=$1
+ cat << EOF > /usr/lib/systemd/system/sysdig-networking.service
+[Unit]
+Description=Setup sysdig networking
+After=network.service
+Wants=network.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/setup-sysdig-networking.sh
+ExecStop=/sbin/ip link set ${interface} down
+RemainAfterExit=true
+StandardOutput=journal
+
+[Install]
+WantedBy=multi-user.target
+EOF
+ systemctl enable sysdig-networking
+ systemctl start sysdig-networking
+}
+
+function setupStaticIp(){
+ local -r interface=$(grep -v -E "veth|lo|docker0" /proc/net/dev | tail -n+3 | cut -d ":" -f1)
+ writeStaticIpScript "${interface}"
+ setupSystemdUnit "${interface}"
+}
+
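+#start minikube with the none driver so the host itself acts as the single kubernetes node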
+function startMinikube() {
+ export MINIKUBE_HOME="/root"
+ export KUBECONFIG="/root/.kube/config"
+ minikube start --vm-driver=none --kubernetes-version=${KUBERNETES_VERSION}
+ systemctl enable kubelet
+ kubectl config use-context minikube
+ minikube update-context
+}
+
+function fixIptables() {
+ echo "Fixing iptables ..."
+ ### Install iptables rules because minikube locks out external access
+ iptables -I INPUT -t filter -p tcp --dport 443 -j ACCEPT -w 60
+ iptables -I INPUT -t filter -p tcp --dport 6443 -j ACCEPT -w 60
+ iptables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -w 60
+}
+
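+#collect every container image referenced by the installer resources and pull any that are not already cached locally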
+function pullImagesSysdigImages() {
+ #find images in resources
+ mapfile -t non_job_images < <(jq -r '.spec.template.spec.containers[]? | .image' \
+ /opt/sysdig-chart/resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ mapfile -t job_images < <(jq -r '.spec.jobTemplate.spec.template.spec.containers[]? | .image' \
+ /opt/sysdig-chart/resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ mapfile -t init_container_images < <(jq -r '.spec.template.spec.initContainers[]? | .image' \
+ /opt/sysdig-chart/resources/*/sysdig.json 2> /dev/null | sort -u | grep 'quay\|docker.io')
+ #collected images to images obj
+ local -a images=("${non_job_images[@]}")
+ images+=("${ADDITIONAL_IMAGES[@]}")
+ images+=("${job_images[@]}")
+ images+=("${init_container_images[@]}")
+ #iterate and pull image if not present
+ for image in "${images[@]}"; do
+ if [[ -z $(docker images -q "$image") ]]; then
+ logger info "Pulling $image"
+ docker pull "$image" || true
+ else
+ echo "$image is present"
+ fi
+ done
+ #clean up resources
+ rm -rf /opt/sysdig-chart
+}
+
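+#airgap build: log in to quay and pre-pull images; otherwise write values.yaml and run the installer binary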
+function runInstaller() {
+ if [[ "${AIRGAP_BUILD}" == "true" ]]; then
+ dockerLogin
+ pullImagesSysdigImages
+ yum install -y python-pip
+ pip install yq
+ else
+ writeValuesYaml
+ ${INSTALLER_BINARY} deploy
+ fi
+}
+
+function __main() {
+
+ if [[ "${DELETE_SYSDIG}" == "true" ]]; then
+ data_directories=$(kubectl get pv -o json | jq -r '.items[].spec.hostPath.path')
+ kubectl delete ns sysdig || true
+ kubectl delete ns agent || true
+ kubectl delete pv --all || true
+ for data_directory in ${data_directories}
+ do
+ echo "deleting ${data_directory}"
+ rm -rf "${data_directory}"
+ done
+ exit 0
+ fi
+
+ if [[ "${RUN_INSTALLER}" == "true" ]]; then
+ #single node installer just runs installer and returns early
+ ${INSTALLER_BINARY} deploy
+ exit 0
+ fi
+ preFlight
+ askQuestions
+ if [[ "${AIRGAP_INSTALL}" != "true" ]]; then
+ installDeps
+ setDocker0Promisc
+ fi
+ #use the user-provided answers to set up a static ip
+ if [[ "${AIRGAP_INSTALL}" == "true" ]]; then
+ if systemctl is-active --quiet sysdig-networking; then
+ echo "sysdig-networking is active skipping setupStaticIp"
+ else
+ setupStaticIp
+ fi
+ fi
+ #minikube needs to run to set the correct context & ip during airgap run
+ startMinikube
+ if [[ "${AIRGAP_INSTALL}" != "true" ]]; then
+ fixIptables
+ fi
+ runInstaller
+}
+
+while [[ $# -gt 0 ]]; do
+ arguments="$1"
+
+ case "${arguments}" in
+ -a | --airgap-build)
+ AIRGAP_BUILD="true"
+ LICENSE="installer.airgap.license"
+ DNSNAME="installer.airgap.dnsname"
+ shift # past argument
+ ;;
+ -i | --airgap-install)
+ AIRGAP_INSTALL="true"
+ LICENSE="installer.airgap.license"
+ DNSNAME="installer.airgap.dnsname"
+ shift # past argument
+ ;;
+ -r | --run-installer)
+ RUN_INSTALLER="true"
+ shift # past value
+ ;;
+ -q | --quaypullsecret)
+ QUAYPULLSECRET="$2"
+ shift # past argument
+ shift # past value
+ ;;
+ -d | --delete-sysdig)
+ DELETE_SYSDIG="true"
+ shift # past value
+ ;;
+ -h | --help)
+ echo "Help..."
+ echo "-a | --airgap-builder to specify airgap builder"
+ echo "-i | --airgap-install to run as airgap install mode"
+ echo "-r | --run-installer to run the installer alone"
+ echo "-q | --quaypullsecret followed by quaysecret to specify airgap builder"
+ echo "-d | --delete-sysdig deletes sysdig namespace, persistent volumes and data from disk"
+ shift # past argument
+ exit 0
+ ;;
+ *) # unknown option
+ logError "unknown arg $1"
+ exit 1
+ ;;
+ esac
+done
+
+__main
diff --git a/installer/single-node/vmx_template.vmx b/installer/single-node/vmx_template.vmx
new file mode 100644
index 00000000..92938222
--- /dev/null
+++ b/installer/single-node/vmx_template.vmx
@@ -0,0 +1,79 @@
+.encoding = "UTF-8"
+config.version = "8"
+virtualHW.version = "14"
+vmci0.present = "TRUE"
+floppy0.present = "FALSE"
+numvcpus = "16"
+memSize = "32768"
+bios.bootRetry.delay = "10"
+powerType.suspend = "soft"
+tools.upgrade.policy = "manual"
+sched.cpu.units = "mhz"
+sched.cpu.affinity = "all"
+vm.createDate = "1580953556813202"
+ethernet0.virtualDev = "vmxnet3"
+ethernet0.networkName = "VM Network"
+ethernet0.addressType = "generated"
+ethernet0.wakeOnPcktRcv = "FALSE"
+ethernet0.uptCompatibility = "TRUE"
+ethernet0.present = "TRUE"
+displayName = "sysdig-pov-image"
+guestOS = "debian9-64"
+toolScripts.afterPowerOn = "TRUE"
+toolScripts.afterResume = "TRUE"
+toolScripts.beforeSuspend = "TRUE"
+toolScripts.beforePowerOff = "TRUE"
+tools.syncTime = "FALSE"
+uuid.bios = "56 4d 1d cb 98 dd 56 88-5e de 80 c0 94 c0 81 8e"
+uuid.location = "56 4d 1d cb 98 dd 56 88-5e de 80 c0 94 c0 81 8e"
+vc.uuid = "52 5c 60 3a b0 fe 00 bc-8d f5 a5 74 d8 33 ba 04"
+sched.cpu.min = "0"
+sched.cpu.shares = "normal"
+sched.mem.min = "0"
+sched.mem.minSize = "0"
+sched.mem.shares = "normal"
+ethernet0.generatedAddress = "00:0c:29:c0:81:8e"
+vmci0.id = "-1799323250"
+cleanShutdown = "FALSE"
+nvme0.present = "TRUE"
+nvme0:0.fileName = "/tmp/ovf/sysdig-pov-image.vmdk"
+nvme0:0.present = "TRUE"
+sched.nvme0:0.shares = "normal"
+sched.nvme0:0.throughputCap = "off"
+numa.autosize.cookie = "80001"
+numa.autosize.vcpu.maxPerVirtualNode = "8"
+tools.guest.desktop.autolock = "FALSE"
+pciBridge0.present = "TRUE"
+svga.present = "TRUE"
+pciBridge4.present = "TRUE"
+pciBridge4.virtualDev = "pcieRootPort"
+pciBridge4.functions = "8"
+pciBridge5.present = "TRUE"
+pciBridge5.virtualDev = "pcieRootPort"
+pciBridge5.functions = "8"
+pciBridge6.present = "TRUE"
+pciBridge6.virtualDev = "pcieRootPort"
+pciBridge6.functions = "8"
+pciBridge7.present = "TRUE"
+pciBridge7.virtualDev = "pcieRootPort"
+pciBridge7.functions = "8"
+hpet0.present = "TRUE"
+RemoteDisplay.maxConnections = "-1"
+sched.cpu.latencySensitivity = "normal"
+svga.autodetect = "TRUE"
+pciBridge0.pciSlotNumber = "17"
+pciBridge4.pciSlotNumber = "21"
+pciBridge5.pciSlotNumber = "22"
+pciBridge6.pciSlotNumber = "23"
+pciBridge7.pciSlotNumber = "24"
+ethernet0.pciSlotNumber = "160"
+vmci0.pciSlotNumber = "32"
+sata1.pciSlotNumber = "-1"
+ethernet0.generatedAddressOffset = "0"
+monitor.phys_bits_used = "43"
+vmotion.checkpointFBSize = "4194304"
+vmotion.checkpointSVGAPrimarySize = "16777216"
+softPowerOff = "FALSE"
+svga.guestBackedPrimaryAware = "TRUE"
+nvme0.pciSlotNumber = "192"
+nvme0:0.redo = ""
diff --git a/installer/values.yaml b/installer/values.yaml
new file mode 100644
index 00000000..61466c00
--- /dev/null
+++ b/installer/values.yaml
@@ -0,0 +1,35 @@
+#The schema version of this config. It follows semver
+#and maintains semver guarantees around versioning.
+schema_version: 1.0.0
+#Size of the cluster. Takes [ small | medium | large ]
+#This defines CPU & Memory & Disk & Replicas
+#Replicas can be overwritten for medium , large in advanced config section
+size: medium
+#Set the quay.io pull secret
+quaypullsecret:
+#supports aws | gke | ibm | hostPath | local
+storageClassProvisioner: aws # TODO: this would be better as cloudProvisioner | hostPath | local, where cloudProvisioner differs to cloudProvider.name where used
+#Sysdig application config
+sysdig:
+# Sysdig Platform super admin user. This will be used for initial login to
+# the web interface. Make sure this is a valid email address that you can
+# receive emails at.
+ admin:
+ username:
+ #Set Sysdig license
+ license:
+ dnsName:
+ #supports hostnetwork | loadbalancer | nodeport
+ ingressNetworking: hostnetwork
+ ingressClassName: haproxy
+ # Uncomment the following two lines to enable Sysdig Platform Audit
+ #platformAuditTrail:
+ # enabled: true
+ # Uncomment the following lines to enable origin IP in Sysdig Platform Audit
+ #secure:
+ # events:
+ # audit:
+ # config:
+ # store:
+ # ip:
+ # enabled: true
diff --git a/installer/vmx_template.vmx b/installer/vmx_template.vmx
new file mode 100644
index 00000000..92938222
--- /dev/null
+++ b/installer/vmx_template.vmx
@@ -0,0 +1,79 @@
+.encoding = "UTF-8"
+config.version = "8"
+virtualHW.version = "14"
+vmci0.present = "TRUE"
+floppy0.present = "FALSE"
+numvcpus = "16"
+memSize = "32768"
+bios.bootRetry.delay = "10"
+powerType.suspend = "soft"
+tools.upgrade.policy = "manual"
+sched.cpu.units = "mhz"
+sched.cpu.affinity = "all"
+vm.createDate = "1580953556813202"
+ethernet0.virtualDev = "vmxnet3"
+ethernet0.networkName = "VM Network"
+ethernet0.addressType = "generated"
+ethernet0.wakeOnPcktRcv = "FALSE"
+ethernet0.uptCompatibility = "TRUE"
+ethernet0.present = "TRUE"
+displayName = "sysdig-pov-image"
+guestOS = "debian9-64"
+toolScripts.afterPowerOn = "TRUE"
+toolScripts.afterResume = "TRUE"
+toolScripts.beforeSuspend = "TRUE"
+toolScripts.beforePowerOff = "TRUE"
+tools.syncTime = "FALSE"
+uuid.bios = "56 4d 1d cb 98 dd 56 88-5e de 80 c0 94 c0 81 8e"
+uuid.location = "56 4d 1d cb 98 dd 56 88-5e de 80 c0 94 c0 81 8e"
+vc.uuid = "52 5c 60 3a b0 fe 00 bc-8d f5 a5 74 d8 33 ba 04"
+sched.cpu.min = "0"
+sched.cpu.shares = "normal"
+sched.mem.min = "0"
+sched.mem.minSize = "0"
+sched.mem.shares = "normal"
+ethernet0.generatedAddress = "00:0c:29:c0:81:8e"
+vmci0.id = "-1799323250"
+cleanShutdown = "FALSE"
+nvme0.present = "TRUE"
+nvme0:0.fileName = "/tmp/ovf/sysdig-pov-image.vmdk"
+nvme0:0.present = "TRUE"
+sched.nvme0:0.shares = "normal"
+sched.nvme0:0.throughputCap = "off"
+numa.autosize.cookie = "80001"
+numa.autosize.vcpu.maxPerVirtualNode = "8"
+tools.guest.desktop.autolock = "FALSE"
+pciBridge0.present = "TRUE"
+svga.present = "TRUE"
+pciBridge4.present = "TRUE"
+pciBridge4.virtualDev = "pcieRootPort"
+pciBridge4.functions = "8"
+pciBridge5.present = "TRUE"
+pciBridge5.virtualDev = "pcieRootPort"
+pciBridge5.functions = "8"
+pciBridge6.present = "TRUE"
+pciBridge6.virtualDev = "pcieRootPort"
+pciBridge6.functions = "8"
+pciBridge7.present = "TRUE"
+pciBridge7.virtualDev = "pcieRootPort"
+pciBridge7.functions = "8"
+hpet0.present = "TRUE"
+RemoteDisplay.maxConnections = "-1"
+sched.cpu.latencySensitivity = "normal"
+svga.autodetect = "TRUE"
+pciBridge0.pciSlotNumber = "17"
+pciBridge4.pciSlotNumber = "21"
+pciBridge5.pciSlotNumber = "22"
+pciBridge6.pciSlotNumber = "23"
+pciBridge7.pciSlotNumber = "24"
+ethernet0.pciSlotNumber = "160"
+vmci0.pciSlotNumber = "32"
+sata1.pciSlotNumber = "-1"
+ethernet0.generatedAddressOffset = "0"
+monitor.phys_bits_used = "43"
+vmotion.checkpointFBSize = "4194304"
+vmotion.checkpointSVGAPrimarySize = "16777216"
+softPowerOff = "FALSE"
+svga.guestBackedPrimaryAware = "TRUE"
+nvme0.pciSlotNumber = "192"
+nvme0:0.redo = ""