Merge pull request #195 from octarinesec/CNS-3231-documentation-upgrade
reordering documentation
meori authored Oct 23, 2023
2 parents dfa2e97 + 268f2dc commit 24413a5
Showing 11 changed files with 577 additions and 506 deletions.
462 changes: 14 additions & 448 deletions README.md


44 changes: 44 additions & 0 deletions docs/AgentDeployment.md
## Agent Deployment

### 1. Apply the Carbon Black Container API Token Secret

```sh
kubectl create secret generic cbcontainers-access-token \
  --namespace cbcontainers-dataplane \
  --from-literal=accessToken={API_Secret_Key}/{API_ID}
```
### TODO: extra secret
### 2. Apply the Carbon Black Container Agent Custom Resource

The operator implements controllers for the Carbon Black Container custom resource definitions.

[Full Custom Resource Definitions Documentation](crds.md)

#### 2.1 Apply the Carbon Black Container Agent CR

<u>cbcontainersagents.operator.containers.carbonblack.io</u>

This is the CR you'll need to deploy in order to trigger the operator to deploy the data plane components.

```yaml
apiVersion: operator.containers.carbonblack.io/v1
kind: CBContainersAgent
metadata:
  name: cbcontainers-agent
spec:
  account: {ORG_KEY}
  clusterName: {CLUSTER_GROUP}:{CLUSTER_NAME}
  version: {AGENT_VERSION}
  gateways:
    apiGateway:
      host: {API_HOST}
    coreEventsGateway:
      host: {CORE_EVENTS_HOST}
    hardeningEventsGateway:
      host: {HARDENING_EVENTS_HOST}
    runtimeEventsGateway:
      host: {RUNTIME_EVENTS_HOST}
```
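
Once the placeholders are filled in, the CR is applied like any other Kubernetes manifest (the file name below is only an example):

```sh
# Apply the CBContainersAgent custom resource defined above.
kubectl apply -f cbcontainers-agent-cr.yaml
```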

* Note: without the API token secret applied, the operator will return the error:
`couldn't find access token secret k8s object`
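
If you hit that error, you can confirm whether the access token secret from step 1 exists with a standard lookup:

```sh
# The secret must exist in the cbcontainers-dataplane namespace under this exact name.
kubectl get secret cbcontainers-access-token --namespace cbcontainers-dataplane
```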
50 changes: 50 additions & 0 deletions docs/ImageSources.md
## Changing the source of the images

By default, all images for the operator and agent deployment are pulled from Docker Hub.

We understand that some companies might not want to pull images from Docker Hub and would prefer to mirror them into their internal repositories.

For that reason, you can specify the image sources yourself.
To do that, modify the `CBContainersAgent` resource you're applying to your cluster.

Modify the following properties to specify the image for each service:

- monitor - `spec.components.basic.monitor.image`
- enforcer - `spec.components.basic.enforcer.image`
- state-reporter - `spec.components.basic.stateReporter.image`
- runtime-resolver - `spec.components.runtimeProtection.resolver.image`
- runtime-sensor - `spec.components.runtimeProtection.sensor.image`
- image-scanning-reporter - `spec.components.clusterScanning.imageScanningReporter.image`
- cluster-scanner - `spec.components.clusterScanning.clusterScanner.image`

The `image` object consists of 4 properties:

- `repository` - the repository of the image, e.g. `docker.io/my-org/monitor`
- `tag` - the version tag of the image, e.g. `1.0.0`, `latest`, etc.
- `pullPolicy` - the pull policy for that image, e.g. `IfNotPresent`, `Always`, or `Never`.
See [docs](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy).
- `pullSecrets` - the image pull secrets that are going to be used to pull the container images.
The secrets must already exist in the cluster.
See [docs](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).

A sample configuration can look like this:

```yaml
spec:
  components:
    basic:
      monitor:
        image:
          repository: docker.io/my-org/monitor
          tag: 1.0.0
          pullPolicy: Always
          pullSecrets:
            - my-pull-secret
```
This means that the operator will try to run the monitor service from the `docker.io/my-org/monitor:1.0.0` container image and the kubelet will be instructed to **always** pull the image, using the `my-pull-secret` secret.

### Using a shared secret for all images

If you want to use just one pull secret to pull all of the custom images, you don't need to add it to every single image configuration.
Instead, you can specify the secret(s) once under `spec.settings.imagePullSecrets`.

The secrets you put on that list will be added to the `imagePullSecrets` list of ALL agent workloads.
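
For example, a shared configuration might look roughly like this (a sketch; the secret name is hypothetical, the secret must already exist in the cluster, and the list-item format follows the same plain-name style as the per-image `pullSecrets` example above):

```yaml
spec:
  settings:
    # Added to the imagePullSecrets list of all agent workloads.
    imagePullSecrets:
      - my-shared-pull-secret
```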
67 changes: 67 additions & 0 deletions docs/Main.md
# VMware Carbon Black Cloud Container Operator
## Overview

The Carbon Black Cloud Container Operator runs within a Kubernetes cluster. The Container Operator is a set of controllers which deploy and manage the VMware Carbon Black Cloud Container components.

Capabilities
* Deploy and manage the Container Essentials product bundle (including configuration and image scanning for Kubernetes security)
* Automatically fetch and deploy the Carbon Black Cloud Container private image registry secret
* Automatically register the Carbon Black Cloud Container cluster
* Manage the Container Essentials validating webhook - dynamically manage the admission control webhook to avoid possible downtime
* Monitor and report agent availability to the Carbon Black console

The Carbon Black Cloud Container Operator utilizes the operator-framework to create a Go operator, which is responsible for managing and monitoring the Cloud Container components deployment.

## Compatibility Matrix

| Operator version | Kubernetes Sensor Component Version | Minimum Kubernetes Version |
|------------------|-------------------------------------|----------------------------|
| v6.0.x | 2.10.0, 2.11.0, 2.12.0, 3.0.0 | 1.18 |
| v5.6.x | 2.10.0, 2.11.0, 2.12.0 | 1.16 |
| v5.5.x | 2.10.0, 2.11.0 | 1.16 |
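
To confirm which Kubernetes version your cluster runs before choosing an operator version, a standard version query is sufficient (not specific to this operator):

```sh
# Compare the reported server version against the
# "Minimum Kubernetes Version" column above.
kubectl version
```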

## Install

First, you need to install the CBC operator on the cluster:

[Operator Deployment](OperatorDeployment.md)

Then you need to deploy the CBC Agent on top of the operator:

[Agent Deployment](AgentDeployment.md)



For OpenShift clusters, follow the OpenShift Deployment instructions:

[OpenShift Deployment and Uninstall](OpenshiftDeployment.md)


## Full Uninstall

### Uninstalling the Carbon Black Cloud Container Operator

```sh
export OPERATOR_VERSION=v6.0.2
export OPERATOR_SCRIPT_URL=https://setup.containers.carbonblack.io/$OPERATOR_VERSION/operator-apply.sh
curl -s $OPERATOR_SCRIPT_URL | bash -s -- -u
```

* Note: the above command will delete the Carbon Black Container custom resource definitions and instances.

## Documentation
1. [Setting up Prometheus access](Prometheus.md)
2. [CRD Configuration](crds.md)
3. [Resource spec Configuration](Resources.md)
4. [Using HTTP proxy](Proxy.md)
5. [Configuring image sources](ImageSources.md)
6. [RBAC Configuration](rbac.md)

## Developers Guide
A developer's guide for building and configuring the operator:

[Developers Guide](developers.md)

## Helm Charts Documentation
[VMware Carbon Black Cloud Container Helm Charts Documentation](../charts/README.md)

122 changes: 122 additions & 0 deletions docs/OpenshiftDeployment.md
## Deploying on OpenShift

The operator and its agent require elevated permissions to operate properly. However, this violates the default SecurityContextConstraints on most OpenShift clusters, so the components fail to start.
This can be fixed by applying the following custom security constraint configurations on the cluster (cluster-admin privileges required).

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: scc-anyuid
runAsUser:
  type: MustRunAsNonRoot
allowHostPID: false
allowHostPorts: false
allowHostNetwork: false
allowHostDirVolumePlugin: false
allowHostIPC: false
allowPrivilegedContainer: false
readOnlyRootFilesystem: true
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-operator
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-enforcer
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-state-reporter
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-monitor
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-runtime-resolver
---
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: scc-image-scanning # This probably needs to be fixed in the actual deployment
runAsUser:
  type: RunAsAny
allowHostPID: false
allowHostPorts: false
allowHostNetwork: false
allowHostDirVolumePlugin: false
allowHostIPC: false
allowPrivilegedContainer: false
readOnlyRootFilesystem: false
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
allowedCapabilities:
  - 'NET_BIND_SERVICE'
users:
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-image-scanning
---
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: scc-node-agent
runAsUser:
  type: RunAsAny
allowHostPID: true
allowHostPorts: false
allowHostNetwork: true
allowHostDirVolumePlugin: true
allowHostIPC: false
allowPrivilegedContainer: true
readOnlyRootFilesystem: false
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - persistentVolumeClaim
  - projected
  - secret
users:
  - system:serviceaccount:cbcontainers-dataplane:cbcontainers-agent-node
```
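
One way to apply these constraints is to save them to a file and apply it with the OpenShift CLI (the file name below is only an example; cluster-admin privileges are required):

```sh
# kubectl apply works just as well here.
oc apply -f cbcontainers-scc.yaml
```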
### Uninstalling on OpenShift
Apply the following SecurityContextConstraints before running the operator uninstall command:
```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: scc-edr-cleaner
runAsUser:
  type: RunAsAny
allowHostPID: true
allowHostPorts: false
allowHostNetwork: true
allowHostDirVolumePlugin: true
allowHostIPC: false
allowPrivilegedContainer: true
readOnlyRootFilesystem: false
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - persistentVolumeClaim
  - projected
  - secret
users:
  - system:serviceaccount:cbcontainers-edr-sensor-cleaners:cbcontainers-edr-sensor-cleaner
```
33 changes: 33 additions & 0 deletions docs/OperatorDeployment.md
## Operator Deployment

### Prerequisites
Kubernetes 1.18+ is supported.

### From Script
```sh
export OPERATOR_VERSION=v6.0.2
export OPERATOR_SCRIPT_URL=https://setup.containers.carbonblack.io/$OPERATOR_VERSION/operator-apply.sh
curl -s $OPERATOR_SCRIPT_URL | bash
```

{OPERATOR_VERSION} is of the format "v{VERSION}"

Versions list: [Releases](https://github.com/octarinesec/octarine-operator/releases)

### From Source Code
Clone the Git project and deploy the operator from the source code.

By default, the operator utilizes CustomResourceDefinitions v1, which requires Kubernetes 1.16+.
Deploying an operator with CustomResourceDefinitions v1beta1 (deprecated in Kubernetes 1.16, removed in Kubernetes 1.22) can be done - see the relevant section below.

#### Create the operator image
```sh
make docker-build docker-push IMG={IMAGE_NAME}
```

#### Deploy the operator resources
```sh
make deploy IMG={IMAGE_NAME}
```
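
After the deploy step completes, you can verify that the operator pod starts in its namespace (a standard check; the namespace matches the `cbcontainers-dataplane` namespace used throughout these docs):

```sh
# The operator deployment should show a running pod here.
kubectl get pods --namespace cbcontainers-dataplane
```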

* View the [Developers Guide](developers.md) to see how to deploy the operator without using an image.
52 changes: 52 additions & 0 deletions docs/Prometheus.md
## Reading Metrics With Prometheus

The operator metrics are protected by kube-auth-proxy.

You will need to grant permissions to your Prometheus server to allow it to scrape the protected metrics.

You can create a ClusterRole and bind it with a ClusterRoleBinding to the service account that your Prometheus server uses.

If you do not have such a ClusterRole and ClusterRoleBinding configured, you can use the following:

ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cbcontainers-metrics-reader
rules:
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
```
ClusterRoleBinding creation:
```sh
kubectl create clusterrolebinding metrics --clusterrole=cbcontainers-metrics-reader --serviceaccount=<prometheus-namespace>:<prometheus-service-account-name>
```
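
If you prefer a declarative manifest over the imperative command, an equivalent ClusterRoleBinding looks roughly like this (a sketch; replace the placeholders with your Prometheus namespace and service account):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # Grants the metrics-reader role defined above.
  name: cbcontainers-metrics-reader
subjects:
  - kind: ServiceAccount
    name: <prometheus-service-account-name>
    namespace: <prometheus-namespace>
```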

### When using Prometheus Operator

Use the following ServiceMonitor to start scraping metrics from the CBContainers operator:
* Make sure that your Prometheus custom resource's ServiceMonitor selectors match it (see the sketch after the ServiceMonitor below).
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    control-plane: operator
  name: cbcontainers-operator-metrics-monitor
  namespace: cbcontainers-dataplane
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      path: /metrics
      port: https
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
  selector:
    matchLabels:
      control-plane: operator
```
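
For reference, a Prometheus custom resource that selects this ServiceMonitor by label might look roughly like this (a sketch only; the Prometheus name, namespace, and service account are assumptions that depend on your Prometheus Operator setup):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  serviceAccountName: prometheus
  # Allow picking up ServiceMonitors from all namespaces.
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      # Matches the labels on the ServiceMonitor above.
      control-plane: operator
```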