Merge pull request #1210 from EnterpriseDB/release/2021-04-07
Former-commit-id: 5363a19
robert-stringer authored Apr 7, 2021
2 parents ac67806 + 6058063 commit 452039c
Showing 4,230 changed files with 88,346 additions and 33,828 deletions.
The diff you're trying to view is too large. We only load the first 3000 changed files.
30 changes: 30 additions & 0 deletions .husky/_/husky.sh
@@ -0,0 +1,30 @@
#!/bin/sh
if [ -z "$husky_skip_init" ]; then
  debug () {
    [ "$HUSKY_DEBUG" = "1" ] && echo "husky (debug) - $1"
  }

  readonly hook_name="$(basename "$0")"
  debug "starting $hook_name..."

  if [ "$HUSKY" = "0" ]; then
    debug "HUSKY env variable is set to 0, skipping hook"
    exit 0
  fi

  if [ -f ~/.huskyrc ]; then
    debug "sourcing ~/.huskyrc"
    . ~/.huskyrc
  fi

  export readonly husky_skip_init=1
  sh -e "$0" "$@"
  exitCode="$?"

  if [ $exitCode != 0 ]; then
    echo "husky - $hook_name hook exited with code $exitCode (error)"
    exit $exitCode
  fi

  exit 0
fi
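For context, hook scripts under `.husky/` source this helper before running their command. A minimal sketch of creating such a hook (the `pre-commit` name and the `npm test` command are placeholder assumptions, not part of this commit):

```shell
# Sketch: create a hypothetical pre-commit hook that sources husky.sh.
# Hook name and command are illustrative placeholders.
mkdir -p .husky/_
cat > .husky/pre-commit <<'EOF'
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
npm test
EOF
# Hooks must be executable for git to run them
chmod +x .husky/pre-commit
```

When git invokes the hook, `husky.sh` re-executes it with `husky_skip_init` set, so the guarded block above runs only once per invocation.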
13 changes: 13 additions & 0 deletions advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
@@ -36,6 +36,7 @@ Below you will find a description of the defined resources:
* [ClusterSpec](#clusterspec)
* [ClusterStatus](#clusterstatus)
* [DataBackupConfiguration](#databackupconfiguration)
* [MonitoringConfiguration](#monitoringconfiguration)
* [NodeMaintenanceWindow](#nodemaintenancewindow)
* [PostgresConfiguration](#postgresconfiguration)
* [RecoveryTarget](#recoverytarget)
@@ -212,6 +213,7 @@ ClusterSpec defines the desired state of Cluster
| backup | The configuration to be used for backups | *[BackupConfiguration](#backupconfiguration) | false |
| nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | *[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
| licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
| monitoring | The configuration of the monitoring infrastructure of this cluster | *[MonitoringConfiguration](#monitoringconfiguration) | false |


## ClusterStatus
@@ -229,6 +231,7 @@ ClusterStatus defines the observed state of Cluster
| pvcCount | How many PVCs have been created by this cluster | int32 | false |
| jobCount | How many Jobs have been created by this cluster | int32 | false |
| danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | []string | false |
| initializingPVC | List of all the PVCs that are being initialized by this cluster | []string | false |
| licenseStatus | Status of the license | licensekey.Status | false |
| writeService | Current write pod | string | false |
| readService | Current list of read pods | string | false |
@@ -248,6 +251,16 @@ DataBackupConfiguration is the configuration of the backup of the data directory
| jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | *int32 | false |


## MonitoringConfiguration

MonitoringConfiguration is the type containing all the monitoring configuration for a certain cluster

| Field | Description | Scheme | Required |
| -------------------- | ------------------------------ | -------------------- | -------- |
| customQueriesConfigMap | The list of config maps containing the custom queries | []corev1.ConfigMapKeySelector | false |
| customQueriesSecret | The list of secrets containing the custom queries | []corev1.SecretKeySelector | false |
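
As a sketch, these two fields might appear together in a `Cluster` spec as follows (the `ConfigMap`/`Secret` names and the `queries` key are illustrative assumptions, not values from this commit):

```yaml
# Illustrative only: object names and keys are placeholders
monitoring:
  customQueriesConfigMap:
    - name: queries-cm
      key: queries
  customQueriesSecret:
    - name: queries-secret
      key: queries
```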


## NodeMaintenanceWindow

NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
@@ -39,16 +39,15 @@ purposes.
Applications must be aware of the limitations that [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
presents and familiar with the way PostgreSQL operates when dealing with these workloads.

Applications can access hot standby replicas through the `-ro` service made available
by the operator. This service enables the application to offload read-only queries from the
primary node.

The following diagram shows the architecture:

![Applications reading from hot standby replicas in round robin](./images/architecture-read-only.png)

Applications can also access any PostgreSQL instance at any time through the `-r` service at connection time.
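As a sketch of the naming convention described above, for a cluster named `cluster-example` (a hypothetical name) the two services resolve to these hostnames, which an application in the same namespace could use in its connection string:

```shell
# Hypothetical cluster name; the -r / -ro suffixes follow the convention above
CLUSTER=cluster-example
R_SERVICE="${CLUSTER}-r"    # any PostgreSQL instance
RO_SERVICE="${CLUSTER}-ro"  # hot standby replicas only
echo "any instance: ${R_SERVICE}  read-only: ${RO_SERVICE}"
# e.g. psql "host=${RO_SERVICE} user=app dbname=app"  (run from a pod in the cluster)
```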

## Application deployments

2 changes: 2 additions & 0 deletions advocacy_docs/kubernetes/cloud_native_postgresql/credits.mdx
@@ -15,7 +15,9 @@ developed, and tested by the EnterpriseDB Cloud Native team:
- Niccolò Fei
- Jonathan Gonzalez
- Danish Khan
- Anand Nednur
- Marco Nenciarini
- Gabriele Quaresima
- Jitendra Wadle
- Adam Wright

2 changes: 2 additions & 0 deletions advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx
@@ -17,13 +17,15 @@ navigation:
- quickstart
- cloud_setup
- bootstrap
- resource_management
- security
- failure_modes
- rolling_update
- backup_recovery
- postgresql_conf
- storage
- samples
- monitoring
- expose_pg_services
- ssl_connections
- kubernetes_upgrade
16 changes: 14 additions & 2 deletions advocacy_docs/kubernetes/cloud_native_postgresql/installation.mdx
@@ -6,15 +6,17 @@ product: 'Cloud Native Operator'

## Installation on Kubernetes

### Directly using the operator manifest

The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.

You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml)
as follows:

```sh
kubectl apply -f \
  https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml
```

Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
@@ -25,6 +27,16 @@ You can verify that with:

```sh
kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
```

### Using the Operator Lifecycle Manager (OLM)

OperatorHub is a community-sourced index of operators available via the
[Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager),
which is a package management system for operators.

You can install Cloud Native PostgreSQL using the metadata available in the
[Cloud Native PostgreSQL page](https://operatorhub.io/operator/cloud-native-postgresql)
from the [OperatorHub.io website](https://operatorhub.io), following the installation steps listed on that page.

## Installation on OpenShift

### Via the web interface
60 changes: 54 additions & 6 deletions advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx
@@ -4,19 +4,65 @@ originalFilePath: 'src/license_keys.md'
product: 'Cloud Native Operator'
---

A license key is always required for the operator to work.

The only exception is when you run the operator with Community PostgreSQL:
in this case, if the license key is unset, a cluster will be started with the default
trial license - which automatically expires after 30 days.

!!! Important
    After the license expires, the operator will cease any reconciliation attempt
    on the cluster, effectively no longer managing its status.
    The pods and the data will still be available.

## Company level license keys

A license key allows you to create an unlimited number of PostgreSQL
clusters in your installation.

The license key needs to be available in a `ConfigMap` in the same
namespace where the operator is deployed.

In Kubernetes, the operator is deployed by default in
the `postgresql-operator-system` namespace.
When OLM is used instead (i.e. on OpenShift), the operator is installed
by default in the `openshift-operators` namespace.

Given the namespace name and the license key, you can create
the config map with the following command:

```sh
kubectl create configmap -n [NAMESPACE_NAME_HERE] \
  postgresql-operator-controller-manager-config \
  --from-literal=EDB_LICENSE_KEY=[LICENSE_KEY_HERE]
```

The following command can be used to reload the config map:

```sh
kubectl rollout restart deployment -n [NAMESPACE_NAME_HERE] \
  postgresql-operator-controller-manager
```

The validity of the license key can be checked in the cluster status:

```sh
kubectl get cluster cluster-example -o yaml
[...]
status:
[...]
licenseStatus:
licenseExpiration: "2021-11-06T09:36:02Z"
licenseStatus: Trial
valid: true
isImplicit: false
isTrial: true
[...]
```

## Cluster level license keys

Each `Cluster` resource has a `licenseKey` parameter in its definition.
You can find the expiration date, as well as more information about the license,
in the cluster status:

@@ -29,6 +75,8 @@ status:
licenseExpiration: "2021-11-06T09:36:02Z"
licenseStatus: Trial
valid: true
isImplicit: false
isTrial: true
[...]
```

@@ -38,4 +86,4 @@ the expiration date or move the cluster to a production license.
Cloud Native PostgreSQL is distributed under the EnterpriseDB Limited Usage License
Agreement, available at [enterprisedb.com/limited-use-license](https://www.enterprisedb.com/limited-use-license).

Cloud Native PostgreSQL: Copyright (C) 2019-2021 EnterpriseDB.
95 changes: 95 additions & 0 deletions advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx
@@ -0,0 +1,95 @@
---
title: 'Monitoring'
originalFilePath: 'src/monitoring.md'
product: 'Cloud Native Operator'
---

For each PostgreSQL instance, the operator provides an exporter of metrics for
[Prometheus](https://prometheus.io/) via HTTP, on port 8000.
The operator comes with a predefined set of metrics, as well as a highly
configurable and customizable system to define additional queries via one or
more `ConfigMap` objects - and, in future versions, `Secret` objects too.

The exporter can be accessed as follows:

```shell
curl http://<pod ip>:8000/metrics
```
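For instance, the pod IP can be looked up with `kubectl` and the metrics URL built from it (the pod name `cluster-example-1` and the IP below are hypothetical placeholders):

```shell
# Hypothetical pod IP; in practice obtain it with:
#   kubectl get pod cluster-example-1 -o jsonpath='{.status.podIP}'
POD_IP=10.244.0.12
# Port 8000 is the exporter port stated above
METRICS_URL="http://${POD_IP}:8000/metrics"
echo "$METRICS_URL"
# curl "$METRICS_URL"
```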

All monitoring queries are:

- transactionally atomic (one transaction per query)
- executed with the `pg_monitor` role

Please refer to the
["Default roles" section in PostgreSQL documentation](https://www.postgresql.org/docs/current/default-roles.html)
for details on the `pg_monitor` role.

## User defined metrics

Users can define additional metrics through the interface that the operator
provides. This interface is currently in *beta* state and only supports the
definition of custom queries as `ConfigMap` and `Secret` objects
using a YAML file that is inspired by the [queries.yaml file](https://github.com/prometheus-community/postgres_exporter/blob/main/queries.yaml)
of the PostgreSQL Prometheus Exporter.

Queries must be defined in a `ConfigMap` to be referenced in the `monitoring`
section of the `Cluster` definition, as in the following example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
name: cluster-example
spec:
instances: 3

storage:
size: 1Gi

monitoring:
customQueriesConfigMap:
- name: example-monitoring
key: custom-queries
```

Specifically, the `monitoring` section looks for an array with the name
`customQueriesConfigMap`, which, as the name suggests, needs a list of
`ConfigMap` key references to be used as the source of custom queries.

For example:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: default
name: example-monitoring
data:
custom-queries: |
pg_replication:
query: "SELECT CASE WHEN NOT pg_is_in_recovery()
THEN 0
ELSE GREATEST (0,
EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
END AS lag"
primary: true
metrics:
- lag:
usage: "GAUGE"
description: "Replication lag behind primary in seconds"
```

The object must have a name and be in the same namespace as the `Cluster`.
Note that the above query will be executed on the `primary` node, with the
following output:

```text
# HELP custom_pg_replication_lag Replication lag behind primary in seconds
# TYPE custom_pg_replication_lag gauge
custom_pg_replication_lag 0
```
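To have Prometheus scrape this endpoint, a scrape job along these lines could be used (the job name is an assumption and the target is a placeholder pod IP; port 8000 comes from the text above):

```yaml
# Illustrative only: a static scrape target using a placeholder pod IP
scrape_configs:
  - job_name: cloud-native-postgresql   # assumed job name
    static_configs:
      - targets: ['10.244.0.12:8000']   # <pod ip>:8000
```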

This framework enables the definition of custom metrics to monitor the database
or the application inside the PostgreSQL cluster.