Project delivery #2

Open · wants to merge 31 commits into base: apa/delivery

Commits (31)
* 38f5cf6 Added gitignore (dpstart, Nov 27, 2019)
* 3cce47d Modified deployment (dpstart, Nov 27, 2019)
* 011753d Added servicemonitor + fixed configs (dpstart, Dec 2, 2019)
* 678ab71 Add hpa and useful cmds (dpstart, Dec 4, 2019)
* b020e13 Create README.md (dpstart, Dec 13, 2019)
* 514f590 Update README.md (dpstart, Dec 13, 2019)
* 1ff1094 Update custom_hpa.yaml (dpstart, Dec 13, 2019)
* e4f5060 Delete cmd.txt (dpstart, Dec 13, 2019)
* 24a84fd Update README.md (dpstart, Dec 13, 2019)
* 6f27b0d Delete .DS_Store (dpstart, Jan 16, 2020)
* fd5da23 Update README.md (dpstart, Jan 16, 2020)
* 9d1cbb9 Update servicemonitor.yaml (dpstart, Jan 16, 2020)
* 5629537 Update README.md (dpstart, Jan 16, 2020)
* 06fd463 Improve README (dpstart, Jan 22, 2020)
* f8d9cae Comment YAML files (dpstart, Jan 22, 2020)
* 73f5ce9 Minor improvements to docs (dpstart, Jan 22, 2020)
* d423cc7 Clean up repo + update gitignore (dpstart, Feb 10, 2020)
* f70602c Merge branch 'master' of https://github.com/netgroup-polito/VPNaaS (dpstart, Feb 10, 2020)
* 6f92c27 Add initContainer to set ip forwarding in pod (dpstart, Feb 10, 2020)
* ab532bb Change default ports, avoid default tls/https port (dpstart, Feb 10, 2020)
* f44a8d1 Improve documentation (dpstart, Feb 11, 2020)
* ce96207 Improve documentation + add architectural scheme (dpstart, Feb 11, 2020)
* 0b48b4f Added more detailed info about certificate (dpstart, Feb 11, 2020)
* f7e50ee Add loadbalancer info (dpstart, Feb 11, 2020)
* 36ffb80 Improve documentation + add architectural scheme (dpstart, Feb 11, 2020)
* 8a08616 Added more subsection + general view on installation (dpstart, Feb 11, 2020)
* 533c221 Add compatibility with helm v3 (dpstart, Feb 14, 2020)
* 1700b6a Update Notes to add -c option when running commands in container (dpstart, Feb 14, 2020)
* cff6d93 Add note about setting ns in service monitor (dpstart, Feb 14, 2020)
* e10ba30 Add Adapter paragraph (dpstart, Feb 14, 2020)
* 6b706bd VPNaaS final presentation (frisso, Mar 27, 2020)
Binary file removed .DS_Store
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
.DS_Store
*.ovpn
cmd.txt
servicemonitor_dcota.yaml
openvpn-chart/.DS_Store/
168 changes: 168 additions & 0 deletions README.md
@@ -0,0 +1,168 @@
# HPA with Custom Metrics: the VPN-as-a-service use case.

Provision an OpenVPN installation on k8s that can autoscale against custom metrics.

## Architecture

This project contains a full OpenVPN deployment for k8s, which is coupled with an OpenVPN metrics exporter and exposed through a LoadBalancer service.

The exporter harvests metrics from the OpenVPN instance and exposes them to Prometheus (note that an instance of the [Prometheus Operator](https://github.com/coreos/prometheus-operator) needs to be running on the cluster).

These metrics are then fed to the [Prometheus Adapter](https://github.com/helm/charts/tree/master/stable/prometheus-adapter), which implements the k8s [Custom Metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis). The Adapter is responsible for exposing the metrics through the k8s API, so that they can be queried by an HPA instance for autoscaling.

A high-level view of the components and their interactions is shown in the picture below.

![](img/scheme.png)


## Prerequisites

Everything was tested with:

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.12.0+.
* Kubernetes v1.6+ cluster.
* [helm](https://helm.sh/docs/intro/install/) v2.16+ and v3.
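A quick sanity check of what you are running (output and flag support vary slightly across versions):

```bash
kubectl version --short
helm version
```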

## Installation
> **Member:** Say here how many components (and which ones they are) you have to install, so that the reader is aware of what they need to do (at a high level). Then, structure the doc in such a way that there's a sub-sub-section for each component.
> Note: the "installation" section should focus on installation. However, we need another section such as "architecture", which explains the logical blocks you're using, what they are used for, and how they interact with each other.

> **Collaborator (author):** Added the subsections + a general view on the components to install.


We will first focus on provisioning the OpenVPN installation on top of Kubernetes. Once this is done, we will add the components that allow us to expose the metrics through Prometheus.
As we've seen, these metrics are then processed by the adapter and exposed through the k8s metrics API.

After that, we can deploy HPA instances that autoscale against these new metrics.

### OpenVPN

The Helm OpenVPN chart is derived from the [official one](https://github.com/helm/charts/tree/master/stable/openvpn). This fork adds shared volumes used to share OpenVPN metrics between containers, and a sidecar container that exports these metrics for Prometheus.

To install from the chart directory, run
```helm install --name <release_name> --tiller-namespace <tiller_namespace> .```

As an example, to install the chart in the `johndoe` namespace, you might run
```helm install --name openvpn-v01 --tiller-namespace johndoe .```
(the release name ends up in Kubernetes resource names, so prefer dashes over underscores).
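The chart is also compatible with Helm v3 (see commit 533c221). Since v3 drops Tiller and takes the release name as a positional argument, the equivalent install would look like this sketch:

```bash
# Helm v3: no Tiller, release name is positional
helm install openvpn-v01 . --namespace johndoe
```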


The metrics exporter, taken from [this project](https://github.com/kumina/openvpn_exporter), is deployed as a sidecar container in the OpenVPN pod and exposes metrics on port 9176. This is shown in the following snippet, where the exporter image is used and the command that exports the metrics is run.

```YAML
...

containers:
- name: exporter
  image: kumina/openvpn-exporter
  command: ["/bin/openvpn_exporter"]
  args: ["-openvpn.status_paths", "/etc/openvpn-exporter/openvpn/openvpn-status.log"]
  volumeMounts:
  - name: openvpn-status
    mountPath: /etc/openvpn-exporter/openvpn

...
```

> **Member:** What about if this path does not exist on the target machine?

> **Collaborator (author):** The first part is the volume mount I created, while `openvpn/openvpn-status.log` refers to a file that is present in the OpenVPN installation.

Docs for the exporter are available [here](https://github.com/kumina/openvpn_exporter).

This chart also contains some additional modifications:
* The `status-version 2` option is added to the OpenVPN configuration file for compatibility with the exporter.
* An option to enable IP forwarding in the container is added. If the option is set, the deployment spawns an initContainer in privileged mode that runs the proper commands on initialization.
* The default ports are changed to avoid port 443, which is often already in use. All of these options can easily be changed from the [values.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/openvpn-chart/values.yaml) configuration file (see the excerpt below).
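The relevant excerpt from [values.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/openvpn-chart/values.yaml) (the same values appear in the diff at the bottom of this page):

```YAML
service:
  type: LoadBalancer
  externalPort: 9914
  internalPort: 9914

# Add privileged init container to enable IPv4 forwarding
ipForwardInitContainer: true
```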



After the chart is deployed and the pod is ready, an OpenVPN certificate for a new user can be generated.
The resulting `.ovpn` file allows a user to connect to the VPN with any available OpenVPN client: it bundles the client configuration options, the IP of the VPN gateway, and the client's private key and X.509 certificates.

Certificates can be generated using the following commands:

```bash
POD_NAME=$(kubectl get pods --namespace <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_NAME=$(kubectl get svc --namespace <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace <namespace> "$SERVICE_NAME" -o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}')
KEY_NAME=<key_name>
kubectl --namespace <namespace> exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace <namespace> exec -it "$POD_NAME" -c openvpn cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"
```

Here, `KEY_NAME` should be a unique identifier of the VPN user, such as an email address or university ID number. This value appears in the *Subject* field of the client certificate and can be used to revoke it.
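Once the `.ovpn` file has been copied to the client machine, connecting is a standard OpenVPN operation; for example, with the reference command-line client:

```bash
sudo openvpn --config "$KEY_NAME.ovpn"
```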


Client certificates can be revoked as follows:

```bash
KEY_NAME=<key_name>
POD_NAME=$(kubectl get pods -n <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{.items[0].metadata.name}')
kubectl -n <namespace> exec -it "$POD_NAME" /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME
```

To take a look at the metrics, you can use port-forwarding.

Run `kubectl port-forward <pod_name> 9176:9176` and then connect to [http://localhost:9176/metrics](http://localhost:9176/metrics).

You should now be able to see some Prometheus metrics of your OpenVPN instance:

```
# HELP openvpn_openvpn_server_connected_clients Number Of Connected Clients
# TYPE openvpn_openvpn_server_connected_clients gauge
openvpn_openvpn_server_connected_clients{status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log"} 1
# HELP openvpn_server_client_received_bytes_total Amount of data received over a connection on the VPN server, in bytes.
# TYPE openvpn_server_client_received_bytes_total counter
openvpn_server_client_received_bytes_total{common_name="CC2",connection_time="1576248156",real_address="10.244.0.0:25878",status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log",username="UNDEF",virtual_address="10.240.0.6"} 17762
# HELP openvpn_server_client_sent_bytes_total Amount of data sent over a connection on the VPN server, in bytes.
# TYPE openvpn_server_client_sent_bytes_total counter
openvpn_server_client_sent_bytes_total{common_name="CC2",connection_time="1576248156",real_address="10.244.0.0:25878",status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log",username="UNDEF",virtual_address="10.240.0.6"} 19047
```

At this point, you should have a working OpenVPN installation running on Kubernetes. The following steps expose its metrics through the [Custom Metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis), which allows us to autoscale against OpenVPN metrics.

### Prometheus Adapter

The Prometheus Adapter has to be installed in the cluster in order to implement the Custom Metrics API using Prometheus data. The adapter, along with installation instructions and walkthroughs, can be found [here](https://github.com/DirectXMan12/k8s-prometheus-adapter).
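The adapter discovers and exposes metrics based on a set of configurable rules. As an illustration only (this rule is not taken from this repo's setup, and the adapter's default configuration may already pick the metric up; see the adapter docs for the exact schema), a rule matching the exporter's gauge could look roughly like:

```YAML
rules:
- seriesQuery: 'openvpn_openvpn_server_connected_clients'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```

Once the adapter is running, you can check that the metric is visible through the Custom Metrics API (namespace and metric name as used in this walkthrough):

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/openvpn_openvpn_server_connected_clients"
```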

### Prometheus Service Monitor

We first need to expose the exporter through a service, so that the Prometheus operator can access it, by running `kubectl apply -f exporter_service.yaml`. This is a very simple service that sits in front of our OpenVPN pods and defines the port through which the metrics are exposed.

Running `kubectl apply -f servicemonitor.yaml` will now deploy the service monitor that Prometheus uses to harvest our metrics. Remember to set the appropriate namespace in [servicemonitor.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/servicemonitor.yaml).
A service monitor is a Prometheus operator custom resource that declaratively specifies how groups of services should be monitored.
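To double-check that Prometheus picked up the new target, you can port-forward to the Prometheus instance and look at its Targets page (`prometheus-operated` is the service the Prometheus operator creates by default; adjust the name and namespace to your setup):

```bash
kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring
# then open http://localhost:9090/targets
```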

### HPA

Once everything is up and running, we are ready to autoscale against our custom metrics.
The following YAML snippet shows an HPA that scales against the number of users currently connected to the VPN:

```YAML
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: openvpn
spec:
  scaleTargetRef:
    # point the HPA at the OpenVPN deployment created above
    apiVersion: apps/v1
    kind: Deployment
    name: <your_openvpn_deployment>
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # use a "Pods" metric, which takes the average of the
  # given metric across all pods controlled by the autoscaling target
  - type: Pods
    pods:
      metricName: openvpn_openvpn_server_connected_clients
      targetAverageValue: 3
```

> **Member:** Why is the name "openvpn" repeated twice here?

> **Collaborator (author):** It looks like this is in the golang implementation of the exporter.

Replace `<your_openvpn_deployment>` with the name of your OpenVPN deployment.
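After applying the manifest (shipped in this repo as `custom_hpa.yaml`), you can verify that the HPA actually sees the metric:

```bash
kubectl apply -f custom_hpa.yaml
kubectl get hpa openvpn
# the TARGETS column should show e.g. 1/3 rather than <unknown>/3
kubectl describe hpa openvpn
```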

## Troubleshooting


### Internet traffic through VPN

You can avoid routing all traffic through the VPN by setting `redirectGateway: false`. The `redirect-gateway` option changes the client routing table so that all traffic is directed through the server.
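In values.yaml terms (key path as in the upstream chart this one is derived from):

```YAML
openvpn:
  redirectGateway: false
```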

For a detailed discussion of OpenVPN routing, you can look at [this guide](https://community.openvpn.net/openvpn/wiki/BridgingAndRouting).

## TODO

* Manage certificate persistence across replicas.

> **Member:** I am definitely missing how we can create users that will use this VPN service. It is definitely a very important point given that, without that information, your service looks pretty much useless.

> **Collaborator (author)** (@dpstart, Jan 22, 2020): It was stated in the part that says:
>
> > After the chart is deployed and the pod is ready, an OpenVPN certificate can be generated using the following commands:
>
> which also contains the commands to run for generating the certificates for a new user.

> **Member:** I cannot find any explanation about how traffic is routed within the service. Do we have to enable IP forwarding on the Pod? In either case (either YES or NO), why?

> **Collaborator (author):** Yes, IP forwarding needs to be set, otherwise traffic stops at the gateway. I added the functionality of setting it automatically when deploying (I point it out in the README section about the list of modifications I made). I have also added a pointer to an explanation of OpenVPN routing; if you think it can be useful, I can elaborate a bit on OpenVPN routing in general.
Binary file added VPNaaS-final-presentation.pptx
22 changes: 22 additions & 0 deletions custom_hpa.yaml
@@ -0,0 +1,22 @@
# An HPA instance that works on the openvpn deployment and scales against custom OpenVPN metrics.
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: openvpn
spec:
  scaleTargetRef:
    # point the HPA at the sample application
    # you created above
    apiVersion: apps/v1
    kind: Deployment
    name: <deployment_name>
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # use a "Pods" metric, which takes the average of the
  # given metric across all pods controlled by the autoscaling target
  - type: Pods
    pods:
      metricName: openvpn_openvpn_server_connected_clients
      targetAverageValue: 3
16 changes: 16 additions & 0 deletions exporter_service.yaml
@@ -0,0 +1,16 @@
# A simple service that targets the openvpn pod, which is selected by
# the service monitor for harvesting the metrics.
apiVersion: v1
kind: Service
metadata:
  name: exporter-service
  labels:
    app: openvpn
spec:
  ports:
  - port: 9176
    targetPort: 9176
    protocol: TCP
    name: metrics
  selector:
    app: openvpn
Binary file added img/scheme.png
Binary file removed openvpn-chart/.DS_Store
6 changes: 3 additions & 3 deletions openvpn-chart/templates/NOTES.txt
@@ -18,12 +18,12 @@ Once the external IP is available and all the server certificates are generated
SERVICE_NAME=$(kubectl get svc --namespace "{{ .Release.Namespace }}" -l "app={{ template "openvpn.name" . }},release={{ .Release.Name }}" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace "{{ .Release.Namespace }}" "$SERVICE_NAME" {{"-o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}'"}})
KEY_NAME=kubeVPN
-  kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
-  kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"
+  kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
+  kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"

Revoking certificates works just as easy:
KEY_NAME=<name>
POD_NAME=$(kubectl get pods -n "{{ .Release.Namespace }}" -l "app=openvpn,release={{ .Release.Name }}" -o jsonpath='{.items[0].metadata.name}')
-  kubectl -n "{{ .Release.Namespace }}" exec -it "$POD_NAME" /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME
+  kubectl -n "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME

Copy the resulting $KEY_NAME.ovpn file to your open vpn client (ex: in tunnelblick, just double click on the file). Do this for each user that needs to connect to the VPN. Change KEY_NAME for each additional user.
3 changes: 3 additions & 0 deletions openvpn-chart/templates/config-openvpn.yaml
@@ -1,3 +1,5 @@
+# ConfigMap for the OpenVPN deployment.
+# It contains the certificate scripts and the openvpn configuration scripts and files.
apiVersion: v1
kind: ConfigMap
metadata:
@@ -168,6 +170,7 @@ data:
  openvpn.conf: |-
    server {{ .Values.openvpn.OVPN_NETWORK }} {{ .Values.openvpn.OVPN_SUBNET }}
    verb 3
+    status-version 2
{{ if .Values.openvpn.useCrl }}
    crl-verify /etc/openvpn/certs/crl.pem
{{ end }}
35 changes: 26 additions & 9 deletions openvpn-chart/templates/openvpn-deployment.yaml
@@ -1,3 +1,4 @@
+# Main OpenVPN Deployment with a sidecar container for exporting the metrics.
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -28,19 +29,31 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
+{{- if .Values.ipForwardInitContainer }}
+      initContainers:
+      - args:
+        - -c
+        - sysctl -w net.ipv4.ip_forward=1
+        command:
+        - /bin/sh
+        image: busybox:1.29
+        imagePullPolicy: IfNotPresent
+        name: sysctl
+        resources:
+          requests:
+            cpu: 5m
+            memory: 1Mi
+        securityContext:
+          privileged: true
+{{- end }}
      containers:
      - name: exporter
        image: kumina/openvpn-exporter
-        readinessProbe:
-          exec:
-            command:
-            - cat
-            - /tmp/healthy
-          initialDelaySeconds: 5
-          periodSeconds: 5
+        command: ["/bin/openvpn_exporter"]
+        args: ["-openvpn.status_paths", "/etc/openvpn-exporter/openvpn/openvpn-status.log"]
        volumeMounts:
-        - name: openvpn
-          mountPath: /etc/openvpn-exporter/server.status
+        - name: openvpn-status
+          mountPath: /etc/openvpn-exporter/openvpn
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -79,13 +92,17 @@ spec:
        - mountPath: /etc/openvpn/setup
          name: openvpn
          readOnly: false
+        - mountPath: /tmp
+          name: openvpn-status
+          readOnly: false
        - mountPath: /etc/openvpn/certs
{{- if .Values.persistence.subPath }}
          subPath: {{ .Values.persistence.subPath }}
{{- end }}
          name: certs
          readOnly: {{ if .Values.openvpn.keystoreSecret }}true{{ else }}false{{ end }}
      volumes:
+      - name: openvpn-status
      - name: openvpn
        configMap:
          name: {{ template "openvpn.fullname" . }}
1 change: 1 addition & 0 deletions openvpn-chart/templates/openvpn-service.yaml
@@ -1,3 +1,4 @@
+# Main service that sits in front of the OpenVPN instance.
apiVersion: v1
kind: Service
metadata:
8 changes: 6 additions & 2 deletions openvpn-chart/values.yaml
@@ -15,8 +15,8 @@ image:
  pullPolicy: IfNotPresent
service:
  type: LoadBalancer
-  externalPort: 443
-  internalPort: 443
+  externalPort: 9914
+  internalPort: 9914
  # hostPort: 443
  externalIPs: []
  nodePort: 32085
@@ -32,6 +32,10 @@ service:
# podAnnotations:
#   backup.ark.heptio.com/backup-volumes: certs
podAnnotations: {}
+
+# Add privileged init container to enable IPv4 forwarding
+ipForwardInitContainer: true
+
resources:
  limits:
    cpu: 300m
13 changes: 13 additions & 0 deletions servicemonitor.yaml
@@ -0,0 +1,13 @@
# A Prometheus operator service monitor, which describes the set of targets to be monitored by Prometheus.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openvpn
  namespace: default
spec:
  endpoints:
  - interval: 15s
    port: metrics
  selector:
    matchLabels:
      app: openvpn