
Commit

Merge pull request #225 from Boomatang/maintenance/docs
Maintenance/docs
Boomatang authored Dec 4, 2023
2 parents 856ec31 + 66dfdac commit adff831
Showing 3 changed files with 21 additions and 32 deletions.
2 changes: 1 addition & 1 deletion doc/topologies.md
@@ -30,7 +30,7 @@ the accuracy loss is going to be negligible.
#### Set up

In order to try active-active replication, you can follow this [tutorial from
-RedisLabs](https://docs.redislabs.com/latest/rs/getting-started/getting-started-active-active/).
+RedisLabs](https://docs.redislabs.com/latest/rs/databases/active-active/get-started/).

## Disk

11 changes: 5 additions & 6 deletions limitador-server/README.md
@@ -3,7 +3,7 @@
[![Docker Repository on Quay](https://quay.io/repository/kuadrant/limitador/status
"Docker Repository on Quay")](https://quay.io/repository/kuadrant/limitador)

-By default, Limitador starts the HTTP server in `localhost:8080` and the grpc
+By default, Limitador starts the HTTP server in `localhost:8080`, and the grpc
service that implements the Envoy Rate Limit protocol in `localhost:8081`. That
can be configured with these ENVs: `ENVOY_RLS_HOST`, `ENVOY_RLS_PORT`,
`HTTP_API_HOST`, and `HTTP_API_PORT`.
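For example, to bind both servers on all interfaces instead of `localhost`, the variables above could be set before starting the server (the values here are illustrative, not defaults beyond those stated above):

```shell
# Illustrative overrides; without them the defaults from the README apply
# (HTTP API on localhost:8080, Envoy RLS gRPC service on localhost:8081).
export HTTP_API_HOST=0.0.0.0
export HTTP_API_PORT=8080
export ENVOY_RLS_HOST=0.0.0.0
export ENVOY_RLS_PORT=8081
limitador-server ./examples/limits.yaml
```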
@@ -54,10 +54,9 @@ each of the storages.
The OpenAPI spec of the HTTP service is
[here](docs/http_server_spec.json).

-Limitador has to be started with a YAML file that has some limits defined. There's an [example
-file](examples/limits.yaml) that allows 10 requests per minute
-and per `user_id` when the HTTP method is `"GET"` and 5 when it is a `"POST"`. You can
-run it with Docker (replace `latest` with the version you want):
+Limitador has to be started with a YAML file that has some limits defined.
+There's an [example file](https://github.com/Kuadrant/limitador/blob/main/limitador-server/examples/limits.yaml) that allows 10 requests per minute and per `user_id` when the HTTP method is `"GET"` and 5 when it is a `"POST"`.
+You can run it with Docker (replace `latest` with the version you want):
```bash
docker run --rm --net=host -it -v $(pwd)/examples/limits.yaml:/home/limitador/my_limits.yaml:ro quay.io/kuadrant/limitador:latest limitador-server /home/limitador/my_limits.yaml
```
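For reference, a limits file matching that description might look roughly like the following. This is a sketch based on the text above; the `namespace` value and the exact condition syntax are assumptions, and the linked example file is authoritative:

```yaml
- namespace: test_namespace
  max_value: 10
  seconds: 60
  conditions:
    - "req.method == 'GET'"
  variables:
    - user_id

- namespace: test_namespace
  max_value: 5
  seconds: 60
  conditions:
    - "req.method == 'POST'"
  variables:
    - user_id
```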
@@ -68,7 +67,7 @@ cargo run --release --bin limitador-server ./examples/limits.yaml
```

If you want to use Limitador with Envoy, there's a minimal Envoy config for
-testing purposes [here](examples/envoy.yaml). The config
+testing purposes [here](https://github.com/Kuadrant/limitador/blob/main/limitador-server/examples/envoy.yaml). The config
forwards the "userid" header and the request method to Limitador. It assumes
that there's an upstream API deployed on port 1323. You can use
[echo](https://github.com/labstack/echo), for example.
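The part of such a config that forwards those two values is the route's rate limit actions. A minimal sketch using Envoy's `request_headers` actions follows; the descriptor keys (`user_id`, `req.method`) are assumptions here, and the linked `envoy.yaml` is authoritative:

```yaml
rate_limits:
  - actions:
      # Forward the "userid" request header as a descriptor entry.
      - request_headers:
          header_name: "userid"
          descriptor_key: "user_id"
      # Forward the HTTP method via the ":method" pseudo-header.
      - request_headers:
          header_name: ":method"
          descriptor_key: "req.method"
```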
40 changes: 15 additions & 25 deletions limitador-server/kubernetes/README.md
@@ -1,9 +1,9 @@
<!-- omit in toc -->
# Kubernetes

The purpose of this documentation is to deploy a sample application, published via an AWS ELB, that is rate limited at the infrastructure level. An envoyproxy sidecar container contacts a ratelimit service (limitador), which allows or rejects each request depending on whether it is within the permitted limits.

There are mainly two recommended ways of using limitador in kubernetes:

1. An ingress based on envoyproxy that contacts the limitador ratelimit service before forwarding (or not) the request to the application
1. An envoyproxy sidecar container living in the application pod that contacts the limitador ratelimit service before forwarding (or not) the request to the main application container in the same pod

@@ -16,17 +16,6 @@ This is the network diagram of the deployed example:

![Ratelimit](ratelimit.svg)

-<!-- omit in toc -->
-# Table of Contents
-- [Components](#components)
-  - [Mandatory](#mandatory)
-  - [Optional](#optional)
-- [K8s deployment](#k8s-deployment)
-- [Monitoring](#monitoring)
-  - [Prometheus](#prometheus)
-  - [Grafana dashboard](#grafana-dashboard)
-- [Benchmarking](#benchmarking)

## Components

In order to run that ratelimit test, you need to deploy a few components. Some of them are mandatory, and a few are optional:
@@ -203,18 +192,19 @@ Status code distribution:
[200] 60046 responses
[429] 11932 responses
```
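As a quick sanity check, the figures reported above are mutually consistent; this is plain arithmetic on the numbers from the run:

```python
# Response counts from the benchmark output above.
ok, limited = 60046, 11932          # HTTP 200 and HTTP 429 responses
total = ok + limited

print(total)                         # 71978 responses in total
print(f"{limited / total:.1%}")      # share of requests that were rate limited
print(f"{total / 1192.2171:.1f}s")   # implied test duration at the reported rps
```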
-* We can see that:
-  - Client could send 1192.2171rps (about 1200rps)
-  - 60046 requests (about 60000) were OK (HTTP 200)
-  - 11932 requests (about 12000) were limited (HTTP 429)
-  - Average latency (since the request goes out from the client to AWS ELB, k8s node, envoyproxy container, limitador+redis, kuar app container) is 10ms
-
-* In addition, if we do a longer test with 5 minutes traffic for example, you can check with the grafana dashboard how these requests are processed by envoyproxy sidecar container of kuard pods and limitador pods:
-  - **Kuard Envoyproxy Sidecar Metrics**:
-    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
-    - Each envoyproxy sidecar of each kuard pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is not 100% perfect, caused by random iptables forwarding when using a k8s service

+- We can see that:
+  - Client could send 1192.2171rps (about 1200rps)
+  - 60046 requests (about 60000) were OK (HTTP 200)
+  - 11932 requests (about 12000) were limited (HTTP 429)
+  - Average latency (since the request goes out from the client to AWS ELB, k8s node, envoyproxy container, limitador+redis, kuar app container) is 10ms
+
+- In addition, if we do a longer test with 5 minutes traffic for example, you can check with the grafana dashboard how these requests are processed by envoyproxy sidecar container of kuard pods and limitador pods:
+  - **Kuard Envoyproxy Sidecar Metrics**:
+    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
+    - Each envoyproxy sidecar of each kuard pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is not 100% perfect, caused by random iptables forwarding when using a k8s service
![Kuard Envoyproxy Sidecar Metrics](kuard-envoyproxy-sidecar-metrics-dashboard-screenshot.png)
-  - **Limitador Metrics**:
-    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
-    - Each limitador pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is perfect thanks to using a headless service with GRPC connections
+- **Limitador Metrics**:
+  - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
+  - Each limitador pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is perfect thanks to using a headless service with GRPC connections
![Limitador Metrics](limitador-metrics-dashboard-screenshot.png)

