From 7a11b7db40e6033ef96a49201c7b3b5d1cfb0575 Mon Sep 17 00:00:00 2001
From: Jim Fitzpatrick
Date: Wed, 8 Nov 2023 11:05:51 +0000
Subject: [PATCH 1/3] Fix broken links

The links are fixed so that they render correctly on docs.kuadrant.io.
---
 limitador-server/README.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/limitador-server/README.md b/limitador-server/README.md
index 9ada49de..0fda9add 100644
--- a/limitador-server/README.md
+++ b/limitador-server/README.md
@@ -3,7 +3,7 @@
[![Docker Repository on Quay](https://quay.io/repository/kuadrant/limitador/status "Docker Repository on Quay")](https://quay.io/repository/kuadrant/limitador)

-By default, Limitador starts the HTTP server in `localhost:8080` and the grpc
+By default, Limitador starts the HTTP server in `localhost:8080`, and the grpc
service that implements the Envoy Rate Limit protocol in `localhost:8081`. That
can be configured with these ENVs: `ENVOY_RLS_HOST`, `ENVOY_RLS_PORT`,
`HTTP_API_HOST`, and `HTTP_API_PORT`.
@@ -54,10 +54,9 @@ each of the storages.

The OpenAPI spec of the HTTP service is [here](docs/http_server_spec.json).

-Limitador has to be started with a YAML file that has some limits defined. There's an [example
-file](examples/limits.yaml) that allows 10 requests per minute
-and per `user_id` when the HTTP method is `"GET"` and 5 when it is a `"POST"`. You can
-run it with Docker (replace `latest` with the version you want):
+Limitador has to be started with a YAML file that has some limits defined.
+There's an [example file](https://github.com/Kuadrant/limitador/blob/main/limitador-server/examples/limits.yaml) that allows 10 requests per minute and per `user_id` when the HTTP method is `"GET"` and 5 when it is a `"POST"`.
+You can run it with Docker (replace `latest` with the version you want):
```bash
docker run --rm --net=host -it -v $(pwd)/examples/limits.yaml:/home/limitador/my_limits.yaml:ro quay.io/kuadrant/limitador:latest limitador-server /home/limitador/my_limits.yaml
```
@@ -68,7 +67,7 @@ cargo run --release --bin limitador-server ./examples/limits.yaml
```

If you want to use Limitador with Envoy, there's a minimal Envoy config for
-testing purposes [here](examples/envoy.yaml). The config
+testing purposes [here](https://github.com/Kuadrant/limitador/blob/main/limitador-server/examples/envoy.yaml). The config
forwards the "userid" header and the request method to Limitador. It assumes
that there's an upstream API deployed on port 1323. You can use
[echo](https://github.com/labstack/echo), for example.

From ff267febbd4d1fb71a2b158c88b26c471401fa40 Mon Sep 17 00:00:00 2001
From: Jim Fitzpatrick
Date: Wed, 8 Nov 2023 11:31:32 +0000
Subject: [PATCH 2/3] Add Kubernetes guide

This guide was missing from docs.kuadrant.io.
Some edits were done to get the page to render correctly.
---
 limitador-server/kubernetes/README.md | 40 ++++++++++-----------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/limitador-server/kubernetes/README.md b/limitador-server/kubernetes/README.md
index 2bcbaab7..a920869c 100644
--- a/limitador-server/kubernetes/README.md
+++ b/limitador-server/kubernetes/README.md
@@ -1,9 +1,9 @@
-
# Kubernetes

The purpose of this documentation is to deploy a sample application published via AWS ELB that will be rate limited at the infrastructure level, thanks to the use of an envoyproxy sidecar container that is in charge of contacting a ratelimit service (limitador), which allows the request (or not) if it is within the permitted limits.

There are mainly two recommended ways of using limitador in kubernetes:
+
1. There is an ingress based on envoyproxy that contacts the limitador ratelimit service before forwarding (or not) the request to the application
1. There is an envoyproxy sidecar container living in the application pod that contacts the limitador ratelimit service before forwarding (or not) the request to the main application container in the same pod
@@ -16,17 +16,6 @@ This is the network diagram of the deployed example:

![Ratelimit](ratelimit.svg)

-
-# Table of Contents
-- [Components](#components)
-  - [Mandatory](#mandatory)
-  - [Optional](#optional)
-- [K8s deployment](#k8s-deployment)
-- [Monitoring](#monitoring)
-  - [Prometheus](#prometheus)
-  - [Grafana dashboard](#grafana-dashboard)
-- [Benchmarking](#benchmarking)
-
## Components

In order to run that ratelimit test, you need to deploy a few components. Some of them are mandatory, and a few are optional:
@@ -203,18 +192,19 @@ Status code distribution:
  [200] 60046 responses
  [429] 11932 responses
```
-* We can see that:
-  - Client could send 1192.2171rps (about 1200rps)
-  - 60046 requests (about 60000) were OK (HTTP 200)
-  - 11932 requests (about 12000) were limited (HTTP 429)
-  - Average latency (since the request goes out from the client to AWS ELB, k8s node, envoyproxy container, limitador+redis, kuard app container) is 10ms
-
-* In addition, if we do a longer test with 5 minutes traffic for example, you can check with the grafana dashboard how these requests are processed by the envoyproxy sidecar container of the kuard pods and the limitador pods:
-  - **Kuard Envoyproxy Sidecar Metrics**:
-    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
-    - Each envoyproxy sidecar of each kuard pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is not 100% perfect, caused by random iptables forwarding when using a k8s service
+
+- We can see that:
+  - Client could send 1192.2171rps (about 1200rps)
+  - 60046 requests (about 60000) were OK (HTTP 200)
+  - 11932 requests (about 12000) were limited (HTTP 429)
+  - Average latency (since the request goes out from the client to AWS ELB, k8s node, envoyproxy container, limitador+redis, kuard app container) is 10ms
+
+- In addition, if we do a longer test with 5 minutes traffic for example, you can check with the grafana dashboard how these requests are processed by the envoyproxy sidecar container of the kuard pods and the limitador pods:
+  - **Kuard Envoyproxy Sidecar Metrics**:
+    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
+    - Each envoyproxy sidecar of each kuard pod handles around half of the traffic: it permits around 500rps and limits around 100rps. The balance between pods is not 100% perfect, caused by random iptables forwarding when using a k8s service
![Kuard Envoyproxy Sidecar Metrics](kuard-envoyproxy-sidecar-metrics-dashboard-screenshot.png)
  - **Limitador Metrics**:
    - Globally it handles around 1200rps: it permits around 1krps and limits around 200rps
    - Each limitador pod handles around half of the traffic: it permits around 500rps and limits around 100rps.
    The balance between pods is perfect thanks to using a headless service with GRPC connections
![Limitador Metrics](limitador-metrics-dashboard-screenshot.png)

From 66dfdac200018986878191b9244981dc6101af56 Mon Sep 17 00:00:00 2001
From: Jim Fitzpatrick
Date: Mon, 4 Dec 2023 11:02:27 +0000
Subject: [PATCH 3/3] Update link from PR comments

---
 doc/topologies.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/topologies.md b/doc/topologies.md
index 9eec2cca..325d5d89 100644
--- a/doc/topologies.md
+++ b/doc/topologies.md
@@ -30,7 +30,7 @@ the accuracy loss is going to be negligible.

#### Set up

In order to try active-active replication, you can follow this [tutorial from
-RedisLabs](https://docs.redislabs.com/latest/rs/getting-started/getting-started-active-active/).
+RedisLabs](https://docs.redislabs.com/latest/rs/databases/active-active/get-started/).

## Disk
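For context on patch 1/3 above: the README it edits points at an `examples/limits.yaml` that allows 10 requests per minute per `user_id` for `GET` and 5 for `POST`. Below is a minimal sketch of a limits file expressing those numbers. It is an illustration only, assuming the list-of-limits YAML format Limitador reads; the namespace value and the exact condition and variable syntax are assumptions, and the linked `examples/limits.yaml` in the repository remains the authoritative version.

```yaml
# Hypothetical sketch of a limits file matching the behaviour described in the
# README: 10 requests per minute per user_id for GET, and 5 for POST.
# The namespace and the condition syntax are assumptions; check the repository's
# examples/limits.yaml for the authoritative version.
- namespace: test_namespace
  max_value: 10
  seconds: 60
  conditions:
    - "req.method == 'GET'"
  variables:
    - user_id
- namespace: test_namespace
  max_value: 5
  seconds: 60
  conditions:
    - "req.method == 'POST'"
  variables:
    - user_id
```

A file like this is what the `docker run` and `cargo run` commands shown in patch 1/3 mount into the container or pass as the limits-file argument.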