Storage config #48

Merged: 8 commits into main from storage-config on Sep 27, 2022
Conversation

@didierofrivia (Member) commented Sep 14, 2022

Closes #7

This PR makes it possible to configure the storage that the Limitador instance uses to store the limits counters.

The storage config options, in this case for both Redis and Redis cached, should be stored in the URL data field of a K8s Secret. It is kept in a Secret because the URL may include a user/password pair. The Secret object reference is then included in the Limitador config.
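For illustration only, here is a minimal sketch of how an operator could resolve such a configSecretRef into the Redis URL, assuming a controller-runtime client; the function name and package are hypothetical, not this PR's actual code:

package limitador // hypothetical package, for illustration only

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// redisURLFromSecret is a sketch: it fetches the referenced Secret and reads
// the URL data field that holds the (possibly credentialed) Redis URL.
func redisURLFromSecret(ctx context.Context, cl client.Client, ref types.NamespacedName) (string, error) {
	secret := &v1.Secret{}
	if err := cl.Get(ctx, ref, secret); err != nil {
		return "", err
	}
	url, ok := secret.Data["URL"]
	if !ok {
		return "", fmt.Errorf("secret %s is missing the URL data field", ref)
	}
	return string(url), nil
}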

Verification Steps

For the redis option:

1. Set up the local cluster:
make local-setup
2. Create a k8s Secret holding the following:
kubectl apply -f -<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: redisconfig
stringData:
  URL: redis://127.0.0.1/a
type: Opaque
EOF
3. Create a Limitador CR with the following config:
kubectl apply -f - <<EOF
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador-sample
spec:
  storage:
    redis:
      configSecretRef:
        name: redisconfig
        namespace: default
  limits:
    - conditions: ["get-toy == yes"]
      max_value: 2
      namespace: toystore-app
      seconds: 30
      variables: []
EOF
4. Wait until all deployments are ready:
kubectl wait --timeout=300s --for=condition=Available deployments --all
5. Inspect the Limitador deployment container for the correct command being passed:
kubectl get deployment/limitador-sample -o yaml | yq '.spec.template.spec.containers[0].command'


- limitador-server
- /home/limitador/etc/limitador-config.yaml
- redis
- redis://127.0.0.1/a
6. Profit!

Note: if Spec.Storage is not set, the default storage option is memory.

@didierofrivia didierofrivia force-pushed the storage-config branch 2 times, most recently from d728cdf to 5768d4c Compare September 16, 2022 15:45
@didierofrivia didierofrivia marked this pull request as ready for review September 16, 2022 16:41
@alexsnaps (Member) commented:

As a side note, there is redis_cached as well, which has further configuration options, but I don't know whether I'd add it:

USAGE:
    limitador-server <LIMITS_FILE> redis_cached [OPTIONS] <URL>

ARGS:
    <URL>    Redis URL to use

OPTIONS:
        --ttl <TTL>               TTL for cached counters in milliseconds [default: 5000]
        --ratio <ratio>           Ratio to apply to the TTL from Redis on cached counters [default:
                                  10]
        --flush-period <flush>    Flushing period for counters in milliseconds [default: 1000]
        --max-cached <max>        Maximum amount of counters cached [default: 10000]

@didierofrivia didierofrivia force-pushed the storage-config branch 2 times, most recently from 367a1a7 to c90597d Compare September 19, 2022 16:40
@eguzki (Contributor) commented Sep 23, 2022

Looking good. Missing some docs, including examples of a redis storage CR and a redis-cached storage CR.

params = append(
	params,
	fmt.Sprintf(
		"--%s %d",
Contributor:
You are assuming that they will always be integers. What if we add a string?

I'm OK with this implementation, but I'll let you decide what to do.

Member Author (@didierofrivia):
Yup, only because they're all defined as type int. If in the future we add another option type, we could consider type checking.
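For what it's worth, a tiny sketch of what that future type checking could look like; the names are illustrative, not code from this PR:

package limitador // illustrative only

import (
	"fmt"
	"reflect"
)

// formatOption is a sketch: it formats a CLI flag based on the value's kind
// instead of assuming every option is an int.
func formatOption(name string, value reflect.Value) string {
	switch value.Kind() {
	case reflect.Int, reflect.Int32, reflect.Int64:
		return fmt.Sprintf("--%s %d", name, value.Int())
	case reflect.String:
		return fmt.Sprintf("--%s %s", name, value.String())
	default:
		return fmt.Sprintf("--%s %v", name, value.Interface())
	}
}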

return s.RedisCached.ConfigSecretRef
}

func (s *Storage) Config(url string) []string {
Contributor:
Minor thing: what about having one method returning []string for each type?

Member Author (@didierofrivia):
I thought of this too, but in terms of how consumers of the "Storage API" use it, this way it's simpler: they only interact with Storage and not the underlying implementation (Redis, RedisCached, Memory, etc.). The controller only needs to request the storage config from the Storage API. Does it make sense?
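Just to illustrate the trade-off being discussed, a rough sketch of a single Config entry point that dispatches on whichever storage type is set; the types and field names below are simplified stubs, not the actual API in this PR:

package limitador // illustrative stubs only

import "fmt"

type Redis struct{}

type RedisCachedOptions struct {
	TTL, Ratio, FlushPeriod, MaxCached *int
}

type RedisCached struct {
	Options *RedisCachedOptions
}

type Storage struct {
	Redis       *Redis
	RedisCached *RedisCached
}

// Config hides which backend is configured from the caller: the controller
// only asks the Storage API for the command-line arguments to append.
func (s *Storage) Config(url string) []string {
	switch {
	case s.Redis != nil:
		return []string{"redis", url}
	case s.RedisCached != nil:
		cmd := []string{"redis_cached"}
		if o := s.RedisCached.Options; o != nil && o.TTL != nil {
			cmd = append(cmd, fmt.Sprintf("--ttl %d", *o.TTL))
			// ... same pattern for Ratio, FlushPeriod and MaxCached
		}
		return append(cmd, url)
	default:
		return []string{"memory"}
	}
}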

@eguzki (Contributor) left a comment:
I am sorry to say I found two issues:

1. You need to grant permissions to manage secrets. I would add all types of permissions, although to be fair, watch, list and get should be enough:
git diff
diff --git a/bundle/manifests/limitador-operator.clusterserviceversion.yaml b/bundle/manifests/limitador-operator.clusterserviceversion.yaml
index 1e3be6a..2747e3c 100644
--- a/bundle/manifests/limitador-operator.clusterserviceversion.yaml
+++ b/bundle/manifests/limitador-operator.clusterserviceversion.yaml
@@ -63,6 +63,7 @@ spec:
           - ""
           resources:
           - configmaps
+          - secrets
           verbs:
           - create
           - delete
diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml
index 0eed802..43b46d3 100644
--- a/config/rbac/role.yaml
+++ b/config/rbac/role.yaml
@@ -10,6 +10,7 @@ rules:
   - ""
   resources:
   - configmaps
+  - secrets
   verbs:
   - create
   - delete
diff --git a/controllers/limitador_controller.go b/controllers/limitador_controller.go
index 4fd68da..dd7611b 100644
--- a/controllers/limitador_controller.go
+++ b/controllers/limitador_controller.go
@@ -19,9 +19,10 @@ package controllers
 import (
        "context"
        "fmt"
-       "k8s.io/apimachinery/pkg/types"
        "reflect"
 
+       "k8s.io/apimachinery/pkg/types"
+
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
@@ -47,7 +48,7 @@ type LimitadorReconciler struct {
 //+kubebuilder:rbac:groups=limitador.kuadrant.io,resources=limitadors/finalizers,verbs=update
 //+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;delete
 //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;delete
-//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;delete
+//+kubebuilder:rbac:groups="",resources=configmaps;secrets,verbs=get;list;watch;create;update;delete
 
 func (r *LimitadorReconciler) Reconcile(eventCtx context.Context, req ctrl.Request) (ctrl.Result, error) {
        logger := r.Logger().WithValues("limitador", req.NamespacedName)
2. It panics when redis-cached is being defined:
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: redisconfig
stringData:
  URL: redis://127.0.0.1/a
type: Opaque
---
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador
spec:
  storage:
    redis-cached:
      configSecretRef:
        name: redisconfig
      options:
        ttl: 0
        ratio: 1001
  listener:
    http:
      port: 8080
    grpc:
      port: 8081
  limits:
    - conditions: ["get_toy == 'yes'"]
      max_value: 2
      namespace: toystore-app
      seconds: 30
      variables: []
EOF

The controller panics saying:

2022-09-27T08:01:15.502Z	DEBUG	limitador	Reconciling Limitador	{"limitador": "default/limitador"}
2022-09-27T08:01:15.602Z	INFO	limitador	create object	{"limitador": "default/limitador", "GKV": "/v1, Kind=Service", "name": "limitador-limitador", "namespace": "default"}
2022-09-27T08:01:15.608Z	DEBUG	limitador	reconcile service	{"limitador": "default/limitador", "error": null}
panic: reflect: call of reflect.Value.NumField on ptr Value

goroutine 279 [running]:
reflect.flag.mustBe(...)
	/usr/local/go/src/reflect/value.go:221
reflect.Value.NumField(0x1647de0, 0xc000308800, 0x16, 0x1a21e98)
	/usr/local/go/src/reflect/value.go:1369 +0xc5
github.com/kuadrant/limitador-operator/api/v1alpha1.(*Storage).Config(0xc00049ab70, 0xc0007180c0, 0x13, 0xc0007180c0, 0x13, 0xc0003f4000)
	/workspace/api/v1alpha1/limitador_types.go:157 +0x1dc
github.com/kuadrant/limitador-operator/pkg/limitador.storageConfig(...)
	/workspace/pkg/limitador/k8s_objects.go:209
github.com/kuadrant/limitador-operator/pkg/limitador.deploymentContainerCommand(0xc00049ab70, 0xc0004ec8c0, 0x17fa991, 0x3, 0xc0001cd708)
	/workspace/pkg/limitador/k8s_objects.go:202 +0x2cd
github.com/kuadrant/limitador-operator/pkg/limitador.LimitadorDeployment(0xc0002eb800, 0xc0004ec8c0, 0x1a15e68)
	/workspace/pkg/limitador/k8s_objects.go:94 +0x2e5
github.com/kuadrant/limitador-operator/controllers.(*LimitadorReconciler).Reconcile(0xc0002ae150, 0x1a04878, 0xc0003a0660, 0xc00067e929, 0x7, 0xc00067e920, 0x9, 0xc0003a0660, 0xc0003a05d0, 0xc000733db0, ...)
	/workspace/controllers/limitador_controller.go:95 +0x50d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0002cca00, 0x1a04878, 0xc0003a05d0, 0xc00067e929, 0x7, 0xc00067e920, 0x9, 0xc0003a0500, 0x0, 0x0, ...)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114 +0x247
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0002cca00, 0x1a047d0, 0xc000446380, 0x1690f60, 0xc000308700)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311 +0x305
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0002cca00, 0x1a047d0, 0xc000446380, 0xc000113400)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2(0xc0007226c0, 0xc0002cca00, 0x1a047d0, 0xc000446380)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x6b
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x425
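For context on that trace: reflect.Value.NumField may only be called on a struct value, so calling it on a pointer (e.g. an options struct reference) panics exactly like this. A minimal sketch of the usual guard, not necessarily the fix that landed in this PR:

package main

import (
	"fmt"
	"reflect"
)

// optionFieldNames is a sketch: it lists the struct field names of an options
// value, dereferencing pointers first so reflect.Value.NumField is never
// called on a ptr Value (the panic seen above).
func optionFieldNames(options interface{}) []string {
	v := reflect.ValueOf(options)
	if v.Kind() == reflect.Ptr {
		if v.IsNil() {
			return nil
		}
		v = v.Elem()
	}
	if v.Kind() != reflect.Struct {
		return nil
	}
	names := make([]string, 0, v.NumField())
	for i := 0; i < v.NumField(); i++ {
		names = append(names, v.Type().Field(i).Name)
	}
	return names
}

func main() {
	type opts struct{ TTL, Ratio int }
	fmt.Println(optionFieldNames(&opts{TTL: 0, Ratio: 1001})) // [TTL Ratio]
}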

* No longer passing empty strings
* Adding Redis Cached and its options
* Removing Infinispan
* Behaviour changed, no Storage config means in "memory"
@eguzki (Contributor) left a comment:
Verification steps are working for me.

Tried with redis-cached storage instead:

apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador
spec:
  storage:
    redis-cached:
      configSecretRef:
        name: redisconfig
      options:
        ttl: 0
        ratio: 1001
  listener:
    http:
      port: 8080
    grpc:
      port: 8081
  limits:
    - conditions: ["get_toy == 'yes'"]
      max_value: 2
      namespace: toystore-app
      seconds: 30
      variables: []

@didierofrivia didierofrivia merged commit 31b453e into main Sep 27, 2022
@didierofrivia didierofrivia deleted the storage-config branch September 27, 2022 16:26
Successfully merging this pull request may close these issues.

Allow to configure the limits storage in Limitador