
Is there a plan to add the ability to manually adjust the number of replicas on nginx controllers in app-routing-system? #177

sohwaje opened this issue Mar 14, 2024 · 8 comments


sohwaje commented Mar 14, 2024

I modified the nginx deployment managed by the "aks-app-routing-operator" to adjust the number of replicas, but it was impossible.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2024-03-12T08:23:16Z"
  generation: 15
  labels:
    app.kubernetes.io/component: ingress-controller
    app.kubernetes.io/managed-by: aks-app-routing-operator
    app.kubernetes.io/name: nginx
  name: nginx
  namespace: app-routing-system
  ownerReferences:
  - apiVersion: approuting.kubernetes.azure.com/v1alpha1
    controller: true
    kind: NginxIngressController
    name: default
    uid: 6ea3e476-059e-44b3-9f8d-86655ee2a5dc
  resourceVersion: "113876214"
  uid: 37ea9332-baaf-47cf-aef4-f69d0ff2abef
spec:
  progressDeadlineSeconds: 600
  replicas: 2     <=============== I want to revise this

Currently, two nginx controller replicas seem to be the default.

kubectl get po -n app-routing-system
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5d4cbcf56b-nrt46   1/1     Running   0          40h
nginx-5d4cbcf56b-swq58   1/1     Running   0          40h

Is there a plan to add the ability to manually adjust the number of replicas on the nginx controller?
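
For illustration, roughly the sequence I attempted (the scale command is a sketch; the operator reconciles the replica count straight back):

# try to scale the managed deployment directly
kubectl scale deployment nginx -n app-routing-system --replicas=3

# shortly afterwards the operator has restored the managed value
kubectl get deployment nginx -n app-routing-system -o jsonpath='{.spec.replicas}'
# => 2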


MXClyde commented Mar 14, 2024

I have the same need for a current project, in addition to being able to specify the resource requests on the pods (esp. CPU, which has a rather high default).


OliverMKing (Collaborator) commented Mar 14, 2024

Yes, you will be able to adjust the min/max replicas. You will also have a mechanism to adjust how aggressively the HPA scales. Expect this very soon.

> I have the same need for a current project, in addition to being able to specify the resource requests on the pods (esp. CPU, which has a rather high default).

Could you help us understand why you want to tweak the resource requests?


MXClyde commented Mar 14, 2024

> Yes, you will be able to adjust the min/max replicas. You will also have a mechanism to adjust how aggressively the HPA scales. Expect this very soon.

This is great!

> Could you help us understand why you want to tweak the resource requests?

We are building an app platform on top of an autoscaling AKS cluster. In its smallest (most cost-effective) incarnation we'd like to offer our customers the ability to run an app on a single-node 2-core cluster. Because the hardcoded nginx CPU request is 0.5 core, it is difficult to make our runtime fit alongside 2 nginx replicas and the standard AKS containers. On second thought, however, having the possibility of 1 nginx replica will likely suffice; I can try that first when it is released.


Duske commented Jul 16, 2024

Hey @OliverMKing, is there a roadmap for when this feature will be rolled out?
We currently want to set up a minimal environment for testing various pull requests, but on a small VM with 2 vCPUs those 2 fixed controller replicas claim 2 × 500m CPU for scheduling, leaving only 1000m for all the other remaining pods. I didn't manage to override this setting; the operator always changes it back ♻️
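To see the squeeze, you can inspect what is already claimed on the node (hypothetical node name; output abbreviated):

# show the node's currently allocated requests/limits
kubectl describe node <node-name> | grep -A 5 "Allocated resources"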


Duske commented Jul 17, 2024

Nevermind, just found out:

> It's released; you just need to upgrade to Kubernetes version 1.30 (or higher) to use it. Addon release policies mean we release new features behind new preview K8s versions first.

#244 (comment)
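
To check whether a cluster is already on an eligible version (assuming the standard Azure CLI; the resource group and cluster name are placeholders):

# print the cluster's current Kubernetes version
az aks show --resource-group <resource-group> --name <cluster-name> --query currentKubernetesVersion -o tsv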


polarn commented Sep 27, 2024

@Duske was the solution to just update the NginxIngressController CR? Or is it possible to provision this using the UI / az / terraform?


Yarkane commented Dec 16, 2024

Hello there,
I understand that it is now possible to manually set the number of replicas of the app routing controllers.

However, I agree with @MXClyde: 500m for each controller replica is huge, for example on a small development cluster. We would like to stay on app routing to keep the same environment as production, and simply tweak the resources used by the controller.

@OliverMKing Is there anything in preview or in development to tweak the CPU and memory requests?
An example from our dev cluster:

  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
  app-routing-system          external-dns-787d4788dd-7dlqf                           100m (5%)     100m (5%)   250Mi (4%)       250Mi (4%)     2d10h
  app-routing-system          nginx-85497484cc-8fnvl                                  500m (26%)    100m (5%)   127Mi (2%)       60Mi (1%)      2d10h


Duske commented Dec 16, 2024

> @Duske was the solution to just update the NginxIngressController CR? Or is it possible to provision this using the UI / az / terraform?

Sorry, I missed your message. We did it by applying a patched custom resource like this:

apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: default
spec:
  controllerNamePrefix: nginx
  ingressClassName: webapprouting.kubernetes.azure.com
  scaling:
    maxReplicas: 1
    minReplicas: 1

As long as you can somehow apply these settings via the UI, it should also work. With terraform this should definitely work, as it's configuration-driven.
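
For example, a minimal Terraform sketch of the same resource, assuming the hashicorp/kubernetes provider's kubernetes_manifest resource (field values mirror the YAML above):

# pins the managed nginx ingress controller to a single replica
resource "kubernetes_manifest" "nginx_ingress_controller_default" {
  manifest = {
    apiVersion = "approuting.kubernetes.azure.com/v1alpha1"
    kind       = "NginxIngressController"
    metadata = {
      name = "default"
    }
    spec = {
      controllerNamePrefix = "nginx"
      ingressClassName     = "webapprouting.kubernetes.azure.com"
      scaling = {
        minReplicas = 1
        maxReplicas = 1
      }
    }
  }
}

The same YAML can of course also be applied directly with kubectl apply -f.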
