☂️ Gardener Horizontal & Vertical Pod Autoscaler, a.k.a. HVPA (v2) #30
Labels: kind/enhancement, kind/epic, kind/roadmap, lifecycle/rotten
Feature (What would you like to be added?):
Summarise the roadmap for HVPA with links to the corresponding issues.

Motivation (Why is this needed?):
A central place to collect the roadmap as well as the progress.

Approach/Hint to implement the solution (optional):
General Principles

- Prefer relying on the update mechanism that the `targetRefs` (`kube-apiserver` or ingress etc.) already implement. The VPA approach duplicates/overrides the update mechanism (such as rolling updates etc.) that the upstream `targetRefs` might have implemented.
- Support a stabilisation window so that scaling recommendations are not applied too frequently.
- … for components which experience disruption while scaling (mainly `etcd`, but to a lesser extent `kube-apiserver` as well for `WATCH` requests). This could be an alternative or a complement to the stabilisation window mentioned above.
- Support update policies (`Off`/`Auto`/`ScaleUp`); see the sketch after this list. `ScaleUp` would only apply scale up and not scale down (vertically or horizontally). This is again from the perspective of components which experience disruption while scaling (mainly `etcd`, but to a lesser extent `kube-apiserver` as well for `WATCH` requests). For such components, a `ScaleUp` update policy ensures that the component can scale up (with some disruption) automatically to meet the workload requirement, but does not scale down, to avoid unnecessary disruption. This would mean over-provisioning for workloads that experience a short upsurge.
- … `targetRef`.
- … `targetRefs`.
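As a rough illustration of how these principles could surface in the API, here is a minimal sketch of an HVPA resource. The field names and layout are assumptions made for illustration (loosely modelled on a v1alpha1-style CRD), not a definitive API.

```yaml
# Sketch only: field names/layout are illustrative assumptions,
# not the actual HVPA API.
apiVersion: autoscaling.k8s.io/v1alpha1
kind: Hvpa
metadata:
  name: kube-apiserver
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kube-apiserver
  hpa:
    updatePolicy:
      updateMode: Auto          # Off / Auto / ScaleUp
  vpa:
    updatePolicy:
      updateMode: ScaleUp       # scale up only, to limit disruption
    stabilizationDuration: 3m   # avoid applying recommendations too frequently
  weightBasedScalingIntervals:
  - vpaWeight: 100              # 100 = vertical only, 0 = horizontal only
    startReplicaCount: 1
    lastReplicaCount: 3
```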
Tasks
- `Auto` update policy for HPA updates. HPA takes care of both recommendation and updates for horizontal scaling. This implementation of the `Auto` update policy is temporary, pending "Evaluate options for controlling HPA-based scaling" (#7).
- `Off`, `Auto` and `ScaleUp` update policies for VPA updates. VPA is used only for recommendation and not for updates. Fixed with "HVPA now supports UpdateMode `off` for HPA and VPA" (#19).
- `Off` update policy for HPA updates. Implemented by not deploying/deleting the HPA resource. This implementation of the `Off` update policy is temporary, pending "Evaluate options for controlling HPA-based scaling" (#7).
- `0` and `100` for VPA weight. Fixed with "HVPA now supports UpdateMode `off` for HPA and VPA" (#19).
- `0` or `100` for HPA weight.
- `Auto` update policy (i.e. enable scale down) for `kube-apiserver` to reduce the cost implication. The `ScaleUp` update policy would continue for `etcd` for the time being because scaling down could be disruptive. Prio 1.
- If `OOMKill` or CPU overload happens, override the stabilisation window as well as the HPA weight to apply the weighted VPA recommendation. Prio 1.
- … `targetRef`. Prio 2.
- `Scale` subresource in the HVPA CRD to control HPA updates fully and use HPA only for recommendation (see the CRD sketch below). Pending "Evaluate options for controlling HPA-based scaling" (#7). Prio 3.
- `ScaleUp` update policy for HPA updates. Prio 3.
- `Off` update policy implementation for HPA that deploys/reconciles the HPA resource even in `Off` mode: retain the recommendations but block the updates. Prio 3.
- `0` to `100` as weight for HPA. Prio 3.
- A KEP for a `Resources` subresource (per container) along the lines of the `Scale` subresource. This can then be used to implement the support for custom resources as `targetRef`. Prio 4.
- … `targetRef`. Prio 5.
- … `targetRef` to avoid crash. Prio 5.
- Custom resources as `targetRef`. If the KEP for the `Resources` subresource is not yet accepted, then this could be implemented using annotations to supply the desired metadata (see the annotation sketch below). Prio 6.
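For the `Scale` subresource task above, here is a minimal sketch of what declaring a `scale` subresource on the HVPA CRD could look like, so that an HPA can target the `Hvpa` object itself while the HVPA controller stays in charge of propagating replica changes to the `targetRef`. The group, version, and replica paths shown are illustrative assumptions, not the shipped CRD.

```yaml
# Sketch only: an HVPA CRD exposing the standard `scale` subresource.
# Group, version and replica paths are illustrative assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: hvpas.autoscaling.k8s.io
spec:
  group: autoscaling.k8s.io
  names:
    kind: Hvpa
    plural: hvpas
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      scale:
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
```

With this, the HPA's `scaleTargetRef` could point at the `Hvpa` resource, so HPA writes its desired replica count to the HVPA object only; the controller can then apply, delay, or ignore that value, which would give a natural hook for the `ScaleUp` and `Off` HPA update policies listed above.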
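For the Prio 6 task, a hypothetical sketch of the annotation fallback: a custom resource serving as `targetRef` carries an annotation that tells the autoscaler where its per-container resources live, pending the `Resources` subresource KEP. The kind, the annotation key, and its interpretation are all invented here for illustration.

```yaml
# Hypothetical only: `FooApp` and the annotation key are invented to
# illustrate supplying resources metadata until a `resources`
# subresource exists.
apiVersion: example.io/v1alpha1
kind: FooApp
metadata:
  name: my-app
  annotations:
    # JSONPath to the per-container resources inside this resource.
    hvpa.example.io/containers-path: ".spec.template.spec.containers"
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
```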