☂️ Gardener ETCD Operator a.k.a. ETCD Druid #2
Labels:
- `kind/epic` (Large multi-story topic)
- `kind/roadmap` (Roadmap BLI)
- `lifecycle/rotten` (Nobody worked on this for 12 months; final aging stage)
- `priority/2` (Priority; lower number equals higher priority)
**Feature (What you would like to be added):**
Summarise the roadmap for `etcd-druid` with links to the corresponding issues.

**Motivation (Why is this needed?):**
A central place to collect the roadmap as well as the progress.

**Approach/Hint to implement the solution (optional):**

- Single-node `etcd` cluster
  - A `StatefulSet` (with `replicas: 1`) with the containers for `etcd` and `etcd-backup-restore`, the same way it is being done now (see the `StatefulSet` sketch after this list).
  - Move the `etcd` defragmentation schedule from the CRD to the `etcd-backup-restore` sidecar container.
- Multi-node `etcd` cluster with the `etcd` nodes within the same Kubernetes cluster
  - Provision the `etcd` nodes in the same Kubernetes cluster/namespace as the CRD instance.
  - Scale sub-resource implementation for the current CRD (see the scale sub-resource sketch after this list).
  - Add `etcd` learners/members during scale up, including quorum adjustment (see the member-management sketch after this list).
  - Remove `etcd` members during scale down, including quorum adjustment.
- Multi-node `etcd` cluster with the `etcd` nodes distributed across availability zones in the hosting Kubernetes cluster
- Multi-node `etcd` cluster with each `etcd` node in a different Kubernetes cluster
  - Each `etcd` node will be provisioned via a separate CRD instance in a different Kubernetes cluster, but these nodes will be configured to find each other to form an `etcd` cluster.
  - … the `etcd` cluster.
  - Add `etcd` learners/members during scale up, including quorum adjustment.
  - Remove `etcd` members during scale down, including quorum adjustment.
- Autoscaling the `etcd` cluster
  - `VerticalPodAutoscaler` supports multiple update policies, including `recreate`, `initial` and `off` (see the VPA sketch after this list).
  - The `recreate` policy is clearly not suitable for single-node `etcd` instances because of the implications of frequent, unpredictable and unmanaged downtime.
  - The `initial` policy does not make sense for `etcd`, considering the longer database verification time after a non-graceful shutdown.
  - For a single-node `etcd` instance, vertical scaling via the `VerticalPodAutoscaler` would always be disruptive because of the way scaling is done by VPA. It gives no opportunity to take action before the `etcd` pod(s) are disrupted for scaling.
  - `etcd`-specific steps to mitigate the disruption during (vertical) scaling if an alternative way is used to vertically scale a CRD instead of the individual `pods` directly.
- For a single-node `etcd` instance, updates would be disruptive.
  - `etcd`-specific steps to mitigate the disruption during updates.
- The memory requirement for database restoration of a single-node `etcd` instance is almost certain to be proportionate to the database size. However, the memory requirement for backup (full and delta) need not be proportionate to the database size at all. In fact, it is very realistic to expect the memory requirement for backup to be more or less independent of the database size.
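
The single-node item could look roughly like the sketch below, built with the upstream `k8s.io/api` Go types: a `StatefulSet` with `replicas: 1` and the `etcd` plus `etcd-backup-restore` containers in one pod. The function name, labels and images are placeholders, not the actual etcd-druid code.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// singleNodeEtcdStatefulSet sketches a single-node etcd: one StatefulSet with
// replicas: 1 and the etcd + backup-restore sidecar containers in the same pod.
// Names and images are placeholders.
func singleNodeEtcdStatefulSet(name, namespace string) *appsv1.StatefulSet {
	replicas := int32(1)
	labels := map[string]string{"app": "etcd", "instance": name}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: labels},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: name + "-peer",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "etcd", Image: "quay.io/coreos/etcd:v3.4.13"},
						{Name: "backup-restore", Image: "etcd-backup-restore:latest"}, // sidecar
					},
				},
			},
		},
	}
}
```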
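For the scale sub-resource item, a minimal sketch of the `subresources.scale` stanza expressed with the `apiextensions.k8s.io/v1` Go types; the JSON paths are assumptions about the Etcd CRD schema, not its actual shape.

```go
package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// etcdScaleSubresource sketches enabling the scale sub-resource on the Etcd CRD,
// so generic tooling (e.g. `kubectl scale`) can drive the replica count.
func etcdScaleSubresource() *apiextensionsv1.CustomResourceSubresources {
	labelSelectorPath := ".status.labelSelector" // assumed status field
	return &apiextensionsv1.CustomResourceSubresources{
		Scale: &apiextensionsv1.CustomResourceSubresourceScale{
			SpecReplicasPath:   ".spec.replicas",   // assumed spec field
			StatusReplicasPath: ".status.replicas", // assumed status field
			LabelSelectorPath:  &labelSelectorPath,
		},
	}
}
```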
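For the scale up/scale down items, a sketch of the quorum-friendly member flow offered by the etcd `clientv3` API (add as learner, promote once caught up, remove before deleting the pod). The function names are hypothetical and the retry/orchestration logic a real controller would need is omitted.

```go
package sketch

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// addMember joins a new node as a non-voting learner first, so the quorum size
// is unaffected while it catches up, and then promotes it to a voting member.
func addMember(ctx context.Context, cli *clientv3.Client, peerURL string) error {
	resp, err := cli.MemberAddAsLearner(ctx, []string{peerURL})
	if err != nil {
		return err
	}
	// MemberPromote fails while the learner is still catching up; a real
	// controller would requeue and retry until it succeeds.
	_, err = cli.MemberPromote(ctx, resp.Member.ID)
	return err
}

// removeMember drops a voting member before its pod is deleted, so the
// remaining cluster adjusts its quorum first.
func removeMember(ctx context.Context, cli *clientv3.Client, memberID uint64) error {
	_, err := cli.MemberRemove(ctx, memberID)
	return err
}
```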
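For the autoscaling discussion, a sketch of a `VerticalPodAutoscaler` created in update mode `Off` (recommendation-only), so that an operator could apply the recommendation with `etcd`-specific safeguards instead of letting the VPA updater evict the `etcd` pod. The target name and this division of responsibility are assumptions, not a decided design.

```go
package sketch

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	vpav1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

// etcdVPARecommendationOnly sketches a VPA that only computes recommendations
// ("Off" mode); applying them in an etcd-aware, non-disruptive way would be
// left to the operator (an assumption, not the current etcd-druid behaviour).
func etcdVPARecommendationOnly(name, namespace string) *vpav1.VerticalPodAutoscaler {
	off := vpav1.UpdateModeOff
	return &vpav1.VerticalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: vpav1.VerticalPodAutoscalerSpec{
			TargetRef: &autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "StatefulSet",
				Name:       name, // assumed to match the etcd StatefulSet
			},
			UpdatePolicy: &vpav1.PodUpdatePolicy{UpdateMode: &off},
		},
	}
}
```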