
release-22.1: ccl/sqlproxyccl: invoke rebalancing logic during RUNNING pod events #81790

Merged
2 commits merged into release-22.1 from blathers/backport-release-22.1-81177 on May 25, 2022

Conversation


@blathers-crl blathers-crl bot commented May 24, 2022

Backport 2/2 commits from #81177 on behalf of @jaylim-crl.

/cc @cockroachdb/release


ccl/sqlproxyccl: invoke rebalancing logic during RUNNING pod events

This commit invokes the rebalancing logic during RUNNING pod events as part of
the pod watcher. Since the rebalancing logic depends on the tenant directory,
the pod watcher will now only emit events once the directory has been updated.
This is done for better responsiveness, i.e. the moment a new SQL pod gets
added, we would like to rebalance all connections to the tenant.

Note that the Watch endpoint on the tenant directory server currently emits
events in multiple cases: changes to load, and changes to pods (added/modified/
deleted). The plan is to update the tenant directory server to only emit events
for pod updates. The next commit will rate limit the number of times the
rebalancing logic for a given tenant can be called.

At the same time, we introduce a new test static directory server which does
not automatically spin up tenants for us (i.e. SQL pods for tenants can now
be managed manually, giving more control to tests).
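
For illustration, a rough Go sketch of the pod-watcher flow described above: apply the event to the directory cache first, then trigger a rebalance only for RUNNING pods. All names here (`watchPods`, `directoryCache`, `balancer`, etc.) are hypothetical stand-ins, not the actual sqlproxyccl API.

```go
// Package podwatch is an illustrative sketch; names do not match sqlproxyccl.
package podwatch

import "context"

// PodState is a simplified stand-in for the directory's pod state enum.
type PodState int

const (
	RUNNING PodState = iota
	DRAINING
	DELETED
)

// Pod is a simplified pod descriptor.
type Pod struct {
	TenantID uint64
	Addr     string
	State    PodState
}

// directoryCache stands in for the proxy's tenant directory cache.
type directoryCache interface {
	// upsertPod applies the pod event to the cache.
	upsertPod(pod *Pod)
}

// balancer stands in for the connection balancer.
type balancer interface {
	// rebalanceTenant redistributes the tenant's connections across its pods.
	rebalanceTenant(ctx context.Context, tenantID uint64)
}

// watchPods consumes pod events. The directory cache is updated before the
// balancer is invoked, since the rebalancing logic reads from the directory;
// only RUNNING pod events trigger a rebalance.
func watchPods(ctx context.Context, events <-chan *Pod, dir directoryCache, b balancer) {
	for {
		select {
		case <-ctx.Done():
			return
		case pod, ok := <-events:
			if !ok {
				return
			}
			dir.upsertPod(pod) // update the directory first
			if pod.State == RUNNING {
				b.rebalanceTenant(ctx, pod.TenantID)
			}
		}
	}
}
```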

ccl/sqlproxyccl: rate limit the number of rebalances per tenant

This commit rate limits the number of rebalances per tenant to once every
15 seconds (i.e. 1/2 of the rebalance loop interval). The main purpose of
this is to prevent a burst of pod events for the same tenant from causing
multiple rebalances, which may move a lot of connections around.

Release note: None


Release justification: sqlproxy only change.
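
For the rate limiting described in the second commit, a minimal per-tenant limiter could look like the sketch below. The `rebalanceLimiter` type and the injectable clock are hypothetical; only the 15-second interval comes from the commit message (stated as half of the rebalance loop interval, implying a 30-second loop).

```go
package podwatch

import (
	"sync"
	"time"
)

// minRebalanceInterval mirrors the 15s limit from the commit message.
const minRebalanceInterval = 15 * time.Second

// rebalanceLimiter remembers, per tenant, when a rebalance was last allowed.
// It is an illustrative sketch, not the sqlproxyccl implementation.
type rebalanceLimiter struct {
	mu       sync.Mutex
	lastDone map[uint64]time.Time
	now      func() time.Time // injectable clock, e.g. for tests
}

func newRebalanceLimiter() *rebalanceLimiter {
	return &rebalanceLimiter{
		lastDone: make(map[uint64]time.Time),
		now:      time.Now,
	}
}

// allowRebalance returns true at most once per minRebalanceInterval for a
// given tenant, so a burst of pod events results in a single rebalance.
func (l *rebalanceLimiter) allowRebalance(tenantID uint64) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := l.now()
	if last, ok := l.lastDone[tenantID]; ok && now.Sub(last) < minRebalanceInterval {
		return false
	}
	l.lastDone[tenantID] = now
	return true
}
```

In the watcher sketch above, `b.rebalanceTenant` would then only be invoked when `allowRebalance(pod.TenantID)` returns true.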

@blathers-crl blathers-crl bot requested review from a team as code owners May 24, 2022 22:08
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-22.1-81177 branch from 3ea5383 to 4d88612 on May 24, 2022 22:08

blathers-crl bot commented May 24, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria below are satisfied.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn’t know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl blathers-crl bot added blathers-backport This is a backport that Blathers created automatically. O-robot Originated from a bot. labels May 24, 2022
@cockroach-teamcity (Member)

This change is Reviewable

@jeffswenson jeffswenson (Collaborator) left a comment


LGTM

@jaylim-crl jaylim-crl removed request for a team May 25, 2022 00:08
@jaylim-crl (Collaborator)

TFTR!

@jaylim-crl jaylim-crl merged commit 2e6c158 into release-22.1 May 25, 2022
@jaylim-crl jaylim-crl deleted the blathers/backport-release-22.1-81177 branch May 25, 2022 12:13