
chore: Implement DEX rate limiter retry mechanism #2096

Closed · 2 tasks
squadgazzz opened this issue Nov 28, 2023 · 3 comments

@squadgazzz
Contributor

Background

The DEX rate limiter currently skips rate-limited orders, which could be avoided with a retry mechanism.

Details

The suggestion is to implement a configurable retry mechanism for rate-limited orders instead of just skipping them. A config with only a max_retries param should be sufficient.

Acceptance criteria

  • A rate-limited order is retried multiple times before being skipped.
  • The retry task can be cancelled when the auction deadline is reached.

Depends on #2071
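
A minimal sketch of what such a retry loop could look like, assuming a tokio runtime; RetryConfig, DexError, and try_with_retries are hypothetical names for illustration, not the actual solver code:

use std::time::{Duration, Instant};

/// Hypothetical config: the issue proposes max_retries as the only knob.
struct RetryConfig {
    max_retries: u32,
    back_off: Duration,
}

#[derive(Debug)]
enum DexError {
    RateLimited, // e.g. HTTP 429 from the external API
    Other(String),
}

/// Retries `attempt` on rate-limit errors until `max_retries` is
/// exhausted or the next attempt would overshoot the auction deadline.
async fn try_with_retries<F, Fut, T>(
    config: &RetryConfig,
    deadline: Instant,
    mut attempt: F,
) -> Result<T, DexError>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, DexError>>,
{
    for _ in 0..=config.max_retries {
        match attempt().await {
            // Only rate-limit errors are worth retrying, and only while
            // there is still time left before the auction deadline.
            Err(DexError::RateLimited) if Instant::now() + config.back_off < deadline => {
                tokio::time::sleep(config.back_off).await;
            }
            other => return other,
        }
    }
    Err(DexError::RateLimited)
}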

@fleupold
Contributor

This only affects the first order in the auction (as the others will already wait for the back_off interval before attempting to execute), right?
Also, orders are "automatically" retried in the next auction. I'd argue that, for the level of sophistication we are aiming to provide with the baseline DEX solvers, that behavior is sufficient.

@MartinquaXD
Contributor

MartinquaXD commented Dec 4, 2023

> This only affects the first order in the auction (as the others will already wait for the back_off interval before attempting to execute), right?

Theoretically this can happen to any order in the auction. The issue is that the backoff interval configured in the rate-limiting strategy might not be in sync with whatever rate-limiting strategy the external API actually enforces.
Let's say we are optimistic about backing off: we start out with a few ms and double it on every successive 429. Since, AFAIK, our nginx enforces rate limits every second, we would run into multiple 429s until our backoff time is big enough to actually exceed nginx's 1 second limit.
If we then use so many requests that we trigger another rate limit, we'll again skip a handful of orders in the same auction.
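
To put hypothetical numbers on that scenario: with a doubling backoff starting at 10 ms, several requests would hit a 429 before the backoff clears a 1 s window:

// Hypothetical illustration, not the actual rate-limiting code:
// doubling backoff starting at 10 ms. Every attempt whose backoff is
// still below the 1 s nginx window is assumed to hit another 429.
fn main() {
    let mut back_off_ms: u64 = 10;
    let mut failed_attempts = 0;
    while back_off_ms < 1_000 {
        failed_attempts += 1;
        back_off_ms *= 2;
    }
    // 10 → 20 → 40 → 80 → 160 → 320 → 640 → 1280 ms:
    // seven requests fail before the backoff exceeds the window.
    println!("{failed_attempts} requests hit a 429 before succeeding");
}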

One easier way out would be to configure our min_back_off = 1s since we know all our requests go through the nginx that enforces that.

However, overall I think having the retry would be nice, but I would not implement it under all circumstances. If the implementation ends up verbose and hard to understand, I would prefer not to introduce the complexity, since, as you already mentioned, orders get retried in the next auction.

@squadgazzz
Contributor Author

> One easier way out would be to configure our min_back_off = 1s since we know all our requests go through the nginx that enforces that.

That's the current default value:

fn default_min_back_off() -> u64 {
    1
}
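
(Presumably a default function like this feeds a config field via serde's default attribute; a rough sketch, where the struct and field names are assumptions:)

use serde::Deserialize;

#[derive(Deserialize)]
struct RateLimitingConfig {
    // Interpreted as seconds; falls back to 1 when the field is omitted.
    #[serde(default = "default_min_back_off")]
    min_back_off: u64,
}

fn default_min_back_off() -> u64 {
    1
}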

Closing the issue since orders will be retried in the next auctions; adding complexity here doesn't seem worthwhile.
