
Implement exponential backoff for Remote #463

Closed
sudsy opened this issue Apr 29, 2019 · 4 comments

Comments

sudsy (Contributor) commented Apr 29, 2019

I have implemented a design for this in my own branch but don't want to commit it until I am sure it is right.

  1. When a remote connection fails, it should back off the connection retries, eventually failing if the connection is still unavailable after a configured number of retries.

  2. If a new message is received for that particular remote after the retry process has completed, the whole connection routine should be restarted, with a new exponential backoff retry on failure.

What I am not clear on is what to do with the messages that accumulate between steps 1 and 2. My thinking is that once step 1 completes, all the queued messages for that remote should be dumped to the dead letter process. Does anyone have other suggestions?
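
A minimal sketch of the retry loop described in steps 1 and 2, assuming a doubling delay and a configured retry cap. The names (`EndpointConnector`, `ConnectWithBackoffAsync`, `SendToDeadLetter`) are illustrative only and are not the actual Proto.Remote API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative sketch only; not the Proto.Remote implementation.
public class EndpointConnector
{
    private readonly Queue<object> _pending = new Queue<object>();

    public void Enqueue(object message) => _pending.Enqueue(message);

    // Try to connect with exponential backoff; give up after maxRetries.
    public async Task<bool> ConnectWithBackoffAsync(
        Func<Task<bool>> connect, int maxRetries, TimeSpan baseDelay)
    {
        for (var attempt = 0; attempt < maxRetries; attempt++)
        {
            if (await connect()) return true;

            // Delay doubles on every failed attempt: base, 2x, 4x, 8x, ...
            var delay = TimeSpan.FromMilliseconds(
                baseDelay.TotalMilliseconds * Math.Pow(2, attempt));
            await Task.Delay(delay);
        }

        // Step 1: retries exhausted, dump the queued messages for this
        // remote to the dead letter process.
        while (_pending.Count > 0)
        {
            SendToDeadLetter(_pending.Dequeue());
        }

        return false;
    }

    private void SendToDeadLetter(object message) =>
        Console.WriteLine("dead letter: " + message);
}
```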

potterdai (Contributor) commented Apr 30, 2019

@sudsy I agree with you that the messages should just go to the dead letter process.

rogeralsing (Contributor) commented May 2, 2019

The question is what to do with the remote endpoint over time.

E.g. let's say we reach the backoff limit and messages go to dead letter. Some time passes and you send a message again; should it retry connecting to the remote endpoint?

That is, should we have a circuit breaker here, or should we consider the endpoint dead forever (likely a bad choice)?

sudsy (Contributor, Author) commented May 2, 2019

> The question is what to do with the remote endpoint over time.
>
> E.g. let's say we reach the backoff limit and messages go to dead letter. Some time passes and you send a message again; should it retry connecting to the remote endpoint?

I think that is a good pattern. If the server is still down, messages will accumulate but we won't be wasting cycles connecting to it due to exponential backoff. More importantly, if the connection is failing due to the remote or network being overloaded, the exponential backoff will contribute as little as possible to that overload.

> That is, should we have a circuit breaker here, or should we consider the endpoint dead forever (likely a bad choice)?

Dead forever does seem like a bad choice. The connection problems could have been due to some sort of network or remote overload. We would want the connection to recover when the conditions improve.
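
A rough sketch of that circuit-breaker-style behaviour, assuming the endpoint is marked as failed once the backoff limit is reached and the next message for it restarts the connection routine. `EndpointState`, `EndpointSupervisor` and `StartBackoffCycle` are hypothetical names, not Proto.Remote types:

```csharp
// Illustrative sketch only; not the Proto.Remote implementation.
public enum EndpointState { Connected, Connecting, Failed }

public class EndpointSupervisor
{
    private EndpointState _state = EndpointState.Connected;

    // Called when the backoff limit from the previous sketch is reached.
    public void OnBackoffExhausted() => _state = EndpointState.Failed;

    public void OnRemoteDeliver(object message)
    {
        if (_state == EndpointState.Failed)
        {
            // Step 2 from the first comment: a new message after the retry
            // process has completed restarts the whole connection routine
            // with a fresh exponential backoff cycle.
            _state = EndpointState.Connecting;
            StartBackoffCycle();
        }

        // ...queue or deliver the message depending on the current state...
    }

    private void StartBackoffCycle()
    {
        // e.g. kick off ConnectWithBackoffAsync from the earlier sketch
    }
}
```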

sudsy added a commit to sudsy/protoactor-dotnet that referenced this issue Dec 6, 2019
raskolnikoov referenced this issue Dec 27, 2019
add exponential backoff described in AsynkronIT#463
alexeyzimarev (Member) commented
I found out that the new version of the endpoint writer has a lot of additional logging with string interpolation, and I am quite sceptical about it. The string interpolation runs before the check for whether debug-level logging is enabled; it is a slow operation and requires allocation. I don't think we should be doing that. Logging is cool, but we need to ensure that the debug log adds no burden when it is not enabled.
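
For illustration only, assuming the endpoint writer logs through Microsoft.Extensions.Logging's `ILogger`: the interpolated form formats the string on every call, whereas the message-template form and an explicit `IsEnabled` guard defer the work until debug logging is actually enabled. `EndpointWriterLoggingExample` and its members are made-up names:

```csharp
using Microsoft.Extensions.Logging;

public class EndpointWriterLoggingExample
{
    private readonly ILogger _logger;

    public EndpointWriterLoggingExample(ILogger logger) => _logger = logger;

    public void LogDelivery(string address, int count)
    {
        // Interpolation: the string is built even when Debug is disabled.
        _logger.LogDebug($"Delivered {count} messages to {address}");

        // Message template: formatting is deferred until the log is written.
        _logger.LogDebug("Delivered {Count} messages to {Address}", count, address);

        // Explicit guard: skips the logging call entirely when Debug is disabled.
        if (_logger.IsEnabled(LogLevel.Debug))
        {
            _logger.LogDebug("Delivered {Count} messages to {Address}", count, address);
        }
    }
}
```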
