
Synchronous events are delivered asynchronously #31

Closed · paveliak opened this issue Apr 30, 2014 · 3 comments

Comments

@paveliak
Contributor

ConnectionMultiplexer may deliver notifications about connection failure (and some other infrastructure errors). In some scenarios it may be important to process these notifications synchronously, or the client may blindly start working with a different Redis instance (e.g. if the crashed server was automatically restarted).
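For context, this is how a client typically observes these notifications; a minimal sketch, assuming a local server, with the handler body purely illustrative:

    using System;
    using StackExchange.Redis;

    class Example
    {
        static void Main()
        {
            ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect("localhost:6379");
            muxer.ConnectionFailed += (sender, e) =>
            {
                // If this handler runs asynchronously, commands issued in the
                // meantime may already have gone to a freshly restarted server.
                Console.WriteLine("Connection failed: " + e.FailureType);
            };
        }
    }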

Internally, StackExchange.Redis tries to deliver "connection failed" synchronously by invoking ConnectionMultiplexer.TryCompleteHandler(..., false /* isAsync */). However, it looks like isAsync is not handled correctly inside TryCompleteHandler(). In fact, isAsync == true does synchronous processing, while isAsync == false just returns control to the caller, hence triggering asynchronous event processing.

Should it be "if (NOT isAsync)" here?

        if (isAsync)
        {
            foreach (EventHandler<T> sub in handler.GetInvocationList())
            {
                try
                { sub.Invoke(sender, args); }
                catch
                { }
            }
            return true;
        }
        return false;
@mgravell
Collaborator

Yes, that is correct. The intent is to avoid blocking the IO threads. Basically, the completion API invites code to complete synchronously (passing isAsync=false the first time). If the code can do that safely without invoking external code (which could lead to unexpected delays), then it returns true. If it cannot guarantee that, it returns false. This causes the completion to be placed on a completion queue and processed a second time, this time passing isAsync=true. On this second attempt we are on a completion worker and are not blocking any IO, so it is entirely reasonable to call external code such as events.
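To illustrate the shape of that two-phase pattern, here is a minimal sketch; the class and member names are invented for illustration and are not the library's actual internals (only the isAsync branch mirrors the code quoted above):

    using System;

    class EventCompletion<T> where T : EventArgs
    {
        private readonly object sender;
        private readonly T args;
        private readonly EventHandler<T> handler;

        public EventCompletion(object sender, T args, EventHandler<T> handler)
        {
            this.sender = sender;
            this.args = args;
            this.handler = handler;
        }

        // First attempt arrives on the IO thread with isAsync == false;
        // invoking user event handlers there could block IO, so we decline
        // by returning false, and the caller queues this completion for a
        // second attempt on a completion worker with isAsync == true.
        public bool TryComplete(bool isAsync)
        {
            if (isAsync)
            {
                foreach (EventHandler<T> sub in handler.GetInvocationList())
                {
                    try { sub.Invoke(sender, args); }
                    catch { } // swallow, as in the quoted code
                }
                return true;  // completed on the worker
            }
            return false;     // defer: not safe on the IO thread
        }
    }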

So yes, it is correct as written.

Can you provide more information about the scenario that you feel is problematic?


@paveliak
Contributor Author

Suppose I have two clients that need to coordinate access to some data in Redis. The clients use Redis-based locking to synchronize access (because for some reason I cannot go with Redis transactions). Let's assume that locking is done by checking whether a particular key exists, and that the clients need to execute two operations under the lock (as a transaction).
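(For concreteness, a hedged sketch of such a key-based lock; the key/token names and the SET-if-not-exists approach are illustrative:)

    using System;
    using StackExchange.Redis;

    class RedisLock
    {
        // Acquire the lock by creating the key only if it does not already
        // exist; the token identifies the owner, and the expiry guards
        // against a crashed holder.
        public static bool TryEnterLock(IDatabase db, string lockKey,
                                        string token, TimeSpan expiry)
        {
            return db.StringSet(lockKey, token, expiry, When.NotExists);
        }
    }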

Here is the sequence of events:

  1. First client enters the lock; second client waits on the lock, periodically polling it (checking whether the key exists)
  2. First client executes operation #1, and at this point the Redis server is restarted; the ConnectionFailed notification is queued on the worker thread but not yet processed
  3. Reconnection happens before the notification is delivered, so the first client performs operation #2 against the restarted Redis instance (which is already bad, since the results of operation #1 are lost)
  4. Second client enters the lock on the restarted instance and performs both operations there

In the end, the second operation from the first client collides with the operations performed by the second client.

Even if I run all operations within a Redis transaction (just to queue them all and fire at once), I can still have issues if the restart happens right between entering the lock and starting the transaction.

If the ConnectionFailed notification were delivered synchronously, I could abort the transaction in the first client before invoking operation #2 on the restarted Redis instance.
An alternative solution could be to disable automatic reconnect. Then invoking operation #2 would throw an exception, which would help the first client abort the transaction and then manually reconnect to the restarted Redis instance.

@NickCraver
Collaborator

@paveliak I think I understand your scenario, so let's break it down. This all comes down to dependencies: at a basic level, if you're restarting redis in the middle of transactions, you're losing your atomic locking mechanism, and there's really no sane way to handle that no matter how many safeguards we put in place.

If you were using transactions, you could check inside the transaction itself that you still own the lock. That'd be the proper point here: you want the assurance of atomic behavior as close to the atomic operations as possible.
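A minimal sketch of that check, assuming the lock scheme sketched earlier (key names and values are illustrative):

    using StackExchange.Redis;

    class LockedUpdate
    {
        // Returns true only if both operations committed while the lock was
        // still held; a restart that wiped the lock key makes the condition
        // fail and Execute() return false.
        public static bool UpdateUnderLock(IDatabase db, string lockKey, string token)
        {
            ITransaction tran = db.CreateTransaction();
            // Guard: the queued commands only run if the lock key still
            // holds our token at execution time.
            tran.AddCondition(Condition.StringEqual(lockKey, token));
            // Queue both operations; they execute atomically only if the
            // condition held.
            tran.StringSetAsync("op1:key", "value1");
            tran.StringSetAsync("op2:key", "value2");
            return tran.Execute();
        }
    }

If Execute() returns false, the first client knows the lock was lost (e.g. because the server restarted) and can retry instead of colliding with the second client.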
