
IConnection.Request performs slower than manual request/reply #299

Closed
CrossBound opened this issue Oct 9, 2019 · 11 comments

Comments

@CrossBound

So based on the comment here nats-io/nats.net#297 (comment), I tried using the request API. Here are some interesting results. If I use RequestAsync like this:

        public Task GuaranteedDeliveryAsync(object Payload, CancellationToken CancellationToken = default)
        {
            if (_isDisposed)
            {
                throw new ObjectDisposedException(this.GetType().Name);
            }

            _counter++;
            return _connection.RequestAsync(_queue
                , JsonSerializer.SerializeToUtf8Bytes(new Payload<object>(_counter, Payload)), 150);
        }

I get about 3,300+ msgs/sec with a P99 RTT of 1.12 ms per request. My VS analysis tools, however, show some interesting CPU and GC data.

[screenshot: Visual Studio diagnostics showing CPU usage and GC collections for the RequestAsync version]

However, if I change my code to look like this

        public Task GuaranteedDeliveryAsync(object Payload, CancellationToken CancellationToken = default)
        {
            if (_isDisposed)
            {
                throw new ObjectDisposedException(this.GetType().Name);
            }

            _counter++;
            byte[] data = JsonSerializer.SerializeToUtf8Bytes(new Payload<object>(_counter, Payload));
            bool acknowledged = false;
            for (int i = 0; i < 3; i++)
            {
                _connection.Publish(_queue, _inbox, data);
                _connection.Flush();

                try
                {
                    while (true)
                    {
                        var response = _responseSubscription.NextMessage(100);
                        var ackID = BitConverter.ToUInt64(response.Data, 0);
                        if (ackID == _counter)
                        {
                            acknowledged = true;
                            break;
                        }
                    }
                }
                catch (NATSTimeoutException)
                {
                    // try again up to 3 times
                }

                if(acknowledged)
                {
                    break;
                }
            }

            if(acknowledged == false)
            {
                throw new Exception("Failed to successfully publish message");
            }

            return Task.CompletedTask;
        }

my performance improves significantly. Notice the number of messages per second as well as the GC collections and CPU data.

[screenshot: Visual Studio diagnostics for the manual request/reply version, showing msgs/sec, CPU, and GC data]

@CrossBound
Author

I guess I'm comparing apples and oranges in that example... here's an example that's a little closer to home.

Instead of RequestAsync, I'll keep it synchronous and use just Request, like this:

        public Task GuaranteedDeliveryAsync(object Payload, CancellationToken CancellationToken = default)
        {
            if (_isDisposed)
            {
                throw new ObjectDisposedException(this.GetType().Name);
            }

            _counter++;
            byte[] data = JsonSerializer.SerializeToUtf8Bytes(new Payload<object>(_counter, Payload));
            _connection.Request(_queue, data);

            return Task.CompletedTask;
        }

Now my results are a little closer, but notice that the msgs/sec rate is still lower and my CPU usage is a little less steady. It seems to peak at around 6,400 msgs/sec.
[screenshot: Visual Studio diagnostics for the synchronous Request version]

@CrossBound
Author

Very interesting. As soon as I add in a timeout like so

_connection.Request(_queue, data, 150);

the GC goes through the roof.
[screenshot: Visual Studio diagnostics showing greatly increased GC activity with the timeout overload]

@CrossBound CrossBound changed the title IConnection.RequestAsync performs slower than manual request/reply IConnection.Request performs slower than manual request/reply Oct 9, 2019
@ColinSullivan1
Member

ColinSullivan1 commented Oct 10, 2019

Thanks for raising this, and really appreciate the info! We do some mapping of subject names to requests - a map that could grow quickly but I wouldn't expect this much from the GC. This tells us we need to take a pass through and see if we can re-use some objects to prevent the GC churn.
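For context, the kind of per-request bookkeeping described above might look roughly like the sketch below. This is purely illustrative (the dictionary, the TaskCompletionSource waiter, and all names are assumptions, not the actual NATS.Client internals), but it shows why each request can allocate an inbox string, a waiter, and map entries:

    // Illustrative only -- not the actual NATS.Client internals.
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class PendingRequestMap
    {
        // Maps a reply-inbox subject to the waiter completing that request.
        private readonly ConcurrentDictionary<string, TaskCompletionSource<byte[]>> _pending =
            new ConcurrentDictionary<string, TaskCompletionSource<byte[]>>();

        public Task<byte[]> Register(string inbox)
        {
            // One TaskCompletionSource and one map entry per outstanding request.
            var tcs = new TaskCompletionSource<byte[]>(TaskCreationOptions.RunContinuationsAsynchronously);
            _pending[inbox] = tcs;
            return tcs.Task;
        }

        public void Complete(string inbox, byte[] replyData)
        {
            // Called when a reply arrives on the inbox subject.
            if (_pending.TryRemove(inbox, out var tcs))
            {
                tcs.TrySetResult(replyData);
            }
        }
    }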

@watfordgnf
Collaborator

Adding a timeout involves the creation of a CancellationTokenSource per request.
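For reference, a per-request timeout built on a CancellationTokenSource typically looks something like this sketch (the wrapper method is illustrative, not the library's exact code, and it assumes the RequestAsync overload that accepts a CancellationToken):

    using System.Threading;
    using System.Threading.Tasks;
    using NATS.Client;

    public static async Task<Msg> RequestWithTimeoutAsync(IConnection conn, string subject, byte[] data, int timeoutMs)
    {
        // One CancellationTokenSource allocated (and an internal timer armed) per request.
        using (var cts = new CancellationTokenSource())
        {
            cts.CancelAfter(timeoutMs);
            return await conn.RequestAsync(subject, data, cts.Token).ConfigureAwait(false);
        }
    }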

@CrossBound
Author

Unless I'm missing something, I believe a CancellationTokenSource per request can be avoided. As long as the token source has not been cancelled, it can be reused as many times as necessary; it's only once the token source has been 'tripped' that a new one has to be created.

@watfordgnf
Collaborator

The CTS performs the timeout (via CancelAfter), so it cannot be reused.
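In other words, once the CancelAfter timer has fired, the source is permanently cancelled, so the next timed request needs a fresh one. A rough illustration:

    var cts = new CancellationTokenSource();
    cts.CancelAfter(150);      // request 1: timer armed
    // ...if the timer fires, cts.IsCancellationRequested stays true from then on.
    cts.CancelAfter(150);      // request 2: no effect on an already-canceled source
    // So a new CancellationTokenSource is needed per timed request.
    // (CancellationTokenSource.TryReset() only arrived in .NET 6, well after this thread.)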

@danielwertheim
Contributor

@CrossBound I've just provided a small fix, but even with it in place you will still see GC work in a Windows Forms app unless you use ConfigureAwait(false); at least that's what I've noticed. The console sample isn't affected by that. Also, I'm not sure how the serialization in your sample is done and how it affects pressure on the GC. If the intention is to test NATS performance, perhaps the serialization should be removed when testing?
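For anyone hitting this in a UI app, the ConfigureAwait(false) usage mentioned above would look like this (a minimal sketch, assuming the timed RequestAsync overload):

    // Avoids capturing the WinForms SynchronizationContext and posting the
    // continuation back to the UI thread for every request.
    var reply = await _connection.RequestAsync(_queue, data, 150).ConfigureAwait(false);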

@CrossBound
Author

I don't believe the serialization is impacting this much because otherwise I'd have the same problem when doing the manual request / reply.

I understand your comment (and @watfordgnf's above); however, my question is: is it necessary to use CancellationTokenSource.CancelAfter as the timeout mechanism, or is there a better (more efficient) way to accomplish the same thing? Notice that my code makes no such call and runs much more efficiently.
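One alternative sometimes used to avoid a CancellationTokenSource per request is to race the request against a delay. This is only a sketch (RequestOrTimeoutAsync is a made-up helper, and Task.Delay still arms a timer per call, so it mostly trades one allocation for another):

    using System;
    using System.Threading.Tasks;
    using NATS.Client;

    public static async Task<Msg> RequestOrTimeoutAsync(IConnection conn, string subject, byte[] data, int timeoutMs)
    {
        Task<Msg> request = conn.RequestAsync(subject, data);
        Task winner = await Task.WhenAny(request, Task.Delay(timeoutMs)).ConfigureAwait(false);
        if (winner != request)
        {
            throw new TimeoutException("NATS request timed out");
        }
        return await request.ConfigureAwait(false);
    }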

@danielwertheim
Contributor

I will continue to look into improving the request op. Just wanted to fix the memory issue first. Will look into the use of tokens, sources etc.

@ColinSullivan1
Member

This should be resolved by #339 and is now in pre-release: https://www.nuget.org/packages/NATS.Client/0.10.1-pre1

If you test it, we'd certainly like to hear your results.
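To try the pre-release build (assuming the package id and version from the link above):

    dotnet add package NATS.Client --version 0.10.1-pre1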

@scottf
Collaborator

scottf commented Jun 15, 2022

Fixed, never closed, closing now.

@scottf scottf closed this as completed Jun 15, 2022