[CHANGED] sub benchmark tuned #777
Conversation
levb commented Aug 5, 2024
- Added CI steps to compare against the base branch
- Dialed down the number of messages and tuned the matrix
- Added locking in the bench to clear sanitize=thread (see the sketch after this list)
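For context on the third item, here is a minimal sketch of the kind of locking that clears a ThreadSanitizer (-fsanitize=thread) data-race report. It assumes plain pthreads and an illustrative shared counter; the names are hypothetical, not the benchmark's actual variables:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical shared benchmark state. */
    static pthread_mutex_t benchMu  = PTHREAD_MUTEX_INITIALIZER;
    static int             received = 0;

    /* Simulates a subscription callback invoked from several delivery
     * threads. Without the lock, TSan reports the increment of
     * 'received' as a data race. */
    static void *deliver(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
        {
            pthread_mutex_lock(&benchMu);
            received++;
            pthread_mutex_unlock(&benchMu);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, deliver, NULL);
        pthread_create(&t2, NULL, deliver, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("received: %d\n", received);
        return 0;
    }

An unsynchronized increment from two threads is exactly the pattern TSan flags; serializing it with a mutex is the simplest fix, at some cost to the measured throughput.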
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##              main     #777      +/-   ##
===========================================
+ Coverage    68.71%   68.83%    +0.12%
===========================================
  Files           39       39
  Lines        15207    15251       +44
  Branches      3143     3157       +14
===========================================
+ Hits         10449    10498       +49
+ Misses        1700     1682       -18
- Partials      3058     3071       +13

☔ View full report in Codecov by Sentry.
kozlovic commented

The number of messages that we print out seems a bit weird. Also posted some comments about the variability between two runs of the same branch.
@@ -222,6 +230,7 @@ static void _benchMatrix(threadConfig *threadsVector, int lent, int *subsVector,
}
kozlovic commented

Looks like you write the number of messages as numPubMessages * numSubs, where numPubMessages is NMessages / numSubs. So should you not simply write NMessages? Because of the rounding in the first division, the number of messages may not be what you expect after the multiplication.
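A tiny standalone example of that rounding effect, with hypothetical numbers:

    #include <stdio.h>

    int main(void)
    {
        int NMessages      = 1000;
        int numSubs        = 3;
        int numPubMessages = NMessages / numSubs; /* integer division: 333 */

        /* 333 * 3 = 999, not the 1000 we started from. */
        printf("printed: %d, NMessages: %d\n",
               numPubMessages * numSubs, NMessages);
        return 0;
    }

With NMessages = 1000 and numSubs = 3 the product is 999, so the printed total differs from NMessages by the division remainder.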
levb commented
This is so that each sub gets the same number of messages, since we average/best/worst across subs. The output includes the actual total number of messages received.
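As a sketch of that aggregation (the function and array names here are hypothetical, not the benchmark's actual code): with equal per-sub message counts, the per-sub rates are directly comparable, so best/average/worst reduce to a min/avg/max over them.

    #include <stdio.h>

    /* Illustrative aggregation over per-sub rates (e.g. messages/sec). */
    static void aggregate(const int *perSubRate, int numSubs,
                          int *best, int *average, int *worst)
    {
        int sum = 0;
        *best = *worst = perSubRate[0];
        for (int i = 0; i < numSubs; i++)
        {
            if (perSubRate[i] > *best)
                *best = perSubRate[i];
            if (perSubRate[i] < *worst)
                *worst = perSubRate[i];
            sum += perSubRate[i];
        }
        *average = sum / numSubs;
    }

    int main(void)
    {
        int rates[3] = {52000, 49800, 47500};
        int best, average, worst;

        aggregate(rates, 3, &best, &average, &worst);
        printf("best=%d average=%d worst=%d\n", best, average, worst);
        return 0;
    }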
kozlovic commented
But again, we have:

    int numPubMessages = NMessages / numSubs;
    ...
    printf("\t{\"subs\":%d, \"threads\":%d, \"messages\":%d, \"best\":%d, \"average\":%d, \"worst\":%d}%s\n",
           numSubs, env->threads.useGlobalDelivery ? env->threads.max : 0,
           numPubMessages * numSubs, best, average, worst, comma);

This is for a given _benchMatrix run. So we divide the number of messages (NMessages) by the number of subs, but then print out that number multiplied by numSubs. So to me, it looks like NMessages should be used?
levb commented
Oh right, sorry, I missed the question. "messages" is included just to be part of the key; it is not used in any calculations. For the purpose of the key (combined with NSubs) it is as good as NMessages, but it was useful for me to see it, to visually confirm that it was calculated correctly.
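For illustration, given that printf format, an output line would look like this (all values hypothetical):

    {"subs":3, "threads":4, "messages":999, "best":52000, "average":49800, "worst":47500}

Here "messages" (numPubMessages * numSubs) together with "subs" and "threads" just identifies the matrix cell being reported.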
kozlovic commented
Ok.
levb commented

@kozlovic I answered the questions; not sure there are any changes to make?

@kozlovic I apologize if I caused confusion by mentioning it in the "child" PR; it's this one I need your LGTM on, otherwise I'd need to bypass the branch protection, which I'd rather not.

kozlovic commented

@levb But on that one, I still had a comment that I don't think you addressed: https://github.com/nats-io/nats.c/pull/777/files#r1706002494?
kozlovic commented
LGTM