Add batching to queued sender implementations #361
Conversation
Testing Done: unit tests
Suggestion: maybe split to multiple PRs for updates to each exporter. This PR is a bit too large.
…-service into add-generic-queueing
@songy23 Makes sense. Split out the exporter changes (this will just add to the current queued senders), and will make a followup PR containing the changes to allow the other exporters to be batched/queued. Ultimately, with the discussion on Friday, I can see if we just want to forgo any of these smaller config changes and refactor the entire configuration scheme in one go.
Thanks @sjkaris, LGTM. Just a few Qs.
Am still reviewing this PR but providing feedback as we go on.
// atomic.LoadPointer only takes unsafe.Pointer arguments. We do not use unsafe
// to skirt around the golang type system.
b = (*batch)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&nb.currBatch))))
cut, closed = b.add(spans)
So am assuming `cut` here means `performedBatching`? If so, please use a clearer term like `performedBatching`.
used `cutBatch` instead (mirroring the function name), lmk what you think
closed  uint32
growMu  sync.Mutex
pending int32
Using atomics might fail here because of how these fields are arranged, due to potential alignment issues:

Field | 64 bit size (bytes) | 32 bit size (bytes)
---|---|---
items | 16 | 8
currCap | 4 | 4
nextEmptyItem | 4 | 4
sendItemsSize | 4 | 4
closed | 4 | 4
growMu | 8 | 8
pending | 4 | 4

Please examine and adjust accordingly; you might want to ensure that the mutex comes first, then the integers, and then items.
Or rather: integers first, then items, and lastly the mutex; just make sure that anything that gets loaded or stored atomically sits on a 4- or 8-byte alignment boundary.
Discussed offline -- using 32-bit ints is safe on both 32- and 64-bit systems since golang keeps fields word-aligned. Moved `lastSent` to the top for alignment, since it is 64 bits and therefore needs to be 64-bit aligned on 32-bit systems.
LGTM generally and thank you @sjkaris!
Please take a look at my suggestions to ensure atomics don't crash due to alignment.
travis sometimes hangs on testing exporters, created #375
TickTime *time.Duration `mapstructure:"tick-time,omitempty"`
// RemoveAfterTicks is the number of ticks that must pass without a span arriving
// from a node after which the batcher for that node will be deleted. This is an
// advanved configuration option.
advanced
…#361) Adds batching (as a feature of queuing) to the current queued sender implementations. Testing Done: unit tests