
feat: add BatchSpanProcessor #238

Merged

merged 9 commits into open-telemetry:master from batch_span_processor on Sep 25, 2019

Conversation

mayurkale22 (Member)

Which problem is this PR solving?

Short description of the changes

@OlivierAlbertini (Member) left a comment

LGTM

}

if (this._bufferTimeoutInProgress) {
  this._resetBufferTimeout();
Member

Shouldn't it be a race (max buffer size or timeout)? If we always reset the buffer timeout and one span is added every 19999 ms, we won't report for 100 * 19999 ms ≈ 33 mins.

Member

Yeah, this could be an annoying issue. I think the buffer timeout should behave more like a flush ticker, so we end up with behavior where the tracer flushes every N seconds AND flushes when the buffer is at capacity.

Member

+1 to this. Is there a scenario where we would ever want to reset the timer? We would send every N seconds, regardless of when the last span was added to the buffer. If the buffer becomes size M, we flush and clearTimeout.

@mayurkale22 (Member, Author) commented on Sep 11, 2019

Do you suggest removing the reset functionality? If we do that, we create and set a new timer on every span end. Let me know if you have a better approach to handle this.

Member

Could a timer only be started on the first span to enter the current batch?
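A minimal sketch of that idea (field and method names are made up here, not the code in this PR): arm a single timeout when the first span enters an empty buffer, and clear it whenever the buffer flushes, so nothing has to be rescheduled per span.

// Hypothetical sketch only; names are invented for illustration, not from this PR.
interface SpanLike { name: string }

class FirstSpanTimerBuffer {
  private _spans: SpanLike[] = [];
  private _timer?: NodeJS.Timeout;

  constructor(
    private readonly _exportFn: (spans: SpanLike[]) => void,
    private readonly _bufferSize = 100,
    private readonly _bufferTimeout = 20_000
  ) {}

  addSpan(span: SpanLike): void {
    this._spans.push(span);
    if (this._spans.length >= this._bufferSize) {
      this._flush();                                 // full buffer: flush immediately
    } else if (this._spans.length === 1) {
      // First span of this batch: arm the timer once; it is never reset per span.
      this._timer = setTimeout(() => this._flush(), this._bufferTimeout);
    }
  }

  private _flush(): void {
    if (this._timer !== undefined) {
      clearTimeout(this._timer);
      this._timer = undefined;
    }
    if (this._spans.length === 0) return;
    this._exportFn(this._spans);
    this._spans = [];
  }
}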

Contributor

What about this:

  • Every time a span is written we set a variable for "lastSpanWrite" time. Also, if writing a span would fill the buffer, we flush it
  • We have a regular timer with setInterval that runs and checks if lastSpanWrite was above some time threshold, and if so, it does a write of all spans in the buffer

It's somewhat imperfect as the actual wait time for a span to be written could vary (but is still bounded by timer frequency + allowed span age), but it avoids the issue of re-creating timers for every span write
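For comparison, a rough sketch of that setInterval approach, again with made-up names rather than anything from this PR: writes stamp a lastSpanWrite time and flush when the buffer fills, while one long-lived ticker flushes anything that has been sitting past the allowed age.

// Hypothetical sketch of the setInterval approach, not the code in this PR.
interface SpanLike { name: string }                       // simplified stand-in type

class TickerBuffer {
  private _spans: SpanLike[] = [];
  private _lastSpanWrite = Date.now();
  private readonly _ticker: NodeJS.Timeout;

  constructor(
    private readonly _exportFn: (spans: SpanLike[]) => void,
    private readonly _bufferSize = 100,
    private readonly _maxSpanAge = 20_000,                 // allowed span age
    tickerInterval = 5_000                                 // timer frequency
  ) {
    // One long-lived timer: flush if the most recent write is older than _maxSpanAge.
    this._ticker = setInterval(() => {
      if (this._spans.length > 0 && Date.now() - this._lastSpanWrite >= this._maxSpanAge) {
        this._flush();
      }
    }, tickerInterval);
  }

  write(span: SpanLike): void {
    this._lastSpanWrite = Date.now();
    this._spans.push(span);
    if (this._spans.length >= this._bufferSize) {
      this._flush();                                       // filling the buffer flushes immediately
    }
  }

  dispose(): void {
    clearInterval(this._ticker);                           // a real processor would also flush here
  }

  private _flush(): void {
    this._exportFn(this._spans);
    this._spans = [];
  }
}

As the comment notes, a span may wait up to roughly tickerInterval + _maxSpanAge before export, but only one timer ever exists.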

Member (Author)

I have made the changes in 8b362fb, PTAL.

}
}, this._bufferTimeout);
// Don't let this timer be the only thing keeping the process alive
unrefTimer(timer);
Contributor

What happens if a server is shut down gracefully via a drain signal (to finish requests it has but not take any new ones)? Is there a way to guarantee that it flushes out all its spans? (Maybe just a TODO item for now)
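One possible way a host application could handle that, sketched outside this PR (the forceFlush method name and the signal wiring are assumptions, not something this change adds):

// Hypothetical wiring, not part of this PR: flush buffered spans on a graceful shutdown.
// Assumes the processor exposes some forceFlush()-style method that exports whatever
// is still buffered; the signal names are just the common Node.js drain/terminate signals.
function flushOnShutdown(processor: { forceFlush(): void }): void {
  const flushAndExit = (signal: string): void => {
    console.log(`received ${signal}, flushing remaining spans before exit`);
    processor.forceFlush();
    process.exit(0);
  };
  process.once('SIGTERM', () => flushAndExit('SIGTERM'));
  process.once('SIGINT', () => flushAndExit('SIGINT'));
}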

@codecov-io

Codecov Report

Merging #238 into master will increase coverage by 0.02%.
The diff coverage is 98.03%.

@@            Coverage Diff            @@
##           master    #238      +/-   ##
=========================================
+ Coverage   98.87%   98.9%   +0.02%     
=========================================
  Files          67      70       +3     
  Lines        2569    2728     +159     
  Branches      172     181       +9     
=========================================
+ Hits         2540    2698     +158     
- Misses         29      30       +1
Impacted Files Coverage Δ
...etry-basic-tracer/src/export/BatchSpanProcessor.ts 100% <100%> (ø)
...ages/opentelemetry-core/src/platform/node/index.ts 100% <100%> (ø) ⬆️
...asic-tracer/test/export/BatchSpanProcessor.test.ts 100% <100%> (ø)
...opentelemetry-core/src/platform/node/timer-util.ts 33.33% <33.33%> (ø)
.../opentelemetry-exporter-jaeger/test/jaeger.test.ts 100% <0%> (ø) ⬆️
...entelemetry-exporter-jaeger/test/transform.test.ts 100% <0%> (ø) ⬆️
...metry-node-tracer/src/instrumentation/constants.ts 100% <0%> (ø) ⬆️
...pe-async-hooks/test/AsyncHooksScopeManager.test.ts 100% <0%> (ø) ⬆️
...e-tracer/test/instrumentation/PluginLoader.test.ts 100% <0%> (ø) ⬆️
... and 3 more


/** Add a span in the buffer. */
private _addToBuffer(span: ReadableSpan) {
  this._lastSpanWrite = Date.now();
Contributor

Should we have this here? I was thinking we would only set this value in flush. Maybe call it this._lastSpanFlush.

Basically we only want to flush spans if it's been a while since the last span was flushed so that a program that only generates a few spans will still get them flushed out within 2x of the buffer flush timeout.
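A small sketch of what that would look like (names hypothetical, not from this PR): the periodic check keys off the time of the last flush rather than the last write, so even a low-volume program gets its spans exported within roughly 2x the timeout.

// Hypothetical sketch of the suggested behavior, not the code in this PR:
// flush only when it has been a while since the *last flush*, so a program that
// emits just a few spans still sees them exported within ~2x the buffer timeout.
function startFlushTicker(
  bufferLength: () => number,
  flush: () => void,
  bufferTimeout = 20_000
): NodeJS.Timeout {
  let lastSpanFlush = Date.now();
  return setInterval(() => {
    if (bufferLength() > 0 && Date.now() - lastSpanFlush >= bufferTimeout) {
      flush();
      lastSpanFlush = Date.now();
    }
  }, bufferTimeout);
}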

Member

+1


it('should force flush when timeout exceeded', done => {
  const processor = new BatchSpanProcessor(exporter, defaultBufferConfig);
  for (let i = 0; i < defaultBufferConfig.bufferSize; i++) {
Contributor

Should we do something like .bufferSize - 1 to explicitly show we are putting in less than the full buffer of spans and then seeing that it still gets flushed?
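Something along those lines, sketched against the test above (the span fixture, exporter assertions, and config field names are assumptions based on the surrounding test code):

// Hypothetical variant of the test above: fill the buffer to one short of capacity so
// the flush can only come from the timeout, not from the size trigger. The `span`,
// `exporter`, and `assert` fixtures are assumed to match the surrounding test file.
it('should force flush when timeout exceeded', done => {
  const processor = new BatchSpanProcessor(exporter, defaultBufferConfig);
  for (let i = 0; i < defaultBufferConfig.bufferSize - 1; i++) {
    processor.onEnd(span);                       // one fewer span than bufferSize
  }
  assert.strictEqual(exporter.getFinishedSpans().length, 0);

  setTimeout(() => {
    // only the timeout can have triggered this flush
    assert.strictEqual(
      exporter.getFinishedSpans().length,
      defaultBufferConfig.bufferSize - 1
    );
    done();
  }, defaultBufferConfig.bufferTimeout + 1000);
});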

@bg451 (Member) left a comment

Besides Dave's comment about lastSpanWrite, LGTM.


@mayurkale22 mayurkale22 merged commit cb035ce into open-telemetry:master Sep 25, 2019
@mayurkale22 mayurkale22 deleted the batch_span_processor branch September 25, 2019 18:09
pichlermarc pushed a commit to dynatrace-oss-contrib/opentelemetry-js that referenced this pull request Dec 15, 2023