change: add "collect" fn, make scrape fns async, ... #361
Conversation
Looks pretty reasonable to me. A couple minor suggestions on the docs.
```diff
-To avoid dependencies in this module, GC stats are kept outside of it. If you
-want GC stats, you can use https://github.com/SimenB/node-prometheus-gc-stats
+To avoid native dependencies in this module, GC statistics for bytes reclaimed
```
Isn't the data provided by that module now provided using the builtin APIs?
Everything except "bytes reclaimed" is now via builtin APIs. You might actually know the answer to this, or know who to ask... `gc-stats` just calls `GetHeapStatistics` in the GC prologue and epilogue and diffs them to calculate "bytes reclaimed". But since v8's GC is concurrent, I don't think that's an accurate method. Do you know if that's true? If so, we should remove this section of the readme.
Thanks for the review, agree with all the other changes.
It might be accurate, but I'm not sure it's useful; I suspect it might have been emitted just because it seemed possible. If many bytes are reclaimed, is that good? If only a few are reclaimed, is that good? The total usage and how it varies with time strike me as the useful metrics, and they also allow diffing scrape-to-scrape.
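The scrape-to-scrape diffing idea can be sketched without any in-process GC hooks; the `heapDeltaSinceLastScrape` helper and its module-level state are hypothetical, standing in for the delta a Prometheus query over successive scrapes would compute:

```javascript
// Sketch: expose current heap usage and let the scraper diff successive
// samples, rather than computing "bytes reclaimed" inside the process.
let prevHeapUsed = null; // hypothetical state; Prometheus holds this server-side

function heapDeltaSinceLastScrape() {
  const { heapUsed } = process.memoryUsage();
  const delta = prevHeapUsed === null ? 0 : heapUsed - prevHeapUsed;
  prevHeapUsed = heapUsed;
  return { heapUsed, delta };
}
```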
Great changes! Hope they're gonna be merged soon :)
I really like this approach 👍
Force-pushed from c14b300 to ec986e1.
Changes made and rebased on master.
@siimon unless I hear otherwise from you, I'll merge this in a few days.
* Adds a `collect` fn, which may be async, to each metric type. This provides a cleaner interface than registering collectors, and allows the collector to do asynchronous work. (Previously it would be one scrape behind if it had to do async work.)
* In turn, makes `registry.metrics()` and `registry.get...` async, and removes the `collectors`/`registerCollector` functions.
* Fixes `process_max_fds` and `process_start_time_seconds` so that they have values after the registry is reset (same vein as siimon#238)
* Cleans up the readme a bit (removes some out-of-date info)
Finally, this is the semver-major change to the interface I'm proposing per discussion in #337. What do you guys think? @sam-github I can't seem to request a review from you, but would love your thoughts.
(This is also the first non-"mechanical" entry in the changelog, FWIW.)
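To make the proposed interface concrete, here is a self-contained sketch of the pattern (toy `Gauge` and `Registry` classes, not prom-client's actual implementation): each metric may define an async `collect()`, and the registry awaits every hook before serializing, so async work lands in the current scrape instead of the next one.

```javascript
// Toy sketch of the metric-level collect() pattern proposed in this PR.
class Gauge {
  constructor({ name, help, collect }) {
    this.name = name;
    this.help = help;
    this.collect = collect; // optional, may be async
    this.value = 0;
  }
  set(v) { this.value = v; }
}

class Registry {
  constructor() { this.metricList = []; }
  register(metric) { this.metricList.push(metric); }
  async metrics() {
    // Await every metric's collect() hook before rendering, so values
    // produced asynchronously appear in this scrape, not the next.
    await Promise.all(
      this.metricList.map((m) => (m.collect ? m.collect.call(m) : undefined))
    );
    return this.metricList
      .map((m) => `# HELP ${m.name} ${m.help}\n# TYPE ${m.name} gauge\n${m.name} ${m.value}`)
      .join('\n');
  }
}

// Usage: the collect() hook runs on each scrape, and metrics() is awaited.
const registry = new Registry();
registry.register(new Gauge({
  name: 'up',
  help: 'demo metric',
  async collect() { this.set(1); },
}));
registry.metrics().then((out) => console.log(out));
```

The real prom-client types carry labels, multiple metric kinds, and a default registry, but the awaiting of `collect()` inside `metrics()` is the crux of the semver-major change.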
I don't suspect this will be a problem, but this does increase the number of Promises created on each scrape. I could decrease that by one per metric fairly easily.
I also forgot to update the benchmark code. Will do if y'all are +1s on this.