feature: Prometheus node_exporter compatible output #24
@codesenberg, I was thinking about creating a PR for this. Do you have any views on this one?
I was thinking about adding two flags. The JSON output could look something like this:

{
    "attack": {
        // attack parameters, i.e. url, number of conns., etc.
    },
    "requests_per_second": {
        "avg": 1000,
        "stdev": 200,
        "max": 2000
    },
    "latency": {
        // durations are in microseconds
        "avg": 20000.2,
        "stdev": 5000.0,
        "max": 100000.0,
        // if --latencies flag is used
        "percentiles": {
            "50": 19200.0,
            "75": 21000.0,
            "90": 23000.0,
            "99": 31000.0
        }
    },
    // throughput is in bytes per second
    "throughput": 10000.0,
    "codes": { /* HTTP status codes */ },
    // some other things I could forget
}

With this you could transform said JSON into the format you need, e.g.:

(python -c 'import sys,uuid; sys.stdout.write("test_uuid: " + uuid.uuid4().hex + "\n")' && bombardier --print=result --format=json <url> | transform-bombardier-output-to-prometheus-format) > bombardier.prom

Sounds good?
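As an aside, the transform-bombardier-output-to-prometheus-format step in the pipeline above is left unspecified. A minimal sketch of such a script, assuming the JSON shape proposed in this comment, the bombardier_http_* metric names requested in the issue, and a made-up test_uuid label, could look like this:

```python
#!/usr/bin/env python3
# Sketch of a hypothetical transform-bombardier-output-to-prometheus-format step:
# read bombardier's (proposed) JSON result from stdin and emit Prometheus
# text-exposition lines suitable for node_exporter's textfile collector.
# The test_uuid label and the exact JSON field names are assumptions.
import json
import sys
import uuid


def main():
    result = json.load(sys.stdin)
    run_id = uuid.uuid4().hex  # one UUID per benchmark run, used as a label
    lines = []

    # Request rate (requests per second): avg/stdev/max from the JSON sketch.
    rps = result.get("requests_per_second", {})
    for stat in ("avg", "stdev", "max"):
        if stat in rps:
            lines.append(
                'bombardier_http_request_rate_%s{test_uuid="%s"} %s'
                % (stat, run_id, rps[stat])
            )

    # Latency (microseconds in the sketch): avg/stdev/max.
    latency = result.get("latency", {})
    for stat in ("avg", "stdev", "max"):
        if stat in latency:
            lines.append(
                'bombardier_http_latency_%s{test_uuid="%s"} %s'
                % (stat, run_id, latency[stat])
            )

    sys.stdout.write("\n".join(lines) + "\n")


if __name__ == "__main__":
    main()
```

Pointing node_exporter's textfile collector (its --collector.textfile.directory flag) at the directory holding the resulting bombardier.prom would then get the values scraped on the next cycle.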
Just to clarify a little bit
@greenpau, you know, we could actually step a bit further and just add
Now that's a conversation 👍
This commit adds a --print flag that allows users to specify what to output. The flag accepts a list of comma-separated values. Allowed values are: intro (short: i), progress (p), result (r). Examples:
--print=i,p,r # outputs everything
--print=intro,progress,result # same as above
--print=i,r # intro & result only
Closes #25, updates #24.
Hey, @greenpau, how important is that UUID thing for your use case? I plan to implement user-defined templates, so if it's somewhat important (granted I do in fact implement them), I could add some
@codesenberg, it is pretty important, because it allows storing that transaction in other systems (and correlating it later), e.g.
I'll probably add helpers for all five versions of UUID from this library.
That sounds great! 👍
This commit adds a --format flag, which allows users to specify the output format. The --format flag accepts either a format known to bombardier as a string or a path to a user-defined template (prefixed with 'path:'). I also added detailed documentation on user-defined templates to help users write their own. Closes #26, updates #24.
Ok, user-defined templates landed on master just now. Let me know if everything is in place to generate the output you want.
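For illustration, assuming a hypothetical user-written template file named prometheus.tmpl that renders Prometheus' text format, a run along these lines should now be enough to produce a textfile-collector-ready file:

bombardier --print=result --format=path:prometheus.tmpl <url> > bombardier.prom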
@codesenberg, 👍
Hello @codesenberg, |
@tkanos, well, I don't know if there is some reasonable default output format for Prometheus that we could use, mainly because I'm not too familiar with Prometheus in general. But if there is one, then sure, we could add it.
By the way, @greenpau, have you succeeded in your endeavor to generate Prometheus-compatible output using user-defined output templates?
@codesenberg, anyone who wants this would have to configure Prometheus to scrape an endpoint that we would have to provide. And if they configure it to scrape every 15 s and run a 5 s benchmark, they won't get any information, and after the benchmark they'll have to remove that config. It could make sense in a benchmark suite, where all benchmarks are managed by a web application that runs every project's benchmarks and provides the info to Prometheus, but that's not the goal of bombardier. Or do what @greenpau is doing: generate a .prom file during CI/CD and provide it to Prometheus to alert if there is an issue.
@codesenberg, unfortunately I am tied up doing other things at the moment. I think the way you did it with templates is great, because then, using that same templating technique, you could generate Elasticsearch inserts. Once I get back to app testing, I will post here.
@tkanos, let's leave things as they are then. (At least for the time being.)
I'll close this one, since more than two years have passed already. It's unlikely that we'll hear any feedback 😄
The output produced by bombardier cannot be consumed by Prometheus. It would be nice to have metrics like this:

Prometheus' node_exporter has a textfile collector. The collector scans a directory for .prom files and adds the metrics found in those files to its own metrics set. This way, I can run bombardier every minute, output the results to a bombardier.prom file, and the metrics will be picked up by the Prometheus server. I would like to come up with a number of metrics and have an option to output the results as Prometheus metrics.
For example, the following output:
would result in the following metrics:
bombardier_http_request_rate_avg
bombardier_http_request_rate_max
bombardier_http_request_rate_stdev
bombardier_http_latency_avg
bombardier_http_latency_stdev
bombardier_http_latency_max
Each metric could have one or more labels associated with it.
For example, the following command:
would result in the following metrics:
Further, it would be helpful to output a UUID associated with each test.
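To make the target concrete, a bombardier.prom file in Prometheus' text exposition format might look roughly like the following; the metric names come from the list above, while the url and test_uuid labels and the sample values are illustrative only:

```
# TYPE bombardier_http_request_rate_avg gauge
bombardier_http_request_rate_avg{url="http://localhost:8080",test_uuid="6f1caf2b9e1d4c2f8a3b4c5d6e7f8091"} 1000
# TYPE bombardier_http_latency_avg gauge
bombardier_http_latency_avg{url="http://localhost:8080",test_uuid="6f1caf2b9e1d4c2f8a3b4c5d6e7f8091"} 20000.2
```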