
Requests Per Second Plot Breaks When There Are Too Many Unique URLs #1059

Closed
williamlhunter opened this issue Aug 5, 2019 · 8 comments

Comments

@williamlhunter
Contributor

Description of issue

Plotting requests per second breaks once Locust has visited around 500 unique URLs.

Expected behavior

The total requests per second chart keeps plotting for as long as the test runs.

Actual behavior

It seems to break once you hit too many unique URLs. I believe this is because Locust gets the stats for every URL every time it updates the web UI.

Environment settings

  • OS: Windows
  • Python version: 3.7.3
  • Locust version: 0.10.0

Steps to reproduce (for bug reports)

I am using Locust to test a retail point-of-sale REST API. The API uses document SIDs in the URL, so every time I make a new transaction, I get 3-4 new URLs. Not only does this make analyzing the results harder, since I need to group the URLs together with regular expressions in pandas, but I believe it is breaking the plotting of requests per second as well.
I can hammer the API with requests that don't use the document SID in the URL forever, but I can only run a few hundred transactions before the plotting breaks.
To make sure, I made 15 new receipts per second for about 30 minutes and everything worked correctly. Afterward, I changed my locustfile to make a blank receipt and then request its contents (generating a request on a new URL), and was only able to run the load test for 2 minutes before plotting broke (at around 500 unique URLs).
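As an aside, the regex-based grouping in pandas mentioned above might look something like the following. This is a minimal sketch; the column names and the `/v1/rest/document/<SID>` URL shape are assumptions based on the description in this issue, not actual Locust stats output:

```python
import pandas as pd

# Hypothetical per-URL results, where each transaction produced a
# unique SID-bearing URL such as /v1/rest/document/123
df = pd.DataFrame({
    "name": [
        "/v1/rest/document/123",
        "/v1/rest/document/456",
        "/v1/rest/items",
    ],
    "requests": [10, 12, 30],
})

# Collapse all SID-bearing URLs into one group before aggregating
df["name"] = df["name"].str.replace(
    r"/document/\d+$", "/document/<SID>", regex=True
)
grouped = df.groupby("name", as_index=False)["requests"].sum()
print(grouped)
```

After the replace, both document URLs fall into the same group, so the aggregate shows one row for all document requests instead of one row per SID.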

Possible fix

I would love to be able to set the field that requests are grouped by to a string, something like:
response = self.client.post("/v1/rest/document", headers=self.header, json=payload, tag="creating a document")
but I'm not incredibly experienced, so this may not be as easy to implement as I'm hoping.
Also, this assumes I'm correct about why the requests per second plot is breaking. Furthermore, it would save me (and people doing similar testing) from having to group API calls after running a test.

@cgoldberg
Member

before the plotting breaks

What about it is broken?

@williamlhunter
Contributor Author

williamlhunter commented Aug 6, 2019

Sorry, I was in a bit of a rush and totally forgot to add that. Here's an example:

screenshot

As you can see, the plot shows no data for each point in the second half. However, the rps count at the top is perfectly functional.

@cgoldberg
Member

Please give a clear description of the issue, including details of what you are expecting and what is actually happening. Please exclude all other information, conjectures, or ideas for possible fixes. This issue report is packed with information that is completely irrelevant, which makes it difficult to follow.

(For example... I can't even tell if this is a bug report, or a feature request for tagging responses ... it seems like both)

@williamlhunter
Contributor Author

Fair, I'm a bit new to this if you can't tell.

Primarily, I want to fix the issue, but I believe that tagging would conveniently solve the bug as well. Here is a revised bug report:

Description

Requests per second stops plotting under certain circumstances.

Expected behavior

The reported RPS in the locust web client should be graphed accurately in the total requests per second chart.

Actual behavior

Total requests per second is not graphed properly under certain circumstances.

Generating Unique URLs:

Chart
screenshot

JSON
screenshot

Repeatedly hitting the same URL:

Chart
screenshot

JSON
screenshot

Environment Settings

  • OS: Windows and Arch Linux (running in a vm)
  • Python version: 3.7.3
  • Locust Version: 0.11.0

Steps to reproduce

Write a locust file that makes requests on a large number of unique URLs. Once the number of requested URLs hits about 500, the graph breaks.

@williamlhunter
Contributor Author

At the risk of making more conjecture, here is where I believe the bug originates:

screenshot

I believe the red bit is being truncated. This is in web.py, starting at line 104.

@RyanW89

RyanW89 commented Aug 12, 2019

Have you tried adding name="string" to your request? I.e.
response = self.client.post("/v1/rest/document", headers=self.header, json=payload, name="creating a document")

Unless I am mistaken, this is what you are after.

@williamlhunter
Contributor Author

Wow, thanks for pointing that out. That's exactly the feature I was looking for.

However, I'm going to leave this issue open because the total requests per second plot still breaks above 500 unique URLs.

@heyman
Member

heyman commented Oct 21, 2019

Fixed by #1060 if I'm not mistaken.

@heyman heyman closed this as completed Oct 21, 2019