Add benchmarks #1103

Merged · 3 commits merged into JabRef:master on Apr 5, 2016

Conversation

tobiasdiez
Member

This PR adds some basic benchmarks for parsing and writing a bib file. The results are as follows for a database consisting of 1000 entries.
| Benchmark                       | Score     | Error     | Units |
| ------------------------------- | --------- | --------- | ----- |
| Benchmarks.parse                | 49736.582 | ± 788.879 | ops/s |
| Benchmarks.write                | 0.706     | ± 0.012   | ops/s |
| Benchmarks.search               | 258.838   | ± 5.604   | ops/s |
| Benchmarks.inferBibDatabaseMode | 1297.622  | ± 22.910  | ops/s |
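
For context, a JMH benchmark of this kind is structured roughly like the sketch below. This is a minimal illustration, not the code added in this PR: the setup builds a synthetic `.bib` string, and the benchmark body is a trivial stand-in for JabRef's actual BibTeX parser call.

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
public class BenchmarksSketch {

    private String bibtexSource;

    @Setup
    public void generateData() {
        // Build a synthetic .bib file with 1000 entries, analogous to the
        // randomized setup in the PR; the fields here are made up.
        Random randomizer = new Random();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("@article{id").append(i).append(",\n")
              .append("  title = {Title ").append(randomizer.nextInt()).append("},\n")
              .append("  year = {2016}\n")
              .append("}\n\n");
        }
        bibtexSource = sb.toString();
    }

    @Benchmark
    public long parse() {
        // Stand-in workload; in the PR this would call JabRef's BibTeX parser.
        // Returning a value keeps the JIT from eliminating the work as dead code.
        return bibtexSource.chars().filter(c -> c == '@').count();
    }
}
```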

As one can see, the parse operation is many orders of magnitude quicker than writing. I had a closer look at the write operation, and it turned out that 66% of the time is spent in Database.getMode(). Some small changes improved the situation by a factor of 10:
| Benchmark                       | Score     | Error      | Units |
| ------------------------------- | --------- | ---------- | ----- |
| Benchmarks.parse                | 42031.971 | ± 8188.833 | ops/s |
| Benchmarks.write                | 8.299     | ± 0.304    | ops/s |
| Benchmarks.search               | 248.093   | ± 7.573    | ops/s |
| Benchmarks.inferBibDatabaseMode | 20759.711 | ± 397.031  | ops/s |

I suspect the changes in #1100 will improve the situation even more (since the database mode is cached there).
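
A minimal sketch of the kind of caching meant here, i.e. inferring the mode once and invalidating the cached value when the database changes; the class, method, and field names below are illustrative and not JabRef's actual API.

```java
import java.util.List;

// Illustrative only: a lazily cached "mode" that is recomputed only after the
// entry list changes, instead of being re-inferred on every getMode() call.
class CachedModeDatabase {

    enum Mode { BIBTEX, BIBLATEX }

    private final List<String> entryTypes;
    private Mode cachedMode;   // null means "not inferred yet / invalidated"

    CachedModeDatabase(List<String> entryTypes) {
        this.entryTypes = entryTypes;
    }

    Mode getMode() {
        if (cachedMode == null) {
            cachedMode = inferMode();   // the expensive scan runs at most once per change
        }
        return cachedMode;
    }

    void addEntryType(String type) {
        entryTypes.add(type);
        cachedMode = null;              // invalidate on modification
    }

    private Mode inferMode() {
        // Toy inference rule standing in for the real heuristic.
        return entryTypes.stream().anyMatch("electronic"::equalsIgnoreCase)
                ? Mode.BIBLATEX
                : Mode.BIBTEX;
    }
}
```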

(By the way, `gradle jmh` runs the benchmarks, so it's pretty simple to use.)

  • Change in CHANGELOG.md described
  • Tests created for changes
  • Screenshots added (for bigger UI changes)

@tobiasdiez added the status: ready-for-review label on Apr 5, 2016
@tobiasdiez changed the title from [WIP] Add benchmarks to Add benchmarks on Apr 5, 2016
@tobiasdiez
Member Author

Ready for review.


Random randomizer = new Random();
for(int i = 0; i < 1000; i++)
{
Contributor

formatting

I would use more elements, e.g. 10k, as this should be more helpful for finding issues.

Member Author

The writing operation takes over 2 minutes per iteration with 10k items, so this would result in > 30 min of benchmark time.

Contributor

Hm, ok. Then leave it as is.
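
If larger databases do become interesting later, JMH's `@Param` would allow making the entry count configurable (overridable at run time, e.g. with JMH's `-p entryCount=10000` option) while keeping 1000 as the default; a sketch with made-up field names:

```java
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class SizedBenchmarkState {

    // Default stays at 1000 so the standard run remains short; larger sizes
    // can be requested explicitly without touching the code.
    @Param({"1000"})
    public int entryCount;

    public String bibtexSource;

    @Setup
    public void generate() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < entryCount; i++) {
            sb.append("@article{id").append(i).append(", title = {t").append(i).append("}}\n");
        }
        bibtexSource = sb.toString();
    }
}
```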

@simonharrer
Contributor

Really like this. However, we cannot easily benchmark the GUI performance. But we can benchmark the MainTableDataModel from #1100, which does all the heavy lifting regarding sorting, filtering, etc.

It would be awesome if we could track the progress of these benchmarks, but that would require something like Jenkins. Probably something for the future.

@tobiasdiez
Member Author

I changed the code according to your comments.

Yes, it would be nice to have an overview of the performance for each PR. Since JMH writes the results to a text file in build\reports\jmh, it shouldn't be too hard to get the numbers.
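
A minimal sketch of pulling the per-benchmark scores back out of such a report; the file name and line format below are assumptions based on the comment above rather than verified plugin defaults.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Illustrative only: scan a JMH text report and print the lines that look like
// "Benchmarks.parse  thrpt  ...  49736.582 ± 788.879  ops/s".
public class JmhReportReader {

    public static void main(String[] args) throws IOException {
        // Assumed report location; the actual file name may differ per setup.
        Path report = Paths.get("build", "reports", "jmh", "results.txt");
        List<String> lines = Files.readAllLines(report);
        lines.stream()
             .filter(line -> line.trim().startsWith("Benchmarks."))
             .forEach(System.out::println);
    }
}
```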

@simonharrer
Contributor

👍 LGTM

@tobiasdiez merged commit 97e1293 into JabRef:master on Apr 5, 2016
@koppor deleted the benchmark branch on April 26, 2016 at 06:02