Particle benchmarks #2381
Conversation
A benchmarking suite to measure integrator execution time under various conditions: build features, `Particle` struct implementation, number of cores, number of particles, and interaction types.
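The core measurement pattern is wall-clock timing around the integrator loop. Below is a minimal sketch of that idea, assuming the espressomd Python API; the box size, time step, and step counts are illustrative assumptions, not the benchmarks' actual values:

```python
# Minimal timing sketch; all parameter values here are illustrative.
import time

import espressomd

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.time_step = 0.01
system.cell_system.skin = 0.4
# ... particle placement and interaction setup would go here ...

n_steps = 1000
tick = time.time()
system.integrator.run(n_steps)
tock = time.time()
print(f"time per integration step: {(tock - tick) / n_steps:.3e} s")
```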
Codecov Report
@@           Coverage Diff           @@
##           python    #2381   +/-  ##
=======================================
  Coverage      72%      72%
=======================================
  Files         393      393
  Lines       18483    18483
=======================================
  Hits        13434    13434
  Misses       5049     5049
=======================================

Continue to review the full report at Codecov.
In the lj script, you're setting a force cap (20) and never removing it. Please switch the command-line handling to argparse in both test scripts (see the sketch below). Please also simplify the p3m script: typical Bjerrum lengths would be on the order of ~4 sigma. The runner script should be written to time the currently checked out version.
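For illustration, a minimal argparse sketch of what the command-line handling could look like; the option names and defaults are assumptions, not the scripts' actual interface:

```python
# Hypothetical command-line interface; option names are illustrative only.
import argparse

parser = argparse.ArgumentParser(description="LJ benchmark")
parser.add_argument("--particles", type=int, default=1000,
                    help="number of particles")
parser.add_argument("--volume-fraction", type=float, default=0.5,
                    help="particle volume fraction")
parser.add_argument("--seed", type=int, default=42,
                    help="RNG seed for particle placement")
args = parser.parse_args()

# The force cap is only needed while warming up overlapping particles;
# it should be lifted before the timed integration (in espressomd,
# setting it to 0 disables capping):
#     system.force_cap = 20.0
#     system.integrator.run(warmup_steps)
#     system.force_cap = 0.0
```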
runner.sh runs benchmarks on the currently checked out version.
suite.sh runs runner.sh on different commits.
Benchmarks are now created in CMake using add_test(). Failing benchmarks no longer break the runner.sh and suite.sh scripts. Error messages are still visible.
Good idea. Could you please have a look at the output of the LJ script? For volume fractions below 0.5 without a force cap, the script sometimes crashes with an error message. You may have to test different seed values at line 88 in lj.py until the error appears (cmd: ...).
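A hypothetical reproduction loop, assuming lj.py accepts a --seed option as sketched above (the flag name is an assumption):

```python
# Hypothetical: scan seeds until the LJ benchmark crashes; --seed is an
# assumed option, not necessarily the script's actual flag.
import subprocess
import sys

for seed in range(100):
    result = subprocess.run([sys.executable, "lj.py", "--seed", str(seed)],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"seed {seed} failed:\n{result.stderr}")
        break
```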
Please add calls to tune_skin before and after the p3m setup. The optimal skin is density-dependent.
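A minimal sketch of the suggested tuning calls, assuming the espressomd 4.x API; all numeric values are illustrative, not tuned recommendations:

```python
# Tune the Verlet-list skin before and after P3M setup; the optimal skin
# can change once the long-range actor is active.
import espressomd
from espressomd import electrostatics

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.time_step = 0.01
# ... place charged particles here (P3M requires a charged system) ...

system.cell_system.tune_skin(min_skin=0.1, max_skin=1.0,
                             tol=0.05, int_steps=100)
p3m = electrostatics.P3M(prefactor=1.0, accuracy=1e-3)
system.actors.add(p3m)
system.cell_system.tune_skin(min_skin=0.1, max_skin=1.0,
                             tol=0.05, int_steps=100)
```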
Can be merged.
Are you sure? The LJ benchmark still fails on my machine depending on the parameters.
I think we planned to buy CI machines. We need a dedicated one for such benchmarks.
Sure.
That's difficult. Our runners are configured for multiple concurrent jobs. We would need to do something hacky that reconfigures the runner for a single job, via a cron job or something.
That depends on whether the university administration gives us the money for it.
Not necessarily, we just need one that doesn't do other stuff at the same time.
Are you sure? The LJ benchmark still fails on my machine depending on the parameters.
Which parameters, precisely? I ran both of them, but did not vary the parameters extensively.
Try ...
I can't reproduce this. Can you please post the error message?
I think I fixed the remaining issues. I'll merge this for now, so we can make use of it. |
LJ and P3M benchmarks discussed in #2239 and extracted from #2296.