Tutorial 1 - Lennard-Jones #3881
Conversation
now use zero-based indexing in the loop, see espressomd#3818
Check out this pull request on ReviewNB: review Jupyter notebook visual diffs & provide feedback on notebooks.
This follows the steps provided in https://github.com/espressomd/espresso/wiki/Documentation#tutorials
Breaks tests: According to the documentation of Exercise2 (https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/exercise2), alternative solutions are consecutive cells in a solution block. However, when converting with our CMake infrastructure, *all* resulting code blocks are executed.
We will not be able to support alternative solutions: it would require running the test several times. The one without the loop is better practice (more declarative), so let's keep that.
I added learning objectives to the tutor's notes
Agreed and pushed in 7562149
@jngrad I accidentally deleted my comments in the started review discussion. The only thing left from there is the possibility to check all the links in the notebooks using the CMake infrastructure.
Thanks for writing links to the user guide, this will be very helpful to new users!
Other points to address that are not part of the diff:
- The MSD correlator explanation is no longer accurate. The first two columns don't store the lag time and sample size anymore; instead one has to call `msd_corr.lag_times()` explicitly to get the lag times. This regression was introduced in a previous PR.
- The introduction paragraph needs some cleanup. The core is no longer written in the C language.
Co-authored-by: Jean-Noël Grad <[email protected]>
@schlaicha please use batch commits, we have limited CI resources :)
Sorry, it was too tempting to just click on the button.
Click "Add suggestion to batch" (see the GitHub docs).
This also requires shifting the initialization of the thermostat to after the steepest descent.
For the most part, minor fixes. If the overlap removal would become somewhat simpler, that would be great for a beginner's tutorial.
I agree that this reads complicated. But the problem is that in steepest descent we minimize energies, whereas the MD relies on forces only.
I’m confused. The steepest descent code in ESPResSo (to my knowledge) is force-based. It moves particles by a certain amount along the force vectors acting on them.
It terminates when the largest force observed during the last step is less than `f_max`.
See https://github.com/espressomd/espresso/blob/python/src/core/integrators/steepest_descent.cpp
Isn’t that exactly what’s needed?
Maybe I’m missing something and we should discuss it off-line
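For illustration, that force-based termination can be sketched in a few lines of plain Python on a single Lennard-Jones pair. This is a toy re-implementation, not the ESPResSo code; the names and the `gamma`, `max_disp`, and `f_max` values are arbitrary choices for the sketch:

```python
def lj_force(r, eps=1.0, sig=1.0):
    """Magnitude of the LJ force between two particles, -dU/dr."""
    return 24 * eps * (2 * (sig / r)**13 - (sig / r)**7) / sig

def steepest_descent(r, gamma=0.01, max_disp=0.05, f_max=1e-4, steps=10000):
    """Move along the force, with a capped displacement per step,
    until the largest force drops below f_max (force-based criterion)."""
    for _ in range(steps):
        f = lj_force(r)
        if abs(f) < f_max:          # force-based convergence check
            break
        step = max(-max_disp, min(max_disp, gamma * f))  # displacement cap
        r += step
    return r

r_min = steepest_descent(r=0.9)     # start from an overlapping pair
print(r_min)                        # converges near the LJ minimum 2**(1/6)
```

Starting well inside the repulsive core, the pair distance relaxes to the potential minimum without any reference to energies.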
I'm sorry, of course you're right. On a Sunday evening I missed that −∇E is equal to the force. And indeed my argument is exactly the opposite for MD/MC.
Yes, only that in general I would not know what value of `f_max` is good enough for a specific system / time step. My suggestion was to have a relative convergence criterion directly in the integrator (which would simplify the typical simulation script/tutorial). But maybe that is also not needed.
So removing the energy convergence criterion is simple enough? I would not see how to make it much simpler otherwise...
Yes, a force-only criterion is what I’d use.
The totally clean way to do it would be comparing the acceleration `F/m` to `1/dt`, since that is what determines integrator stability, independent of the system.
But let’s sort that out another time.
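One possible reading of that idea, as a hedged sketch: stop when the displacement the current forces could produce in one step, roughly `(F/m) * dt**2`, is a small fraction of the interaction range. The function name, the `sigma` length scale, and the tolerance are all assumptions for illustration, not anything ESPResSo provides:

```python
def is_relaxed(forces, masses, dt, sigma, tol=1e-2):
    """System-independent convergence check (sketch): True when the
    largest single-step displacement ~ (F/m) * dt**2 implied by the
    current forces is below a small fraction of the length scale sigma."""
    max_accel = max(abs(f) / m for f, m in zip(forces, masses))
    return max_accel * dt**2 < tol * sigma

# toy usage with forces in reduced LJ units
print(is_relaxed(forces=[0.5, -1.2], masses=[1.0, 1.0], dt=0.01, sigma=1.0))
print(is_relaxed(forces=[1e4], masses=[1.0], dt=0.01, sigma=1.0))
```

The appeal of such a criterion is that the user only supplies quantities already present in the script (`dt`, masses, the LJ `sigma`) instead of guessing a system-specific `f_max`.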
Triggers an error message if the exact same Python interpreter build cannot be found on the computer, with a prompt to select one of the available Python interpreters.
"## Comparison with the literature\n",
"\n",
"Empirical radial distribution functions have been determined for pure fluids <a href='#[5]'>[5]</a>, mixtures <a href='#[6]'>[6]</a> and confined fluids <a href='#[7]'>[7]</a>. We will compare our distribution $g(r)$ to the theoretical distribution $g(r^*, \\rho^*, T^*)$ of a pure fluid <a href='#[5]'>[5]</a>."

"From the roughly exponential decay of the correlation function it dawns that only every second sample should be used to calculate statistically independent averages."
My lecture on error estimation advises against this practice because it can lead to an overestimation of the standard error of the mean. I think it is better practice to carry out binning analysis, or better, autocorrelation function integration.
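A minimal, stdlib-only sketch of the binning (block-averaging) analysis mentioned above; the AR(1) toy data, block sizes, and seed are illustrative choices:

```python
import random
import statistics

def block_sem(data, block_size):
    """Standard error of the mean from non-overlapping block averages.
    For correlated data the estimate grows with block_size until the
    blocks exceed the correlation time, then it plateaus at the true SEM."""
    n_blocks = len(data) // block_size
    blocks = [statistics.fmean(data[i * block_size:(i + 1) * block_size])
              for i in range(n_blocks)]
    return statistics.stdev(blocks) / n_blocks**0.5

random.seed(42)
# correlated toy data: AR(1) process with a correlation time of ~10 samples
x, data = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    data.append(x)

for bs in (1, 10, 100, 1000):
    print(bs, block_sem(data, bs))
```

The naive estimate (block size 1) underestimates the error for correlated samples; once the plateau is reached, the blocked estimate is a sound error bar without discarding any data.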
Refactor and improve the LJ tutorial.
Fixes #3818
Description of changes: