Refactor LB exception mechanism #4355
Conversation
Intermediate Cython functions need to be marked as `except +` to forward a C++ exception from the core. If the intermediate Cython function has a C return type, the returned value is not a Python object and the exception cannot be wrapped in it; instead, a sentinel value is used for C++ types that support one, while void functions need to be marked as `except *` since they don't return a value, which comes with an overhead in the form of a function call to PyErr_Occurred(). For simplicity, the return types of all intermediate Cython functions were removed in the LB interface.
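For reference, here is a minimal Cython sketch of the three annotations discussed above. It is not taken from lb.pxd: the extern header path and the first two helper names are made up for illustration; only the last function mirrors the pattern adopted in this PR.

```cython
# illustrative only; the extern header path and the first two helpers are assumptions
cdef extern from "lb_interface.hpp":
    void lb_lbfluid_set_density(double dens) except +    # C++ exception -> Python exception
    double lb_lbfluid_get_density() except +

# C return type: the sentinel value -1.0 signals to the caller that an exception was raised
cdef double get_density_with_sentinel() except -1.0:
    return lb_lbfluid_get_density()

# void return type: no sentinel is possible, so `except *` makes Cython call
# PyErr_Occurred() after every invocation (the overhead mentioned above)
cdef void set_density_checked(double dens) except *:
    lb_lbfluid_set_density(dens)

# no declared return type: the function returns a Python object, so the error can
# always be signalled through the return value; this is the pattern used in this PR
cdef inline python_lbfluid_set_density(double dens, double agrid) except +:
    lb_lbfluid_set_density(dens * agrid**3)
```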
@@ -104,7 +104,7 @@ def test_averages(self):
         # ... globally,
         self.assert_allclose_matrix(
             np.mean(self.p_global, axis=0),
-            p_avg_expected, atol_diag=c_s_lb**2 / 6, atol_offdiag=c_s_lb**2 / 9)
+            p_avg_expected, atol_diag=c_s_lb**2 / 5, atol_offdiag=c_s_lb**2 / 9)
why is this tolerance increased?
The old lb.pyx file used to do unit conversions with agrid and tau as Python floats, while now it is done with C++ doubles, which don't have the same precision. In very long simulations like these statistical tests, a tiny numerical error in the properties of the fluid can lead to large numerical fluctuations after 5000 steps. If we look at the current python branch, the diagonal of the pressure tensor deviates from the expectation value by 0.0600, while the absolute tolerance is c_s_lb**2 / 6 = 0.0555. The test still passes because the diagonal has a large value (50006.38), and thus the relative tolerance contributes 50006.38 * 1e-07 = 0.0050, which brings the total tolerance to 0.0605; this is larger than the deviation, but not by much. In fact, this was already an issue in the walberla branch, where I had to adjust this tolerance a few weeks ago when the pressure tensor code was added. With the new absolute tolerance of c_s_lb**2 / 5 = 0.0666 we are no longer in the danger zone.
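For reference, the arithmetic above can be reproduced with a short Python sketch; the numpy-style combination atol + rtol * |reference| and the value c_s_lb**2 = 1/3 are assumptions that are consistent with the numbers quoted above.

```python
c_s_lb_sq = 1. / 3.        # assumed squared speed of sound in LB units
diag = 50006.38            # diagonal value of the pressure tensor
deviation = 0.0600         # observed deviation from the expectation value
rtol_term = diag * 1e-07   # contribution of the relative tolerance, ~0.0050

print(c_s_lb_sq / 6 + rtol_term > deviation)  # True, but only by ~0.0005 (old tolerance)
print(c_s_lb_sq / 5 + rtol_term > deviation)  # True, with a ~0.012 margin (new tolerance)
```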
I don't really see why Python floats in CPython should have a different precision than C++ doubles, but since the relative tolerance is that large, this should not matter.
For example, with the LB density setter, we have to calculate p_dens * agrid * agrid * agrid in the unit conversion. The original lb.pxd would do __pyx_PyFloat_AsFloat(__pyx_v_p_dens) and call PyNumber_Multiply(__pyx_t_5, __pyx_v_agrid) three times, while the new lb.pxd simply does __pyx_v_dens * pow(__pyx_v_agrid, 3.0). This difference is probably enough to introduce an error of the order of std::numeric_limits<double>::epsilon.
Generated C++ code
Here is the original lb.pxd code:
/* "espressomd/lb.pxd":127
* # get pointers
* if isinstance(p_dens, float) or isinstance(p_dens, int):
* c_dens = <float > p_dens * agrid * agrid * agrid # <<<<<<<<<<<<<<
* else:
* c_dens = p_dens * agrid * agrid * agrid
*/
__pyx_t_4 = __pyx_PyFloat_AsFloat(__pyx_v_p_dens); if (unlikely((__pyx_t_4 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 127, __pyx_L1_error)
__pyx_t_5 = PyFloat_FromDouble(((float)__pyx_t_4)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 127, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__pyx_t_6 = PyNumber_Multiply(__pyx_t_5, __pyx_v_agrid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 127, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
__pyx_t_5 = PyNumber_Multiply(__pyx_t_6, __pyx_v_agrid); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 127, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
__pyx_t_6 = PyNumber_Multiply(__pyx_t_5, __pyx_v_agrid); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 127, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
__pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_6); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 127, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
__pyx_v_c_dens = __pyx_t_7;
and here is the new lb.pxd code:
/* "espressomd/lb.pxd":125
*
* cdef inline python_lbfluid_set_density(double dens, double agrid) except +:
* lb_lbfluid_set_density(dens * agrid**3) # <<<<<<<<<<<<<<
*
* cdef inline python_lbfluid_set_viscosity(double visc, double agrid, double tau) except +:
*/
try {
lb_lbfluid_set_density((__pyx_v_dens * pow(__pyx_v_agrid, 3.0)));
} catch(...) {
__Pyx_CppExn2PyErr();
__PYX_ERR(3, 125, __pyx_L1_error)
}
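As a standalone illustration of the difference described above (a small Python sketch with made-up values, not the actual setter code): chaining three multiplications and calling pow() round differently, so the results can disagree in the last bits of a double.

```python
import math

p_dens, agrid = 0.85, 0.8                      # made-up values

chained = ((p_dens * agrid) * agrid) * agrid   # old path: three rounded multiplications
powed = p_dens * math.pow(agrid, 3.0)          # new path: a single pow() call

# the two results agree to ~16 significant digits, but the difference can be
# a few ulps, i.e. of the order of the machine epsilon relative to the value
print(chained - powed, math.ulp(powed))
```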
self.check_raise_if_not_active(self.lb_class, False)
self.check_raise_if_not_active(MockLBFluid, True)
Note: this mocked LB class was introduced to help test exceptions thrown from core getters. It works around the function assert_agrid_tau_set(), which intercepts some of the calls to core functions if agrid or tau is not set. Since this is a function from a compiled Cython file, it's not possible to redefine it from the Python interpreter. The mocked class solves the issue without too much code. In the walberla branch, we don't need this workaround.
Fixes #4354

Description of changes:
- lb.pxd: NoLBActive() now raises a Python exception