We have a preference for all our default functions in device code (`sin`, `cos`, etc.) that determines how integral-type arguments are treated (those device functions only support `float` and `double` arguments, so we implemented wrapper functions to convert integral types to floating-point types, #45). This preference decides whether integral types are converted to `float` or `double`. With `float` we get fast single-precision arithmetic but might lose precision (we warn in that case). With `double` we get correct precision but lose performance.
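For illustration, here is a minimal CUDA sketch of what such a wrapper could look like. The name `wrapped_sin` and the `INTEGRAL_TO_FLOAT` switch are made up for this example and are not brian2cuda's actual implementation; they just show how the preference selects the promotion type for integral arguments.

```cuda
// Hypothetical sketch of the integral-to-floating-point wrappers described
// above. Names (wrapped_sin) and the compile-time switch (INTEGRAL_TO_FLOAT)
// are illustrative assumptions, not brian2cuda's actual code.
#include <cuda_runtime.h>

// Preference: 1 = promote integral arguments to float (fast, may lose
// precision), 0 = promote to double (precise, slower).
#define INTEGRAL_TO_FLOAT 1

// Floating-point arguments map directly to the CUDA device math functions.
__device__ double wrapped_sin(double x) { return sin(x); }
__device__ float  wrapped_sin(float x)  { return sinf(x); }

// Integral arguments have no native device overload, so convert them first
// according to the preference.
#if INTEGRAL_TO_FLOAT
__device__ float  wrapped_sin(int x) { return sinf((float)x); }
#else
__device__ double wrapped_sin(int x) { return sin((double)x); }
#endif

__global__ void demo_kernel(float *out)
{
    int i = threadIdx.x;
    out[i] = wrapped_sin(i);  // integral argument goes through the wrapper
}

int main()
{
    float *d_out;
    cudaMalloc(&d_out, 32 * sizeof(float));
    demo_kernel<<<1, 32>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```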
With the support for `float32` types as default types everywhere, I feel like this conversion preference becomes obsolete. Why would one cast to `float` only in the default functions and use `double` elsewhere? And when the default type is `float`, the result from the default functions will be cast to `float` anyway, even if it was `double` before. Should we get rid of this preference altogether? We would need to check that the correct warnings are raised in that case.
`brian2.tests.test_neurongroup.test_semantics_floor_division` is currently failing since it checks that no warnings are printed. But in that test we do print a warning because we encounter an int64 to double conversion, which we can't avoid (since CUDA doesn't support long double; is that still the case?).