Is there a motivation for hardcoding the scaling in the loss penalties as Float64? If not, would a more generic definition allow for multiple types of regression outputs (Float64, Float32, Float16)?
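For illustration, here is a minimal Julia sketch (hypothetical function names, not the package's actual code) of the issue: a Float64 literal inside a penalty promotes Float32 inputs to Float64, whereas a parametric definition preserves the input's element type.

```julia
# Hypothetical penalty with a hardcoded Float64 scaling constant.
# Multiplying a Float32 input by the Float64 literal 0.5 promotes
# the result to Float64.
penalty_hardcoded(x) = 0.5 * x^2

# A generic alternative: convert the constant to the input's type,
# so Float32 (or Float16) outputs stay in that precision.
penalty_generic(x::T) where {T<:AbstractFloat} = T(0.5) * x^2

x = 1.0f0                           # a Float32 input
typeof(penalty_hardcoded(x))        # Float64 (promoted)
typeof(penalty_generic(x))          # Float32 (preserved)
```

Generalizing along these lines would let the loss code follow the element type of the data rather than forcing everything to Float64.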
I see that I overlooked the section in the README that specifies "All computations are assumed to be done in Float64." Of course, the issue goes beyond the simple hardcoding in the loss; my bad.
I can start looking into a way to generalize this in the coming weeks.