Hardcoding of Float64 in loss #156

Open
MartinuzziFrancesco opened this issue Feb 28, 2024 · 1 comment
Comments

@MartinuzziFrancesco

Is there a motivation for hardcoding the scaling in the loss penalties as Float64? If not, would a more generic definition allow for multiple types of regression outputs (Float64, Float32, Float16)?
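For example (a minimal, illustrative sketch; the function names are mine, not from this package), the difference between a Float64-hardcoded penalty and a type-generic one could look like this:

```julia
# Hypothetical sketch, not the package's actual code.

# Hardcoded: the Float64 literal 0.5 promotes the result, so even a
# Float32 coefficient vector yields a Float64 penalty.
penalty_hardcoded(θ::AbstractVector) = 0.5 * sum(abs2, θ)

# Generic: derive the scaling constant from the input's element type,
# so the computation stays in the caller's precision.
function penalty_generic(θ::AbstractVector{T}) where {T<:AbstractFloat}
    return T(0.5) * sum(abs2, θ)
end

penalty_hardcoded(rand(Float32, 3))  # returns a Float64 (promoted)
penalty_generic(rand(Float32, 3))    # returns a Float32 (preserved)
```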

@MartinuzziFrancesco
Author

I see that I overlooked the section in the README that specifies "All computations are assumed to be done in Float64." Of course, the issue then goes beyond the simple hardcoding in the loss; my bad.

I can start looking into a way to generalize this in the coming weeks.
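For instance (purely illustrative, not this package's actual API), the generalization might mean carrying an element type parameter through the model state rather than just the loss:

```julia
# Illustrative only: a type parameter T threaded through a model struct,
# so coefficients and the regularization strength share the caller's
# precision instead of defaulting to Float64.
struct RidgeState{T<:AbstractFloat}
    coefs::Vector{T}
    λ::T  # regularization strength, stored at the same precision
end

# Convert the scalar to match the coefficient element type.
RidgeState(coefs::Vector{T}, λ::Real) where {T<:AbstractFloat} =
    RidgeState{T}(coefs, T(λ))

s32 = RidgeState(rand(Float32, 5), 0.1)  # λ converted to Float32
```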

@github-project-automation bot moved this to priority low / involved in General on Aug 30, 2024