Performance regression in `Normed` -> `Float` conversions on Julia v1.3.0 #144
Comments
I think those types are very niche. I'm not that worried.
I agree, but my concern is the cause rather than the result. The investigation may help improve other methods (e.g. …).

Benchmark

```julia
julia> versioninfo()
Julia Version 1.3.0
Commit 46ce4d7933 (2019-11-26 06:09 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
```

Matrix of Vec4 (unit: μs)
[benchmark table not recovered from the page]
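A per-element timing along these lines can reproduce the comparison; this is a hedged sketch using BenchmarkTools, with a plain 4-element tuple standing in for the `Vec4` type (which is defined elsewhere, not here):

```julia
using FixedPointNumbers, BenchmarkTools

# A 4-element tuple of N0f32 values standing in for Vec4{N0f32}.
v = ntuple(_ -> reinterpret(N0f32, rand(UInt32)), 4)

# Time the elementwise N0f32 -> Float32 conversion under discussion.
@btime map(Float32, $v)
```

Running the same snippet under Julia v1.2.0 and v1.3.0 is the straightforward way to confirm the reported slowdown on a given machine.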
I have confirmed that Julia v1.2.0 and v1.3.0 give nearly identical results on `Normed` -> `Float` conversions (#129, #138). However, I found a performance regression (~2x - 3x slower) on x86_64 machines in the following cases:

- `Vec4{N0f32}` -> `Vec4{Float32}`
- `Vec4{N0f64}` -> `Vec4{Float32}`
- `Vec4{N0f64}` -> `Vec4{Float64}`

(cf. #129 (comment))

I'm not going to rush to investigate the cause or fix this problem. I submit this issue as a placeholder in case any useful information is found.
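For context on what these conversions compute: an `N0f32` value is its raw `UInt32` divided by `typemax(UInt32)`. The following is an illustrative baseline sketch, not FixedPointNumbers' actual (optimized) method:

```julia
using FixedPointNumbers

# Naive N0f32 -> Float32: divide the raw integer by the raw value of one.
# (This goes through a Float64 intermediate; it is only a reference
# implementation for comparing against the library's method.)
naive_f32(x::N0f32) = Float32(reinterpret(x) / 0xffffffff)

x = reinterpret(N0f32, 0xffffffff)  # the value 1.0
naive_f32(x)  # -> 1.0f0
```

Differences between a baseline like this and the library's specialized method are where version-to-version codegen changes (e.g. in LLVM) tend to show up.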