Type promotion error: Float32 * (1 - Bool) yields Float64 #484
Comments
That's just how broadcast works. I'm not sure what the issue is here: it's expected that some broadcast operations promote to Float64, and it's equally expected that some back-ends (like Metal) do not support Float64. If you can demonstrate that we're doing something different from Base's broadcast, then this is a bug. If not, the caller should take care not to perform a broadcast that promotes to Float64.
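For context, a minimal illustration of the kind of promotion being described (my own example, not from the original thread): with plain Base arrays, mixing Float32 values with a Float64 scalar promotes the broadcast result to Float64, which a back-end without Float64 support cannot execute.

```julia
julia> G = Float32[1.0373293, 1.0380119];

julia> typeof(G .* 1.0)   # Float64 literal forces promotion
Vector{Float64} (alias for Array{Float64, 1})

julia> typeof(G .* 1f0)   # Float32 literal keeps the narrow element type
Vector{Float32} (alias for Array{Float32, 1})
```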
@maleadt I don't really understand why there is a discrepancy, but I get Float32 for the overall computation when I'm not using the GPU:

```julia
G = Float32[1.0373293, 1.0380119]
t = Bool[true, false]
f = Float32[true, false]
sum(G .* (1 .- t))

julia> typeof(sum(G .* (1 .- t)))
Float32

julia> typeof(sum((1 .- t)))
Int64
```
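One likely reason the CPU result stays narrow (a side note, relying only on Julia's documented promotion rules): Float32 deliberately wins over machine integers under promotion, so the Int64 intermediate produced by `1 .- t` does not widen the product, whereas any Float64 in the mix would.

```julia
julia> promote_type(Float32, Int64)    # Float32 wins over machine integers
Float32

julia> promote_type(Float32, Float64)  # but Float64 wins over Float32
Float64
```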
I don't see a discrepancy?

```julia
julia> G = jl(G)
2-element JLArray{Float32, 1}:
 1.0373293
 1.0380119

julia> t = jl(t)
2-element JLArray{Bool, 1}:
 1
 0

julia> f = jl(t)
2-element JLArray{Bool, 1}:
 1
 0

julia> sum(G .* (1 .- t))
1.0380119f0

julia> typeof(ans)
Float32
```
But where is the Metal.jl Float64 type coming from, then?
I came up with a more precise MWE; it turns out this is an issue solely related to Zygote.
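For illustration, a guess at the shape of such a Zygote MWE (not the reporter's actual code): the forward pass of the original expression stays Float32, but the pullback Zygote builds is free to create intermediates with promoted types, which is where a Float64 could appear on the GPU.

```julia
using Zygote

G = Float32[1.0373293, 1.0380119]
t = Bool[true, false]

loss(G) = sum(G .* (1 .- t))  # forward pass stays Float32

# The pullback may internally promote (e.g. via the Int64 from 1 .- t),
# even though the projected gradient comes back as Float32 on the CPU.
grad = Zygote.gradient(loss, G)[1]
eltype(grad)
```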
Thanks for looking into it!
I think the issue is with this function:

GPUArrays.jl/src/host/broadcast.jl, line 32 in ff0018f

In that, for certain linear combinations of `Float32` and `Bool`, it yields a `Float64` type. MWE below yields the error:
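A minimal sketch of one way to probe the element type the broadcast machinery computes for a Float32/Bool combination (a stand-in assuming the JLArrays reference back-end, not the original MWE or error):

```julia
using JLArrays  # CPU-hosted reference implementation of the GPUArrays interface

G = jl(Float32[1.0373293, 1.0380119])
t = jl(Bool[true, false])

# Element type Base's broadcast machinery infers for the combination:
bc = Broadcast.broadcasted((g, b) -> g * (1 - b), G, t)
Broadcast.combine_eltypes(bc.f, bc.args)  # Float32 here; the report says Float64 for some combinations
```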