Type promotion error: Float32 * (1 - Bool) yields Float64 #484

Closed
jeremiahpslewis opened this issue Aug 5, 2023 · 6 comments
@jeremiahpslewis

I think the issue is with this function:

ElType = Broadcast.combine_eltypes(bc.f, bc.args)

For certain linear combinations of Float32 and Bool, it yields a Float64 type. MWE below:

using Flux
using Metal

G = Flux.gpu(Float32[1.0373293, 1.0380119])

t = Flux.gpu(Bool[true, false])
f = Flux.gpu(Float32[true, false])  # Float32 analogue of t

# NB: Q stands in for the model being differentiated; its definition is
# omitted in this report. The closures only capture G, t, and f.
Flux.gradient(Q) do m  # works
    sum(G .* (1 .- f))
end

Flux.gradient(Q) do m  # broken
    sum(G .* (1 .- t))
end

Yields the error:

julia> Flux.gradient(Q) do m
           sum(G .* (1 .- t))
       end
ERROR: Metal does not support Float64 values, try using Float32 instead
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] MtlVector{Float64, Metal.MTL.MTLResourceStorageModePrivate}(::UndefInitializer, dims::Tuple{Int64})
    @ Metal ~/.julia/packages/Metal/qeZqc/src/array.jl:45
  [3] (MtlVector{Float64})(::UndefInitializer, dims::Tuple{Int64})
    @ Metal ~/.julia/packages/Metal/qeZqc/src/array.jl:94
  [4] (MtlArray{Float64})(::UndefInitializer, dims::Tuple{Int64})
    @ Metal ~/.julia/packages/Metal/qeZqc/src/array.jl:106
  [5] similar(::Type{MtlArray{Float64}}, dims::Tuple{Int64})
    @ Base ./abstractarray.jl:874
  [6] similar(::Type{MtlArray{Float64}}, shape::Tuple{Base.OneTo{Int64}})
    @ Base ./abstractarray.jl:873
  [7] similar(bc::Base.Broadcast.Broadcasted{…}, ::Type{…})
    @ Metal ~/.julia/packages/Metal/qeZqc/src/broadcast.jl:11
  [8] copy
    @ GPUArrays ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:37 [inlined]
  [9] materialize
    @ GPUArrays ./broadcast.jl:903 [inlined]
 [10] map(::ChainRulesCore.ProjectTo{Float64, @NamedTuple{}}, ::MtlVector{Float32, Metal.MTL.MTLResourceStorageModePrivate})
    @ GPUArrays ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:84
 [11] (::ChainRulesCore.ProjectTo{…})(dx::MtlVector{…})
    @ ChainRulesCore ~/.julia/packages/ChainRulesCore/0t04l/src/projection.jl:236
 [12] _project
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/compiler/chainrules.jl:189 [inlined]
 [13] unbroadcast(x::MtlVector{…}, x̄::MtlVector{…})
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/lib/broadcast.jl:59
 [14] (::Zygote.var"#1197#1200"{MtlVector{}, MtlVector{}})(Δ::MtlVector{Float32, Metal.MTL.MTLResourceStorageModePrivate})
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/lib/broadcast.jl:93
 [15] #3798#back
    @ Zygote ~/.julia/packages/ZygoteRules/OgCVT/src/adjoint.jl:71 [inlined]
 [16] #17
    @ Zygote ./REPL[36]:2 [inlined]
 [17] (::Zygote.Pullback{Tuple{…}, Tuple{…}})(Δ::Float32)
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/compiler/interface2.jl:0
 [18] (::Zygote.var"#75#76"{Zygote.Pullback{Tuple{}, Tuple{}}})(Δ::Float32)
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/compiler/interface.jl:45
 [19] gradient(f::Function, args::Function)
    @ Zygote ~/.julia/packages/Zygote/JeHtr/src/compiler/interface.jl:97
 [20] top-level scope
    @ REPL[36]:1
 [21] top-level scope
    @ ~/.julia/packages/Metal/qeZqc/src/initialization.jl:51
Some type information was truncated. Use `show(err)` to see complete types.
Julia Version 1.10.0-beta1
Commit 6616549950e (2023-07-25 17:43 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin22.4.0)
  CPU: 10 × Apple M1 Pro
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, apple-m1)
  Threads: 7 on 8 virtual cores
jeremiahpslewis changed the title from "Type promotion error with Bool elements" to "Type promotion error: Float32 * (1 - Bool) yields Float64" on Aug 5, 2023
@maleadt
Member

maleadt commented Aug 8, 2023

> I think the issue is with this function:
>
> ElType = Broadcast.combine_eltypes(bc.f, bc.args)

That's just how broadcast works.

I'm not sure what the issue is here: it's expected that some broadcast operations will promote to Float64, and it's equally expected that some back-ends (like Metal) do not support Float64. If you can demonstrate that we're doing something different from Base's broadcast, then this is a bug. If not, the caller should take care not to perform a broadcast that promotes to Float64.
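
For reference, broadcast's result eltype follows Julia's scalar promotion rules. A quick CPU-only sanity check of what Base computes for the arguments in the MWE (stock Julia, no GPU packages involved):

using Base.Broadcast: combine_eltypes

# 1 .- Bool[...] promotes to Int64, because 1 - true isa Int64
combine_eltypes(-, (1, Bool[true, false]))      # Int64

# Float32 .* Int64 stays Float32, so Base's broadcast alone
# never requests Float64 for these arguments
combine_eltypes(*, (Float32[1.0f0], [1, 0]))    # Float32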

@jeremiahpslewis
Author

jeremiahpslewis commented Aug 8, 2023

@maleadt I don't really understand where the discrepancy comes from, but when I'm not using GPU arrays I get Float32 for the overall computation, while the Bool sub-expression yields an Int64.

G = Float32[1.0373293, 1.0380119]

t = Bool[true, false]
f = Float32[true, false]

sum(G .* (1 .- t))

julia> typeof(sum(G .* (1 .- t)))
Float32

julia> typeof(sum(1 .- t))
Int64
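
For reference, these results match Julia's scalar promotion rules:

typeof(1 - true)            # Int64: Bool promotes to Int under arithmetic
typeof(1.0f0 * (1 - true))  # Float32: Float32 * Int64 stays Float32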

@maleadt
Member

maleadt commented Aug 8, 2023

I don't see a discrepancy? The same computation with the JLArray reference back-end (GPUArrays' CPU-based test array type) also gives Float32:

julia> G = jl(G)
2-element JLArray{Float32, 1}:
 1.0373293
 1.0380119

julia> t = jl(t)
2-element JLArray{Bool, 1}:
 1
 0

julia> f = jl(f)
2-element JLArray{Float32, 1}:
 1.0
 0.0

julia> sum(G .* (1 .- t))
1.0380119f0

julia> typeof(ans)
Float32

@jeremiahpslewis
Author

But where is the Metal.jl Float64 type coming from then?

@jeremiahpslewis
Author

I came up with a more precise MWE; it turns out this is an issue solely related to Zygote.
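
Frames [10]-[13] of the stack trace above point at gradient projection: ChainRulesCore projects integer primals to Float64 tangents, so the Int64-eltype intermediate 1 .- t can request a Float64 array on the reverse pass even though the forward pass stays in Float32/Int. A minimal CPU sketch of that mechanism (a plausible reading of the trace, not a confirmed diagnosis):

using ChainRulesCore

# Integer primals get Float64 projectors:
ProjectTo(1)                 # ProjectTo{Float64}()

# For an Int array the element projector is also ProjectTo{Float64}, so
# projecting a Float32 cotangent materializes a Float64 array -- compare
# frame [10]: map(::ProjectTo{Float64}, ::MtlVector{Float32, ...})
ProjectTo([1, 0])(Float32[1.0f0, 1.0f0])   # 2-element Vector{Float64}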

@maleadt
Member

maleadt commented Aug 9, 2023

Thanks for looking into it!
