
RNN and LSTM break when using GPU #100

Closed
gabrevaya opened this issue Jul 23, 2022 · 1 comment · Fixed by #101
Labels: bug (Something isn't working)

Comments

@gabrevaya (Contributor)

Below you can find an MWE with RNNCell. The same error occurs with LSTMCell.

```julia
using Lux, Random, CUDA

rnn = RNNCell(2 => 8)                # cell with 2 input and 8 hidden features
rng = Random.default_rng()
Random.seed!(rng, 0)
ps, st = Lux.setup(rng, rnn) .|> gpu # move parameters and states to the GPU
x = rand(Float32, 2, 4, 10) |> gpu   # 2×4×10 input array on the GPU
rnn(view(x, :, 1, :), ps, st)        # feed one 2×10 slice to the cell
```
```
ERROR: ArgumentError: cannot take the CPU address of a CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}
Stacktrace:
  [1] unsafe_convert(#unused#::Type{Ptr{Float32}}, x::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer})
    @ CUDA ~/.julia/packages/CUDA/DfvRa/src/array.jl:319
  [2] gemm!(transA::Char, transB::Char, alpha::Float32, A::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, B::Matrix{Float32}, beta::Float32, C::Matrix{Float32})
    @ LinearAlgebra.BLAS /network/scratch/a/abrevayg/julia-1.8.0-rc3/share/julia/stdlib/v1.8/LinearAlgebra/src/blas.jl:1514
  [3] gemm_wrapper!(C::Matrix{Float32}, tA::Char, tB::Char, A::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, B::Matrix{Float32}, _add::LinearAlgebra.MulAddMul{true, true, Bool, Bool})
    @ LinearAlgebra /network/scratch/a/abrevayg/julia-1.8.0-rc3/share/julia/stdlib/v1.8/LinearAlgebra/src/matmul.jl:674
  [4] mul!
    @ /network/scratch/a/abrevayg/julia-1.8.0-rc3/share/julia/stdlib/v1.8/LinearAlgebra/src/matmul.jl:161 [inlined]
  [5] mul!
    @ /network/scratch/a/abrevayg/julia-1.8.0-rc3/share/julia/stdlib/v1.8/LinearAlgebra/src/matmul.jl:276 [inlined]
  [6] *
    @ /network/scratch/a/abrevayg/julia-1.8.0-rc3/share/julia/stdlib/v1.8/LinearAlgebra/src/matmul.jl:148 [inlined]
  [7] (::RNNCell{true, typeof(tanh), typeof(Lux.zeros32), typeof(Lux.glorot_uniform), typeof(Lux.ones32)})(::Tuple{SubArray{Float32, 2, CuArray{Float32, 3, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64, Base.Slice{Base.OneTo{Int64}}}, false}, Matrix{Float32}}, ps::NamedTuple{(:weight_ih, :weight_hh, :bias), Tuple{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}}}, st::NamedTuple{(:rng,), Tuple{Xoshiro}})
    @ Lux ~/.julia/packages/Lux/lEqCI/src/layers/recurrent.jl:81
  [8] (::RNNCell{true, typeof(tanh), typeof(Lux.zeros32), typeof(Lux.glorot_uniform), typeof(Lux.ones32)})(x::SubArray{Float32, 2, CuArray{Float32, 3, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64, Base.Slice{Base.OneTo{Int64}}}, false}, ps::NamedTuple{(:weight_ih, :weight_hh, :bias), Tuple{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}}}, st::NamedTuple{(:rng,), Tuple{Xoshiro}})
    @ Lux ~/.julia/packages/Lux/lEqCI/src/layers/recurrent.jl:76
  [9] top-level scope
    @ REPL[9]:1
 [10] top-level scope
    @ ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:52
```
```
(rnn_gpu_issue) pkg> st
Status `/network/scratch/a/abrevayg/rnn_gpu_issue/Project.toml`
  [052768ef] CUDA v3.12.0
  [b2108857] Lux v0.4.9

julia> VERSION
v"1.8.0-rc3"
```
avik-pal added the bug label on Jul 23, 2022
@avik-pal (Member)

I really need to set up GPU CI. But this is quite easy to fix; I will patch it soon.
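
One plausible shape for such a fix (a hypothetical sketch, not necessarily what the linked PR implements) is to allocate the initial hidden state via `similar` on the input, so it inherits the input's array type and device instead of always being a CPU `Matrix`:

```julia
# Hypothetical device-aware hidden-state initializer; `out_dims` stands in for
# the cell's output dimension and this is not the actual Lux internal API.
function init_hidden_state(x::AbstractMatrix, out_dims::Int)
    h = similar(x, out_dims, size(x, 2))  # same array type/device as x
    fill!(h, 0)                           # zero-initialize the carry
    return h
end
```

Because `similar` on a view of a `CuArray` allocates from the parent array's type, the hidden state should land on the GPU and the `weight_hh * h` product no longer mixes devices.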

avik-pal linked a pull request (#101) on Jul 23, 2022 that will close this issue