The following code is from a test in `test/cuda/curnn.jl` that used to pass, but I had to comment it out in #1258 since it no longer works when `CUDA.allowscalar(false)` is set:
```julia
julia> using Flux

julia> rnn = RNN(10, 5)
Recur(RNNCell(10, 5, tanh))

julia> curnn = rnn |> gpu
Recur(RNNCell(10, 5, tanh))

julia> batch_size = 5  # BATCH_SIZE=1 WORKS FINE!
5

julia> ohx = batch_size == 1 ?
           Flux.onehot(rand(1:10), 1:10) :
           Flux.onehotbatch(rand(1:10, batch_size), 1:10)
10×5 Flux.OneHotMatrix{Array{Flux.OneHotVector,1}}:
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  1  0
 0  0  0  0  0
 1  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  1
 0  1  1  0  0
```
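As an aside, the `OneHotMatrix` above behaves like a dense 0/1 matrix; a minimal CPU-only sketch (using the same hot indices as the sample output above) of the densifying conversion that the GPU path later attempts:

```julia
using Flux

# Build the same one-hot batch as in the transcript above, on the CPU.
ohx = Flux.onehotbatch([6, 10, 10, 4, 9], 1:10)  # 10×5 OneHotMatrix

# Materializing it as a dense Float32 matrix visits every entry with
# getindex — cheap on a CPU array, but scalar indexing on a CuArray.
dense = Float32.(ohx)
@assert size(dense) == (10, 5)
@assert all(sum(dense; dims=1) .== 1)  # exactly one hot entry per column
```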
```julia
julia> cuohx = gpu(ohx)
10×5 Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}:
┌ Warning: Performing scalar operations on GPU arrays: This is very slow, consider disallowing these operations with `allowscalar(false)`
└ @ GPUArrays ~/.julia/packages/GPUArrays/X4SqE/src/host/indexing.jl:43
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  1  0
 0  0  0  0  0
 1  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  1
 0  1  1  0  0
```
```julia
julia> y = (rnn(ohx); rnn(ohx))
5×5 Array{Float32,2}:
 -0.971044  -0.253226   -0.253226   -0.956147    -0.821044
 -0.973713  -0.528066   -0.528066   -0.968011    -0.71692
  0.646869  -0.263039   -0.263039   -0.00965881   0.349917
  0.307331   0.0366511   0.0366511   0.69206      0.184957
  0.161357  -0.415907   -0.415907    0.0734643    0.279407

julia> cuy = (curnn(cuohx); curnn(cuohx))
5×5 CuArray{Float32,2,Nothing}:
 -0.971044  -0.253226   -0.253226   -0.956147    -0.821045
 -0.973713  -0.528066   -0.528066   -0.968011    -0.71692
  0.646869  -0.263039   -0.263039   -0.00965875   0.349917
  0.307331   0.0366511   0.0366511   0.69206      0.184957
  0.161357  -0.415907   -0.415907    0.0734643    0.279407
```
```julia
julia> CUDA.allowscalar(false)

julia> cuy = (curnn(cuohx); curnn(cuohx))
ERROR: scalar getindex is disallowed
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] assertscalar(::String) at /home/carlo/.julia/packages/GPUArrays/X4SqE/src/host/indexing.jl:41
 [3] getindex(::CuArray{Flux.OneHotVector,1,Nothing}, ::Int64) at /home/carlo/.julia/packages/GPUArrays/X4SqE/src/host/indexing.jl:96
 [4] getindex at /home/carlo/.julia/dev/Flux/src/onehot.jl:23 [inlined]
 [5] _getindex at ./abstractarray.jl:1020 [inlined]
 [6] getindex at ./abstractarray.jl:980 [inlined]
 [7] copyto!(::Array{Float32,2}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}) at ./multidimensional.jl:962
 [8] Array at ./array.jl:541 [inlined]
 [9] Array at ./boot.jl:430 [inlined]
 [10] convert at ./array.jl:533 [inlined]
 [11] CuArray at /home/carlo/.julia/packages/CUDA/42B9G/src/array.jl:209 [inlined]
 [12] CuArray at /home/carlo/.julia/packages/CUDA/42B9G/src/array.jl:214 [inlined]
 [13] (::Flux.RNNCell{typeof(tanh),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::CuArray{Float32,2,Nothing}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}) at /home/carlo/.julia/dev/Flux/src/cuda/curnn.jl:43
 [14] (::Flux.Recur{Flux.RNNCell{typeof(tanh),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}}})(::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}) at /home/carlo/.julia/dev/Flux/src/layers/recurrent.jl:36
 [15] top-level scope at REPL[17]:1
 [16] eval(::Module, ::Any) at ./boot.jl:331
 [17] eval_user_input(::Any, ::REPL.REPLBackend) at /home/carlo/julia/julia-1.4.1/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
 [18] run_backend(::REPL.REPLBackend) at /home/carlo/.julia/packages/Revise/BqeJF/src/Revise.jl:1184
 [19] top-level scope at none:0
```
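Reading the trace, the failure originates in frames [8]–[13]: the `RNNCell` converts the `OneHotMatrix` (which still wraps a `CuArray` of `OneHotVector`s) to a dense `CuArray`, and that `convert` walks the matrix entry by entry with scalar `getindex`. A hedged workaround sketch (not the fix adopted in Flux, and assuming a working CUDA setup): densify the one-hot batch on the CPU before moving it to the GPU, so the cell only ever multiplies a plain `CuArray`.

```julia
using Flux, CUDA
CUDA.allowscalar(false)

rnn   = RNN(10, 5)
curnn = rnn |> gpu

ohx = Flux.onehotbatch(rand(1:10, 5), 1:10)

# Materialize the one-hot batch as a dense Float32 matrix on the CPU,
# then upload it. The RNN cell now sees a 10×5 CuArray{Float32,2} and
# never hits scalar getindex on a CuArray of OneHotVectors.
cux = gpu(Float32.(ohx))

cuy = curnn(cux)   # 5×5 CuArray{Float32,2}
```

This trades away the memory savings of the lazy one-hot representation, so it is only a stopgap for small batches.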
1261: reinstate curnn test r=CarloLucibello a=CarloLucibello

This is passing locally on my GPU, but in #1258 bors didn't seem to like it. Let's try again.

EDIT: actually this is not passing tests locally either, but let's see what bors says in any case.

cfr #1262

Co-authored-by: CarloLucibello <[email protected]>
1261: reinstate curnn test r=DhairyaLGandhi a=CarloLucibello

This is passing locally on my GPU, but in #1258 bors didn't seem to like it. Let's try again.

EDIT: actually this is not passing tests locally either, but let's see what bors says in any case.

cfr #1262

Co-authored-by: CarloLucibello <[email protected]>