
Trying ArrayFire #213

Open
ngphuoc opened this issue Dec 5, 2017 · 8 comments

ngphuoc commented Dec 5, 2017

I am trying the examples in the examples directory of Knet with ArrayFire.jl. So far I have housing.jl and mnist.jl working. For lenet.jl, I get the error below. Is convolution tied to KnetArray only?

ERROR: MethodError: no method matching conv4(::ArrayFire.AFArray{Float32,4}, ::ArrayFire.AFArray{Float32,4}; padding=0)
Closest candidates are:
  conv4(::Type{AutoGrad.Grad{1}}, ::Any, ::Any, ::AutoGrad.Rec{##1218}, ::AutoGrad.Rec{##1219}; o...) where {##1218, ##1219} at :0
  conv4(::Type{AutoGrad.Grad{2}}, ::Any, ::Any, ::AutoGrad.Rec{##1218}, ::AutoGrad.Rec{##1219}; o...) where {##1218, ##1219} at :0
  conv4(::##1218, ::AutoGrad.Rec{##1219}; o...) where {##1218, ##1219} at :0
  ...
Stacktrace:
 [1] predict(::Array{ArrayFire.AFArray{Float32,N} where N,1}, ::ArrayFire.AFArray{Float32,4}) at ./REPL[12]:4
 [2] ##core#687() at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:312
 [3] ##sample#688(::BenchmarkTools.Parameters) at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:318
 [4] #_run#12(::Bool, ::String, ::Array{Any,1}, ::Function, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:346
 [5] (::BenchmarkTools.#kw##_run)(::Array{Any,1}, ::BenchmarkTools.#_run, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at ./<missing>:0
 [6] anonymous at ./<missing>:?
 [7] #run_result#19(::Array{Any,1}, ::Function, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:40
 [8] (::BenchmarkTools.#kw##run_result)(::Array{Any,1}, ::BenchmarkTools.#run_result, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at ./<missing>:0
 [9] #run#21(::Array{Any,1}, ::Function, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:63
 [10] (::Base.#kw##run)(::Array{Any,1}, ::Base.#run, ::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}, ::BenchmarkTools.Parameters) at ./<missing>:0
 [11] warmup(::BenchmarkTools.Benchmark{Symbol("##benchmark#686")}) at /home/phuoc/.julia/v0.6/BenchmarkTools/src/execution.jl:96
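
The MethodError shows that conv4 has no method for AFArray inputs; it is only defined for the array types Knet knows about. For reference, a minimal (untested) workaround sketch would route AFArray convolution through Knet's CPU path, assuming conv4 accepts plain Arrays (the question about CPU convolution later in this thread suggests it does); the AFArray method below is hypothetical, not part of Knet:

using Knet, ArrayFire

# Hypothetical fallback: run Knet's CPU conv4 on host copies of the
# AFArray arguments and copy the result back to the device. Correct but
# slow; it only demonstrates where the missing method would go.
function Knet.conv4(w::AFArray{T,4}, x::AFArray{T,4}; o...) where T
    y = conv4(Array(w), Array(x); o...)  # dispatches to the Array method
    return AFArray(y)
end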

Below is the simplified version of lenet.jl I used:

include(Pkg.dir("Knet","data","mnist.jl"))
using Knet,BenchmarkTools,ArrayFire

atype = Array{Float32}
#= gtype = KnetArray =#
gtype = AFArray
epochs = 5
batchsize = 128
winit = 0.1f0
lr = 0.1f0
iters = 2
gcheck = 0

function predict(w,x)
  n=length(w)-4
  for i=1:2:n  # conv layers: conv4 -> bias -> relu -> pool
    x = pool(relu.(conv4(w[i],x;padding=0) .+ w[i+1]))
  end
  x = mat(x)  # flatten feature maps into a matrix for the dense layers
  for i=n+1:2:length(w)-2  # hidden dense layers
    x = relu.(w[i]*x .+ w[i+1])
  end
  return w[end-1]*x .+ w[end]  # output layer: unnormalized scores
end

loss(w,x,ygold) = nll(predict(w,x), ygold)

lossgradient = grad(loss)

function weights(;gtype=KnetArray)
  w = Array{Any}(8)
  w[1] = xavier(5,5,1,20)   # conv1 filters: 5x5, 1 -> 20 channels
  w[2] = zeros(1,1,20,1)    # conv1 bias
  w[3] = xavier(5,5,20,50)  # conv2 filters: 5x5, 20 -> 50 channels
  w[4] = zeros(1,1,50,1)    # conv2 bias
  w[5] = xavier(500,800)    # dense1: 800 (= 4*4*50 after pooling) -> 500
  w[6] = zeros(500,1)       # dense1 bias
  w[7] = xavier(10,500)     # output layer: 500 -> 10 classes
  w[8] = zeros(10,1)        # output bias
  return gtype.(atype.(w))  # convert to Float32 host arrays, then to the device type
end

xtrn,ytrn,xtst,ytst = Main.mnist()
xtrn,xtst = gtype.((xtrn,xtst))
dtrn = minibatch(xtrn, ytrn, batchsize; xtype=gtype)
dtst = minibatch(xtst, ytst, batchsize; xtype=gtype)
w = weights(gtype=gtype)
report(epoch)=println((:epoch,epoch,:trn,accuracy(w,dtrn,predict),:tst,accuracy(w,dtst,predict)))

x,y = first(dtrn)
@btime predict(w,x)
# KnetArray 147.533 μs (337 allocations: 15.92 KiB)

ngphuoc commented Dec 6, 2017

It seems to me that GPU convolution and recurrence are kernel-specific. ArrayFire.jl has its own convolution function. Other basic functions can get their gradients through AutoGrad.
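
For illustration, the pattern for giving such a function a gradient is the same @primitive macro that Knet uses for conv4 (see the last comment in this thread); every af_conv* name below is a hypothetical placeholder, not an actual ArrayFire.jl function:

using AutoGrad

# Placeholder wrappers for a device convolution and its two gradients.
af_conv(w,x; o...)     = error("wrap ArrayFire's convolution here")
af_convw(w,x,dy; o...) = error("gradient w.r.t. the filter w")
af_convx(w,x,dy; o...) = error("gradient w.r.t. the input x")

# Tell AutoGrad how to backpropagate through af_conv.
@primitive af_conv(w,x; o...),dy af_convw(w,x,dy;o...) af_convx(w,x,dy;o...)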

ngphuoc commented Dec 6, 2017

Will your support for CPU convolution and rnn apply to any array type, such as CuArray or AFArray?

denizyuret commented Dec 6, 2017 via email

ngphuoc commented Dec 6, 2017

Thanks a lot for your help. I am trying ArrayFire, so it would be great if you could help with conv4/pool methods for ArrayFire. I'd like to try it for rnn too, but if you don't have time, I may be able to follow your changes in conv4/pool and apply them to rnn.jl.

denizyuret commented

I will take a look. Also related are #129 and #150.

ngphuoc commented Dec 7, 2017

I did some tests following test/karray.jl. AFArray inherits from AbstractArray, but it seems some array operations are not supported yet: JuliaGPU/ArrayFire.jl#188
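
To show the kind of check involved, here is a minimal sketch in the spirit of test/karray.jl: compare each operation on an AFArray against the plain Array result, and catch the ones that are missing (the specific ops chosen here are arbitrary examples):

using ArrayFire

a = rand(Float32, 3, 4)
g = AFArray(a)

# Elementwise broadcast is generally supported.
@assert Array(g .+ 1f0) ≈ a .+ 1f0

# Indexing with a vector may be one of the unsupported operations;
# a MethodError here is the symptom reported in JuliaGPU/ArrayFire.jl#188.
try
    g[:, [1, 3]]
catch err
    println("indexing with a vector failed: ", err)
end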

denizyuret commented Dec 7, 2017 via email

ngphuoc commented Dec 9, 2017

I replaced all KnetArray with AFArray and added the following to the top of src/conv.jl. Then I rebuilt Knet; the build went OK.

using ArrayFire

# https://github.com/JuliaComputing/ArrayFire.jl/issues/189
# get_device_ptr, device_array, lock_device_ptr, unlock_device_ptr,

# Raw device pointers for Knet's low-level kernels
pointer{T}(a::AFArray{T}) = convert(Ptr{T}, get_device_ptr(a))
pointer{T}(a::AFArray{T}, i) = convert(Ptr{T}, get_device_ptr(a) + (i-1)*sizeof(T))

# Allocation helpers (round-trip through the host for simplicity)
Base.similar(x::AFArray, s::Tuple) = AFArray(similar(Array(x), s))
AFArray{T}(len::Integer) where T = AFArray(Array{T}(len))
...

When I ran this example:

using Knet,ArrayFire
atype = Array{Float32}
gtype = AFArray
#= gtype = KnetArray =#

w = xavier(5,5,1,20) |> atype |> gtype
x = rand(28,28,1,128) |> atype |> gtype
conv4(w,x;padding=0)

I got the error:

ERROR: conversion to pointer not defined for ArrayFire.AFArray{Float32,4}

If I add this line:

unsafe_convert{T}(::Type{Ptr{T}}, a::AFArray) = pointer(a)

it causes a segfault during the build, at this line:

@primitive conv4(w,x; o...),dy  conv4w(w,x,dy;o...)  conv4x(w,x,dy;o...) 
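
One plausible cause, assuming ArrayFire's usual locking semantics: get_device_ptr locks the underlying buffer, and the raw pointer is only valid while that lock is held, so a pointer handed out through unsafe_convert can outlive its buffer. Below is a hedged sketch of a safer pattern using the lock/unlock functions listed in the issue comment above; with_device_ptr is a hypothetical helper, not an ArrayFire.jl API:

# Hold the buffer lock for exactly the lifetime of the raw pointer.
function with_device_ptr(f, a::AFArray{T}) where T
    p = convert(Ptr{T}, get_device_ptr(a))  # locks the device buffer
    try
        return f(p)
    finally
        unlock_device_ptr(a)  # release it so ArrayFire can manage the memory again
    end
end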
