Gamma family function support #321
base: master
Conversation
@MikeInnes I need some help from you here. I cannot run tests locally and saw the error message below. What did I do wrong?

WARNING: could not import NNlib.spatial_dims into CUDNN
ERROR: LoadError: LoadError: LoadError: UndefVarError: DenseConvDims not defined
Stacktrace:
[1] top-level scope at none:0
[2] include at ./boot.jl:326 [inlined]
[3] include_relative(::Module, ::String) at ./loading.jl:1038
[4] include at ./sysimg.jl:29 [inlined]
[5] include(::String) at /home/kai/projects/CuArrays.jl/src/dnn/CUDNN.jl:1
[6] top-level scope at none:0
[7] include at ./boot.jl:326 [inlined]
[8] include_relative(::Module, ::String) at ./loading.jl:1038
[9] include at ./sysimg.jl:29 [inlined]
[10] include(::String) at /home/kai/projects/CuArrays.jl/src/CuArrays.jl:3
[11] top-level scope at none:0
[12] include at ./boot.jl:326 [inlined]
[13] include_relative(::Module, ::String) at ./loading.jl:1038
[14] include(::Module, ::String) at ./sysimg.jl:29
[15] top-level scope at none:2
[16] eval at ./boot.jl:328 [inlined]
[17] eval(::Expr) at ./client.jl:404
[18] top-level scope at ./none:3
in expression starting at /home/kai/projects/CuArrays.jl/src/dnn/libcudnn.jl:265
in expression starting at /home/kai/projects/CuArrays.jl/src/dnn/CUDNN.jl:35
in expression starting at /home/kai/projects/CuArrays.jl/src/CuArrays.jl:53
ERROR: LoadError: Failed to precompile CuArrays [3a865a2d-5b23-5a0f-bc46-62713ec82fae] to /home/kai/.julia/compiled/v1.1/CuArrays/7YFE0.ji.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1197
[3] _require(::Base.PkgId) at ./loading.jl:960
[4] require(::Base.PkgId) at ./loading.jl:858
[5] require(::Module, ::Symbol) at ./loading.jl:853
[6] include at ./boot.jl:326 [inlined]
[7] include_relative(::Module, ::String) at ./loading.jl:1038
[8] include(::Module, ::String) at ./sysimg.jl:29
[9] exec_options(::Base.JLOptions) at ./client.jl:267
[10] _start() at ./client.jl:436
in expression starting at /home/kai/projects/CuArrays.jl/test/special.jl:4

My package status:
You may just need to add NNlib master, and make sure CuArrays is up to date with master as well.
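For anyone hitting the same precompile error, the suggested setup would look roughly like this via the Pkg API (a sketch; the assumption is that you run it from your local CuArrays.jl checkout):

```julia
# Sketch: track NNlib master and develop the local CuArrays.jl checkout.
using Pkg
Pkg.add(PackageSpec(name = "NNlib", rev = "master"))  # NNlib's master branch
Pkg.develop(PackageSpec(path = pwd()))                # local CuArrays.jl clone
Pkg.update()                                          # refresh the manifest
```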
@MikeInnes This works locally now.
Thanks! Should be fine for now, but we'll probably be able to just reuse the upstream SpecialFunctions.jl definitions once we have JuliaGPU/CUDAnative.jl#334.

bors try

Build failed
Looking forward to JuliaGPU/CUDAnative.jl#334
It seems that some tests not related to this PR fail.
@maleadt Any idea what I should do for this PR?
bors try

Build failed
Looks like you need to update the manifest to include
Force-pushed from fc487fd to fced436
Bump :)
The next version of CUDA.jl should have some form of method substitution capabilities, so I'd rather wait and see if these reimplementations are still required then.
Are you referring to JuliaGPU/CUDAnative.jl#334, which doesn't seem quite "active"? Or is there somewhere else I can follow the progress? If there's not a PR that is essentially a "done deal", adding this and just bumping the minor version would be super nice (and I'd be happy to help!). Support for SpecialFunctions.jl is really a big bottleneck to getting Bayesian inference + GPU, and for someone not familiar with CUDA.jl it can be really confusing to figure out what's going wrong (I started out thinking I needed to define custom adjoints that worked on the GPU for all these methods). Of course at this point I know how to fix it, but it took me reading up on Julia's broadcasting mechanism and familiarizing myself quite a bit with CUDA.jl before getting here. On the bright side, I now understand Julia's broadcasting mechanism better 🙃
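For anyone else who ends up here: the fix alluded to above is to register a GPU-friendly replacement with CuArrays' broadcast machinery. A minimal sketch, assuming CuArrays' cufunc substitution hook and CUDAnative's libdevice wrappers:

```julia
using CuArrays, CUDAnative
import SpecialFunctions

# During broadcast over CuArrays, functions are swapped via `cufunc(f)`;
# registering a method here routes SpecialFunctions.lgamma to the
# libdevice implementation wrapped by CUDAnative.lgamma.
CuArrays.cufunc(::typeof(SpecialFunctions.lgamma)) = CUDAnative.lgamma

xs = cu(rand(Float32, 4) .+ 1f0)
SpecialFunctions.lgamma.(xs)  # now compiles for and runs on the GPU
```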
ψ += CUDAnative.log(x) - 0.5 * t
t *= t # 1/z^2
# the coefficients here are Float64(bernoulli[2:9] .// (2*(1:8)))
ψ -= t * @evalpoly(t, 0.08333333333333333, -0.008333333333333333, 0.003968253968253968, -0.004166666666666667, 0.007575757575757576, -0.021092796092796094, 0.08333333333333333, -0.4432598039215686)
Btw, this will convert into Float64; maybe it ought to be dependent on the input or use Float32 by default?
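One way to address this (a sketch, not part of the PR; the helper name is hypothetical): lift the coefficients to the input's float type so a Float32 input stays in Float32 throughout:

```julia
# Hypothetical type-generic version of the tail correction above; T(...)
# converts each Bernoulli-derived coefficient to the working precision.
function digamma_tail(t::T) where {T<:AbstractFloat}
    return t * @evalpoly(t,
        T(0.08333333333333333), T(-0.008333333333333333),
        T(0.003968253968253968), T(-0.004166666666666667),
        T(0.007575757575757576), T(-0.021092796092796094),
        T(0.08333333333333333), T(-0.4432598039215686))
end
```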
@@ -31,6 +31,7 @@ include("array.jl")
 include("subarray.jl")
 include("utils.jl")
 include("indexing.jl")
+include("special/gamma.jl")
Just make it special.jl, no need for directories with single source files.
This is now happening in JuliaGPU/GPUCompiler.jl#122.
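For context, the substitution mechanism that eventually shipped in CUDA.jl lets packages overlay device-side methods. A hedged sketch of the pattern (it mirrors how CUDA.jl defines its own intrinsics, but the macro's availability and exact usage depend on the version):

```julia
using CUDA
import SpecialFunctions

# Overlay a device-side method: GPU kernels see this definition instead of
# the host one; __nv_lgamma is the libdevice log-gamma primitive.
CUDA.@device_override SpecialFunctions.loggamma(x::Float64) =
    ccall("extern __nv_lgamma", llvmcall, Cdouble, (Cdouble,), x)
```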
Sure, but I don't plan to create a CuArrays.jl release. If you want to port this to CUDA.jl, I'm happy to create a minor release containing it though.
Thanks for bumping this up @torfjelde.
So just an update: these methods are just a small piece of the pie that we want, so I'm instead trying to improve the
Porting code from CuGammaFuns.jl; see xukai92/CuGammaFuns.jl#1
Currently I create a special folder and put gamma.jl inside, following the forwarddiff.jl approach for broadcast. Open to any changes!
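As a usage sketch (assuming the broadcast hooks this PR adds; lgamma was the SpecialFunctions name in this era), the ported gamma-family functions would then broadcast over CuArrays directly:

```julia
using CuArrays, SpecialFunctions

xs = cu(rand(Float32, 1024) .+ 1f0)  # positive inputs for the gamma family
ys = digamma.(xs)                    # runs on the GPU via the ported kernels
zs = lgamma.(xs)
```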