Added GMMConv #147
Conversation
Added GMMConv from the paper: Geometric deep learning on graphs and manifolds using mixture model CNNs
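For context, a minimal sketch of the kernel weighting the paper describes (all names and shapes here are illustrative assumptions, not the PR's actual implementation): GMMConv scores each edge's pseudo-coordinate feature against K learnable Gaussian kernels with means `mu` and inverse widths `sigma_inv`, and uses the resulting weights to mix the filtered neighbor features.

```julia
# Hypothetical sketch of the GMMConv kernel weights (Monti et al., 2017);
# names and shapes are assumptions, not the PR's actual code.
# e         :: edge feature (pseudo-coordinates), length ein
# mu        :: ein × K matrix of kernel means
# sigma_inv :: ein × K matrix of inverse standard deviations
function kernel_weights(e, mu, sigma_inv)
    K = size(mu, 2)
    # w_k = exp(-1/2 * Σ_i (σ⁻¹[i,k] * (e[i] - μ[i,k]))^2); dividing by the
    # integer literal 2 keeps Float32 inputs from promoting to Float64
    [exp(-sum(abs2, sigma_inv[:, k] .* (e .- mu[:, k])) / 2) for k in 1:K]
end

e = Float32[0.1, 0.2]
w = kernel_weights(e, zeros(Float32, 2, 3), ones(Float32, 2, 3))
# 3 weights, each in (0, 1], all Float32
```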
Tests are missing.
Codecov Report

```
@@            Coverage Diff             @@
##           master     #147      +/-   ##
==========================================
- Coverage   85.61%   85.52%   -0.09%
==========================================
  Files          15       15
  Lines        1251     1285      +34
==========================================
+ Hits         1071     1099      +28
- Misses        180      186       +6
```

Continue to review full report at Codecov.
Co-authored-by: Carlo Lucibello <[email protected]>
learnable param in doc, num_edge in the last dim (remove permutedims)
Added the changes.
Still need to add the test.
Test added as well.
A first test failure could be solved by adding
added changes for test
I understand error 1: even when my input is Float32, my output is still Float64; I will look into why this is happening. I could not understand the second test failure.
src/layers/conv.jl (Outdated)

```julia
(nin, ein), out = ch
mu = init(ein, K)
sigma_inv = init(ein, K)
b = bias ? Flux.create_bias(ones(out), true) : false
```
Suggested change:

```diff
- b = bias ? Flux.create_bias(ones(out), true) : false
+ b = bias ? Flux.create_bias(mu, true, out) : false
```

This should fix the Float64 issue.
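The reason this works, as far as I understand `Flux.create_bias`: it takes a reference array and creates a zero bias with the same element type, so passing `mu` (Float32) instead of `ones(out)` (Float64) keeps the bias in Float32. A small check (the shapes here are illustrative):

```julia
using Flux

mu = rand(Float32, 2, 4)           # the layer's Float32 parameters
b = Flux.create_bias(mu, true, 3)  # zero bias matching mu's eltype
# eltype(b) == Float32; passing `false` instead of `true` returns `false`
```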
The problem is here. Both `w` and `mu` are Float32 before this point, but the result becomes Float64 here. How exactly does `@.` work? `w = @. -0.5 * (w - mu)^2`
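To the `@.` question: the macro simply dots every call and operator in the expression, so this line is equivalent to `(-0.5) .* (w .- mu) .^ 2`. The promotion comes from the literal `0.5`, which is a Float64; broadcasting a Float64 scalar against Float32 arrays yields a Float64 array. A minimal reproduction:

```julia
w  = rand(Float32, 2, 3)
mu = rand(Float32, 2, 3)

out = @. -0.5 * (w - mu)^2  # 0.5 is a Float64 literal...
eltype(out)                 # ...so the whole result promotes to Float64
```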
I don't know if this is the most elegant solution, but I had to change the code at two points:

`w = @. -0.5 * (w - mu)^2` becomes `w = @. -((w - mu)^2) / 2`

`m = 1 / d .* m` becomes `m = m ./ reshape(d, (1, g.num_nodes))`

The problem is that when the in-degree vector, say `d = [1, 2, 1]`, goes through `1 / d`, the integers promote to Float64 (depending on the system). The same presumably applies to the Float64 literal `0.5`, whereas dividing by the integer literal `2` keeps Float32.
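A quick check of both rewrites on a toy 3-node example (in place of the real graph): dividing a Float32 array by an integer, whether the literal `2` or an integer degree vector, stays in Float32, while `1 ./ d` on integers produces Float64 and promotes everything it touches.

```julia
w  = rand(Float32, 2, 3)
mu = rand(Float32, 2, 3)
m  = rand(Float32, 3, 3)  # messages: features × num_nodes
d  = [1, 2, 1]            # integer in-degrees

a = @. -((w - mu)^2) / 2     # Float32 / Int  -> Float32
b = m ./ reshape(d, (1, 3))  # Float32 ./ Int -> Float32
c = (1 ./ d) .* m            # Int / Int -> Float64, promoting m

eltype(a), eltype(b), eltype(c)  # (Float32, Float32, Float64)
```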
Me neither. It seems to be a Zygote error caused by an unsupported try/catch. But we already have

```julia
l = GMMConv((2, 2) => 2)
g = rand_graph(5, 10, ndata=rand(Float32, 2, 5), edata=rand(Float32, 2, 10))
gradient(() -> sum(l(g, g.ndata.x, g.edata.e)), Flux.params(l))
```
It is probably related to the string-interpolation error in `gradient`. But changing this
changed the aggr method to mean and got rid of dividing by in-degrees (creating NaN)
The NaNs possibly come from dividing by the in-degree; instead I changed the aggregation method to mean, which should get rid of the NaN values.
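The NaN source can be reproduced in isolation (a toy 3-node example, not the layer's actual code): an isolated node has in-degree 0, so dividing its zero message sum by its degree gives 0/0 = NaN, which a mean aggregator that handles empty neighborhoods avoids.

```julia
d = [0, 2, 1]                  # in-degrees; node 1 is isolated
m = Float32[0 6 2]             # summed messages, one column per node

bad = m ./ reshape(d, (1, 3))  # 0/0 produces NaN for the isolated node
# bad ≈ [NaN 3.0 2.0]
```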
σ only printed if different from identity, same with residual
Great, impressive work, thanks!
Thanks a lot for all the help! I will try some other conv layer this weekend. :) And then will look into temporal GNNs.