[GNNLux] Adding NNConv Layer #478
Conversation
If some of the args are not needed for NNConv, please let me know.

Writing tests.

The dimension mismatch is in the GNNlib implementation of nn_conv, in the return statement. Any suggestions or ideas as to why this could be happening?
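For context, the GNNlib forward pass under discussion looks roughly like the sketch below (paraphrased from memory, not the exact source; `nn_conv_sketch` is a made-up name). The inner network `nn` maps each edge's feature vector to a per-edge (out, in) matrix applied to the neighbor features, and the return statement then mixes in the layer's own weight and bias:

function nn_conv_sketch(l, g, x, e)
    m = propagate(g, l.aggr; xj = x, e = e) do xi, xj, e
        nin, nedges = size(xj)
        W = reshape(l.nn(e), :, nin, nedges)          # one weight matrix per edge
        Xj = reshape(xj, nin, 1, nedges)
        reshape(NNlib.batched_mul(W, Xj), :, nedges)  # per-edge messages
    end
    # the return statement in question: l.weight must be (out, in)
    # for l.weight * x to be well defined
    return l.σ.(l.weight * x .+ m .+ l.bias)
end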
This code works fine:

using GraphNeuralNetworks, Flux

n_in = 3
n_in_edge = 10
n_out = 5
s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g = GNNGraph(s, t)
nn = Dense(n_in_edge => n_out * n_in)
l = NNConv(n_in => n_out, nn, tanh, bias = true, aggr = +)
x = randn(Float32, n_in, g.num_nodes)
e = randn(Float32, n_in_edge, g.num_edges)
y = l(g, x, e)

Try to run the corresponding GNNLux version and see if you get an error; we'll try to debug from there.
Any progress here?
Yes, I got the same error.
Can you paste the code that errors?
GNNLux/test/layers/conv_tests.jl (Outdated)

@testset "NNConv" begin
    n_in = 3
    n_in_edge = 10
    n_out = 5

    s = [1, 1, 2, 3]
    t = [2, 3, 1, 1]
    g2 = GNNGraph(s, t)

    nn = Dense(n_in_edge => n_out * n_in)
    l = NNConv(n_in => n_out, nn, tanh, aggr = +)
    x = randn(Float32, n_in, g2.num_nodes)
    e = randn(Float32, n_in_edge, g2.num_edges)
    # y = l(g2, x, e)  # just to see if it runs without an error
    test_lux_layer(rng, l, g2, x, sizey = (n_out, g2.num_nodes), container = true, edge_weight = e)
end
> Can you paste the code that errors?
I used the code that you gave for the tests (in the test file), just without the bias, but that shouldn't cause an issue since the issue is with the multiplication. It errors while testing.
As I said, can you translate this
using GraphNeuralNetworks, Flux
n_in = 3
n_in_edge = 10
n_out = 5
s = [1,1,2,3]
t = [2,3,1,1]
g = GNNGraph(s, t)
nn = Dense(n_in_edge => n_out * n_in)
l = NNConv(n_in => n_out, nn, tanh, bias = true, aggr = +)
x = randn(Float32, n_in, g.num_nodes)
e = randn(Float32, n_in_edge, g.num_edges)
y = l(g, x, e)
to the corresponding GNNLux code and see what happens?
I tried to run it locally today, but it seems an update to some package leads to this error:

ERROR: LoadError: Failed to precompile MLDataDevices

while importing GNNlib (with the usual setup for the repo: activate, instantiate, etc.).
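For reference, the setup I mean is roughly this (a sketch; the exact environment path is an assumption about the repo layout):

using Pkg
Pkg.activate("GNNLux")   # activate the subpackage environment (path assumed)
Pkg.instantiate()        # install the pinned dependencies
using GNNlib             # MLDataDevices fails to precompile here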
Could you check once as well?
> Try to run the corresponding GNNLux version
n_in = 3
n_in_edge = 10
n_out = 5
s = [1,1,2,3]
t = [2,3,1,1]
g2 = GNNGraph(s, t)
nn = Dense(n_in_edge => n_out * n_in)
l = NNConv(n_in => n_out, nn, tanh, aggr = +)
x = randn(Float32, n_in, g2.num_nodes)
e = randn(Float32, n_in_edge, g2.num_edges)
ps = LuxCore.initialparameters(rng, l)
st = LuxCore.initialstates(rng, l)
y = l(g2, x, e, ps, st)
I added this to the test file; there we can see what errors.
error:
DimensionMismatch: A has dimensions (15,10) but B has dimensions (3,3)
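For intuition: (15, 10) is exactly the shape of the Dense edge network's weight matrix (n_out * n_in = 15, n_in_edge = 10), and (3, 3) is the shape of x (n_in, num_nodes), so it looks like the inner nn's weight is being multiplied with the node features where the layer's own (out, in) weight should be. A minimal reproduction of just the shape clash (hypothetical, with shapes taken from the numbers above):

A = randn(Float32, 15, 10)  # same shape as the Dense nn's weight
B = randn(Float32, 3, 3)    # same shape as x
A * B                       # throws the same DimensionMismatch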
Any suggestions?
I'm getting the same dimension error.

The problem is in the layer definition. In order to fix the layer, you need to do the following:
@concrete struct NNConv <: GNNContainerLayer{(:nn,)}
    nn <: AbstractLuxLayer
    aggr
    in_dims::Int
    out_dims::Int
    use_bias::Bool
    init_weight
    init_bias
    σ
end

function NNConv(ch::Pair{Int, Int}, nn, σ = identity;
                aggr = +,
                init_bias = zeros32,
                use_bias::Bool = true,
                init_weight = glorot_uniform)
    in_dims, out_dims = ch
    σ = NNlib.fast_act(σ)
    return NNConv(nn, aggr, in_dims, out_dims, use_bias, init_weight, init_bias, σ)
end

function LuxCore.initialparameters(rng::AbstractRNG, l::NNConv)
    # the layer owns its own (out_dims, in_dims) weight,
    # separate from the inner nn's parameters
    weight = l.init_weight(rng, l.out_dims, l.in_dims)
    ps = (; nn = LuxCore.initialparameters(rng, l.nn), weight)
    if l.use_bias
        ps = (; ps..., bias = l.init_bias(rng, l.out_dims))
    end
    return ps
end

function LuxCore.initialstates(rng::AbstractRNG, l::NNConv)
    return (; nn = LuxCore.initialstates(rng, l.nn))
end

function LuxCore.parameterlength(l::NNConv)
    n = parameterlength(l.nn) + l.in_dims * l.out_dims
    if l.use_bias
        n += l.out_dims
    end
    return n
end

LuxCore.statelength(l::NNConv) = statelength(l.nn)

function (l::NNConv)(g, x, e, ps, st)
    # wrap the inner nn so GNNlib.nn_conv can call it with its
    # parameters and state attached
    nn = StatefulLuxLayer{true}(l.nn, ps.nn, st.nn)
    m = (; nn, l.aggr, ps.weight, bias = _getbias(ps), l.σ)
    y = GNNlib.nn_conv(m, g, x, e)
    stnew = _getstate(nn)
    return y, stnew
end
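With those definitions in place, the earlier snippet should run. A minimal usage sketch (assuming the definitions above are in scope; the import lines are my assumption about the setup):

using GNNLux, Lux, LuxCore, Random

rng = Random.default_rng()
s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g2 = GNNGraph(s, t)
nn = Dense(10 => 5 * 3)                  # edge features (10) -> out * in (15)
l = NNConv(3 => 5, nn, tanh, aggr = +)
x = randn(Float32, 3, g2.num_nodes)
e = randn(Float32, 10, g2.num_edges)
ps = LuxCore.initialparameters(rng, l)
st = LuxCore.initialstates(rng, l)
y, st = l(g2, x, e, ps, st)              # size(y) == (5, g2.num_nodes)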
Since there are so many commits in this branch, a merge or rebase could be hard. Close this and open a new PR if needed.

Rebasing and making a new PR.
Adding conv layer according to #461