Thanks for such a wonderful project! I'm having some trouble following the idea behind the GCNConv layer code.

Allow me to begin by presenting my understanding of GCN. From the GCN paper [Semi-supervised Classification with Graph Convolutional Networks](https://arxiv.org/abs/1609.02907), the computation performed by a GCN layer is simply:
$$H^{(l+1)} = \sigma\left(\hat{D}^{-1/2}\,\hat{A}\,\hat{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right)$$

where $\hat{A} = A + I$ is the adjacency matrix with self-loops added, $\hat{D}$ is its degree matrix, and $H^{(l)}$ is the output of layer $l$. The normalized adjacency $\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$ is fixed for a given graph, so it could be calculated once in a pre-processing step.
Intuitively, a GCNConv could be built with the following (pseudo) code:
```julia
using LinearAlgebra

function pre_processing(g)
    A_hat = adj_matrix(g) + I                       # I is an identity matrix, for node self-connections
    d_hat = degree_matrix(A_hat)                    # diagonal degree matrix D̂ of A_hat
    d_inv_sqrt = Diagonal(1 ./ sqrt.(diag(d_hat)))  # D̂^{-1/2}
    return d_inv_sqrt * A_hat * d_inv_sqrt          # normalized adjacency, computed once per graph
end

struct GCNConv
    w
    A
end

# constructor
function GCNConv(g, in, out)
    A = pre_processing(g)
    w = randn(out, in)
    GCNConv(w, A)
end

# layer forward pass; x has size (in, num_nodes), sigma is the activation
function (l::GCNConv)(x)
    sigma.(l.w * x * l.A)
end
```
Then we can define and apply a GCN layer as follows:
```julia
# given a graph g and a node feature array x
l = GCNConv(g, in, out)
l(x)       # one-layer GCN
l(l(x))    # two-layer GCN
```
The above is my very personal understanding, and maybe it reflects some preconceived notions, but I find the GraphNeuralNetworks GCNConv design a bit difficult to follow:
In the GraphNeuralNetworks GCNConv layer, there seems to be no pre-processing step for something like $\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$. So it seems that each time we call a GCNConv layer, it repeats these calculation steps; during model training hundreds of thousands of forward passes are needed, so will this slow down training?
Also, looking at the GCNConv forward code, the degree normalization is applied with

```julia
x = x .* c'
```

(in lines 102 and 110), so it seems it performs the $\hat{D}^{-1/2}$ scaling on the fly, before and after the neighborhood aggregation, at every call:

https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/afd80e8024abb270db79cd8592376ba60bc63f60/src/layers/conv.jl#L100-L110
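As a quick sanity check of my reading (a toy example I wrote, not code from this repo): scaling the feature columns by $c = \mathrm{diag}(\hat{D}^{-1/2})$ before and after the aggregation is the same as multiplying by $\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$:

```julia
using LinearAlgebra

A = [0 1 1; 1 0 0; 1 0 0] + I            # toy symmetric adjacency with self-loops
c = 1 ./ sqrt.(vec(sum(A, dims=2)))      # c = diag(D̂^{-1/2})
x = randn(4, 3)                          # 4 features, 3 nodes

scaled = ((x .* c') * A) .* c'           # scale columns, aggregate, scale again
direct = x * (Diagonal(c) * A * Diagonal(c))  # x * D̂^{-1/2} Â D̂^{-1/2}
@assert scaled ≈ direct
```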
Any advice would be appreciated, and thank you for your patience.
Hi @5tinzi, your implementation is actually correct and functionally equivalent to the one in this repo.

The one in this repo is complicated by the fact that you don't want to materialize large dense matrices but instead perform gather/scatter operations or sparse matrix multiplications, and you also want to support edge weights when they are present.
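For intuition, here is a minimal sketch of the linear propagation (activation omitted) in gather/scatter style, assuming the graph is given as source/destination edge lists with self-loops already added and both edge directions present. This is an illustration only, not the actual implementation in this repo, which also handles edge weights, sparse types, and GPU kernels:

```julia
# x: (in, num_nodes) features, w: (out, in) weights,
# src/dst: vectors of node indices, one entry per edge
function gcn_scatter(x, w, src, dst, num_nodes)
    deg = zeros(num_nodes)
    for j in dst
        deg[j] += 1                  # in-degree; equals out-degree for undirected graphs
    end
    c = 1 ./ sqrt.(deg)              # diag(D̂^{-1/2})

    h = w * (x .* c')                # first D̂^{-1/2} scaling, then the dense weight
    out = zeros(size(h, 1), num_nodes)
    for (s, d) in zip(src, dst)
        out[:, d] .+= h[:, s]        # gather neighbor features, scatter-add to destination
    end
    return out .* c'                 # second D̂^{-1/2} scaling
end
```

Nothing here ever builds the $n \times n$ matrix $\hat{A}$; memory and time scale with the number of edges rather than $n^2$, which is what makes this approach practical for large sparse graphs.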