Currently, the GCNConv implementation computes a dense Laplacian matrix for the graph on every forward pass. This doesn't scale well for large graphs.
The layer should instead implement neighborhood aggregation and become a MessagePassing layer.
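For reference, a minimal sketch of what a message-passing formulation could look like, assuming a PyTorch-Geometric-style `MessagePassing` base class (the class name `GCNConvMP` and the exact normalization details are illustrative, not the current implementation):

```python
import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree

class GCNConvMP(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')  # sum aggregation over incoming messages
        self.lin = torch.nn.Linear(in_channels, out_channels, bias=False)

    def forward(self, x, edge_index):
        # Add self-loops so each node also aggregates its own features.
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
        x = self.lin(x)
        # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, computed per edge.
        row, col = edge_index
        deg = degree(col, x.size(0), dtype=x.dtype)
        deg_inv_sqrt = deg.pow(-0.5)
        deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0
        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]
        return self.propagate(edge_index, x=x, norm=norm)

    def message(self, x_j, norm):
        # Scale each neighbor's features by the edge normalization weight.
        return norm.view(-1, 1) * x_j
```

This avoids ever materializing an N x N matrix; memory scales with the number of edges instead.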
In the forward pass of the GCNConv layer, what is required is an algebraic computation, not index-based neighborhood aggregation via a MessagePassing implementation.
To be honest, the message-passing scheme is not a good fit for this kind of GNN layer, and the PyTorch Geometric implementation gains no computational efficiency from it. PyTorch Geometric simply forces the GCNConv layer into the message-passing scheme, which is not required at all.
Not only GCNConv but also ChebConv layers rely on algebraic computation, not on indexing neighbors.
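To make the algebraic view concrete, here is a rough sketch of a ChebConv-style forward pass built purely from (sparse) matrix products, assuming a precomputed scaled Laplacian `L_tilde` and a list of at least two weight matrices; the function and argument names are illustrative only:

```python
import torch

def cheb_forward(L_tilde, x, weights):
    # L_tilde: sparse scaled Laplacian (2L/lambda_max - I), shape [N, N]
    # x: node features [N, F_in]; weights: list of K dense [F_in, F_out] matrices (K >= 2)
    Tx_prev, Tx_curr = x, torch.sparse.mm(L_tilde, x)      # T_0(L~)X and T_1(L~)X
    out = Tx_prev @ weights[0] + Tx_curr @ weights[1]
    for theta in weights[2:]:
        # Chebyshev recurrence: T_k(L~)X = 2 * L~ @ T_{k-1}(L~)X - T_{k-2}(L~)X
        Tx_next = 2 * torch.sparse.mm(L_tilde, Tx_curr) - Tx_prev
        out = out + Tx_next @ theta
        Tx_prev, Tx_curr = Tx_curr, Tx_next
    return out
```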
Instead, with scaling to large graphs in mind, sparse array support should be considered: a sparse adjacency matrix should be accepted as the graph representation, and sparse computation on both CPU and GPU should be supported. For extremely large graphs, distributed computing is worth considering; for example, Alibaba has developed a distributed graph deep learning framework for recommendation systems.
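As a rough illustration of the sparse path, a GCN-style propagation can be expressed with a `torch.sparse_coo_tensor` adjacency and `torch.sparse.mm`; this is only a sketch of the idea under those assumptions, and the helper names below are made up:

```python
import torch

def normalized_adj(edge_index, num_nodes):
    # Build D^{-1/2} (A + I) D^{-1/2} as a sparse COO tensor.
    row, col = edge_index
    loop = torch.arange(num_nodes)
    row = torch.cat([row, loop])
    col = torch.cat([col, loop])
    val = torch.ones(row.numel())
    deg = torch.zeros(num_nodes).scatter_add_(0, row, val)
    dinv = deg.pow(-0.5)
    dinv[torch.isinf(dinv)] = 0
    val = dinv[row] * val * dinv[col]
    return torch.sparse_coo_tensor(torch.stack([row, col]), val,
                                   (num_nodes, num_nodes)).coalesce()

def gcn_forward(adj_sparse, x, weight):
    # Sparse-dense matmul; runs on CPU or GPU depending on where the tensors live.
    return torch.sparse.mm(adj_sparse, x @ weight)
```

The normalized adjacency can be built once and reused across forward passes, so no dense N x N matrix is ever formed.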