
GeometricFlux cora comparison #82

Merged
merged 1 commit into master on Dec 18, 2021
Conversation


@CarloLucibello CarloLucibello commented Dec 18, 2021

This PR ports the script
https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/examples/node_classification_cora.jl
to GeometricFlux.jl; it can be useful for quick performance comparisons.
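For reference, the GNN.jl script being ported boils down to a standard two-layer GCN trained full-batch on Cora. A minimal sketch, assuming the GraphNeuralNetworks.jl API (`GNNChain`, `GCNConv`), with `g`, `y`, and `train_mask` standing in for the Cora graph, one-hot labels, and training mask that the script loads:

```julia
using Flux, GraphNeuralNetworks
using Flux.Losses: logitcrossentropy

# Sketch only: `g`, `y`, `train_mask` come from the Cora loader in the script.
# Cora has 1433 input features and 7 classes.
model = GNNChain(GCNConv(1433 => 16, relu),
                 Dropout(0.5),
                 GCNConv(16 => 7))

opt = ADAM(1f-2)
ps = Flux.params(model)
for epoch in 1:100
    gs = gradient(ps) do
        ŷ = model(g, g.ndata.x)                           # full-batch forward pass
        logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])
    end
    Flux.Optimise.update!(opt, ps, gs)
end
```

The GeometricFlux.jl port replaces `GNNGraph` with `FeaturedGraph` and the corresponding `GCNConv` layers; the training loop itself is unchanged.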

GPU support still seems to be broken in GeometricFlux: running this script with train(usecuda=true) hangs forever. The same happens with
https://github.com/FluxML/GeometricFlux.jl/blob/master/examples/gcn.jl
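For context on the `usecuda` flag: both scripts pick the device up front and move the model and graph there before training. A hedged sketch of the pattern (the helper name is illustrative, not from either script):

```julia
using Flux, CUDA

# Return the device-transfer function, falling back to CPU
# when CUDA is requested but not functional.
function pick_device(usecuda::Bool)
    if usecuda && CUDA.functional()
        @info "Training on GPU"
        return Flux.gpu
    else
        @info "Training on CPU"
        return Flux.cpu
    end
end

# device = pick_device(true)
# model, g = device(model), device(g)
```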

GraphNeuralNetworks.jl vs GeometricFlux.jl

Timings on my laptop, after some warmup.

On CPU, GraphNeuralNetworks.jl is about 4 times faster than GeometricFlux.jl (6.0 s vs 24.0 s).

GNN.jl CPU

julia> @time train(usecuda=false)
[ Info: Training on CPU
┌ Info: GNNGraph:
│     num_nodes = 2708
└     num_edges = 10556
Epoch: 0   Train: (loss = 1.9487f0, acc = 13.57)   Test: (loss = 1.9464f0, acc = 13.6)
Epoch: 10   Train: (loss = 1.6136f0, acc = 90.0)   Test: (loss = 1.7671f0, acc = 66.9)
Epoch: 20   Train: (loss = 0.9821f0, acc = 97.14)   Test: (loss = 1.3838f0, acc = 77.1)
Epoch: 30   Train: (loss = 0.3707f0, acc = 98.57)   Test: (loss = 0.9097f0, acc = 80.5)
Epoch: 40   Train: (loss = 0.1076f0, acc = 100.0)   Test: (loss = 0.6656f0, acc = 80.6)
Epoch: 50   Train: (loss = 0.034f0, acc = 100.0)   Test: (loss = 0.6272f0, acc = 79.1)
Epoch: 60   Train: (loss = 0.0144f0, acc = 100.0)   Test: (loss = 0.6323f0, acc = 78.7)
Epoch: 70   Train: (loss = 0.0077f0, acc = 100.0)   Test: (loss = 0.632f0, acc = 79.8)
Epoch: 80   Train: (loss = 0.0052f0, acc = 100.0)   Test: (loss = 0.6573f0, acc = 79.1)
Epoch: 90   Train: (loss = 0.0038f0, acc = 100.0)   Test: (loss = 0.6693f0, acc = 79.1)
Epoch: 100   Train: (loss = 0.0029f0, acc = 100.0)   Test: (loss = 0.678f0, acc = 79.3)
  6.040012 seconds (431.69 k allocations: 11.373 GiB, 6.85% gc time)

GNN.jl GPU

julia> @time train(usecuda=true)
[ Info: Training on GPU
┌ Info: GNNGraph:
│     num_nodes = 2708
└     num_edges = 10556
Epoch: 0   Train: (loss = 1.9487f0, acc = 13.57)   Test: (loss = 1.9464f0, acc = 13.6)
Epoch: 10   Train: (loss = 1.6139f0, acc = 93.57)   Test: (loss = 1.7683f0, acc = 68.6)
Epoch: 20   Train: (loss = 0.989f0, acc = 95.71)   Test: (loss = 1.388f0, acc = 77.3)
Epoch: 30   Train: (loss = 0.3757f0, acc = 98.57)   Test: (loss = 0.9163f0, acc = 80.8)
Epoch: 40   Train: (loss = 0.1102f0, acc = 100.0)   Test: (loss = 0.6602f0, acc = 81.2)
Epoch: 50   Train: (loss = 0.0355f0, acc = 100.0)   Test: (loss = 0.621f0, acc = 79.8)
Epoch: 60   Train: (loss = 0.0147f0, acc = 100.0)   Test: (loss = 0.6215f0, acc = 79.6)
Epoch: 70   Train: (loss = 0.0081f0, acc = 100.0)   Test: (loss = 0.6389f0, acc = 79.7)
Epoch: 80   Train: (loss = 0.0053f0, acc = 100.0)   Test: (loss = 0.6539f0, acc = 79.2)
Epoch: 90   Train: (loss = 0.0039f0, acc = 100.0)   Test: (loss = 0.6581f0, acc = 79.5)
Epoch: 100   Train: (loss = 0.003f0, acc = 100.0)   Test: (loss = 0.6679f0, acc = 79.2)
  0.710807 seconds (712.31 k allocations: 112.747 MiB, 2.47% gc time)

GF.jl CPU

julia> @time train(usecuda=false)
[ Info: Training on CPU
┌ Info: FeaturedGraph(
│ 	Undirected graph with (#V=2708, #E=5278) in adjacency matrix,
└ )
Epoch: 0   Train: (loss = 1.9487f0, acc = 13.57)   Test: (loss = 1.9464f0, acc = 13.6)
Epoch: 10   Train: (loss = 1.6136f0, acc = 90.0)   Test: (loss = 1.7671f0, acc = 66.9)
Epoch: 20   Train: (loss = 0.9821f0, acc = 97.14)   Test: (loss = 1.3838f0, acc = 77.1)
Epoch: 30   Train: (loss = 0.3707f0, acc = 98.57)   Test: (loss = 0.9097f0, acc = 80.5)
Epoch: 40   Train: (loss = 0.1076f0, acc = 100.0)   Test: (loss = 0.6656f0, acc = 80.6)
Epoch: 50   Train: (loss = 0.034f0, acc = 100.0)   Test: (loss = 0.6272f0, acc = 79.1)
Epoch: 60   Train: (loss = 0.0144f0, acc = 100.0)   Test: (loss = 0.6323f0, acc = 78.7)
Epoch: 70   Train: (loss = 0.0077f0, acc = 100.0)   Test: (loss = 0.632f0, acc = 79.8)
Epoch: 80   Train: (loss = 0.0052f0, acc = 100.0)   Test: (loss = 0.6573f0, acc = 79.1)
Epoch: 90   Train: (loss = 0.0038f0, acc = 100.0)   Test: (loss = 0.6693f0, acc = 79.1)
Epoch: 100   Train: (loss = 0.0029f0, acc = 100.0)   Test: (loss = 0.678f0, acc = 79.3)
 23.977515 seconds (421.21 k allocations: 42.810 GiB, 4.37% gc time)

GF.jl GPU

BROKEN (hangs forever)


codecov bot commented Dec 18, 2021

Codecov Report

Merging #82 (b349bc4) into master (91e93e0) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master      #82   +/-   ##
=======================================
  Coverage   83.21%   83.21%           
=======================================
  Files          13       13           
  Lines        1007     1007           
=======================================
  Hits          838      838           
  Misses        169      169           


@CarloLucibello CarloLucibello merged commit 749e038 into master Dec 18, 2021
@CarloLucibello CarloLucibello deleted the cl/gf branch March 23, 2022 07:55