no method matching getobs(::NamedTuple{(:x,), Tuple{Matrix{Float32}}}) #88
It's not clear what code you are running. Can you post the full example (not as an image but in a Julia code block)?
The example code, from the GraphNeuralNetworks.jl docs:

```julia
julia> using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics

julia> all_graphs = GNNGraph[];

julia> for _ in 1:1000
           g = GNNGraph(random_regular_graph(10, 4),
                        ndata=(; x = randn(Float32, 16, 10)),  # input node features
                        gdata=(; y = randn(Float32)))          # regression target
           push!(all_graphs, g)
       end

julia> gbatch = Flux.batch(all_graphs)
GNNGraph:
    num_nodes = 10000
    num_edges = 40000
    num_graphs = 1000
    ndata:
        x => (16, 10000)
    gdata:
        y => (1000,)

julia> device = CUDA.functional() ? Flux.gpu : Flux.cpu;

julia> model = GNNChain(GCNConv(16 => 64),
                        BatchNorm(64),     # batch normalization on node features (node dimension is the batch dimension)
                        x -> relu.(x),
                        GCNConv(64 => 64, relu),
                        GlobalPool(mean),  # aggregate node-wise features into graph-wise features
                        Dense(64, 1)) |> device;

julia> ps = Flux.params(model);

julia> opt = ADAM(1f-4);

gtrain = getgraph(gbatch, 1:800)
gtest = getgraph(gbatch, 801:gbatch.num_graphs)
train_loader = Flux.Data.DataLoader(gtrain, batchsize=32, shuffle=true)
test_loader = Flux.Data.DataLoader(gtest, batchsize=32, shuffle=false)

loss(g::GNNGraph) = mean((vec(model(g, g.ndata.x)) - g.gdata.y).^2)
loss(loader) = mean(loss(g |> device) for g in loader)

for epoch in 1:100
    for g in train_loader
        g = g |> device
        grad = gradient(() -> loss(g), ps)
        Flux.Optimise.update!(opt, ps, grad)
    end
    @info (; epoch, train_loss=loss(train_loader), test_loss=loss(test_loader))
end
```
This code works fine for me.
```julia
julia> gbatch
GNNGraph:
    num_nodes = 10000
    num_edges = 40000
    num_graphs = 1000
    ndata:
        x => (16, 10000)
    gdata:
        y => (1000,)

julia> getgraph(gbatch, 1:10)
ERROR: MethodError: no method matching getobs(::NamedTuple{(:x,), Tuple{Matrix{Float32}}}, ::BitVector)
Closest candidates are:
  getobs(::GNNGraph, ::Any) at ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/gnngraph.jl:227
Stacktrace:
 [1] getgraph(g::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, i::UnitRange{Int64}; nmap::Bool)
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/transform.jl:323
 [2] getgraph(g::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, i::UnitRange{Int64})
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/transform.jl:293
 [3] top-level scope
   @ REPL[15]:1
 [4] top-level scope
   @ ~/.julia/packages/CUDA/zwPff/src/initialization.jl:52
```

What might have caused this? Julia 1.7.1, GraphNeuralNetworks v0.3.6.
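For context, the stacktrace shows that `getgraph` subsets the `ndata`/`gdata` NamedTuples by calling `getobs` on them, and the failure is a missing NamedTuple method for `getobs`. A minimal self-contained sketch (the function names here are hypothetical stand-ins, not the real LearnBase code) of how an outdated `getobs` produces exactly this kind of MethodError:

```julia
# Stand-in for an old getobs that only handles arrays:
getobs_old(x::AbstractMatrix, i) = x[:, i]

# Stand-in for a newer getobs that also recurses into NamedTuples:
getobs_new(x::AbstractMatrix, i) = x[:, i]
getobs_new(nt::NamedTuple, i) = map(v -> getobs_new(v, i), nt)

# Same shape as the feature NamedTuple in the error message:
ndata = (; x = randn(Float32, 16, 10))

getobs_new(ndata, 1:3)  # works: returns (; x = 16×3 matrix)
getobs_old(ndata, 1:3)  # MethodError: no method matching getobs_old(::NamedTuple{...}, ::UnitRange{Int64})
```

So if the resolved version of the package providing `getobs` is older than what GraphNeuralNetworks expects, the NamedTuple method simply does not exist.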
I see. I think there is a version conflict among Flux, LearnBase, and MLDataUtils. Can you post the versions in your Manifest for those three packages?
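The resolved manifest versions can be queried with `Pkg.status`; passing `mode = Pkg.PKGMODE_MANIFEST` also lists packages that are only indirect dependencies:

```julia
using Pkg
# Show the resolved versions of the three suspect packages,
# including ones that are not direct dependencies of the project.
Pkg.status(["Flux", "LearnBase", "MLDataUtils"]; mode = Pkg.PKGMODE_MANIFEST)
```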
I found Flux and LearnBase in my Manifest.
I removed MLJ.jl and upgraded LearnBase.jl; after that the code runs OK.
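For anyone hitting the same conflict, the fix described above amounts to Pkg commands along these lines (adjust to whichever package is holding LearnBase back in your own environment):

```julia
using Pkg
Pkg.rm("MLJ")            # remove the package pinning an old LearnBase
Pkg.update("LearnBase")  # let LearnBase resolve to a newer version
# Verify the resolved version:
Pkg.status("LearnBase"; mode = Pkg.PKGMODE_MANIFEST)
```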
I tested the code from
https://carlolucibello.github.io/GraphNeuralNetworks.jl/stable/
and got an error:
[screenshot of the error]
```julia
GraphNeuralNetworks v0.3.6

julia> versioninfo()
Julia Version 1.7.1
Commit ac5cc99908 (2021-12-22 19:35 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, cascadelake)
```