
no method matching getobs(::NamedTuple{(:x,), Tuple{Matrix{Float32}}} #88

Closed
zsz00 opened this issue Dec 25, 2021 · 7 comments · Fixed by #90

Comments

@zsz00

zsz00 commented Dec 25, 2021

I tested the code from
https://carlolucibello.github.io/GraphNeuralNetworks.jl/stable/

I got an error:
[screenshot of the error]

GraphNeuralNetworks v0.3.6

julia> versioninfo()
Julia Version 1.7.1
Commit ac5cc99908 (2021-12-22 19:35 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.1 (ORCJIT, cascadelake)

@CarloLucibello
Member

CarloLucibello commented Dec 25, 2021

It's not clear what code you are running. Can you post the full example (not as an image but in a julia code block)?

@zsz00
Author

zsz00 commented Dec 25, 2021

The example code is from:
https://carlolucibello.github.io/GraphNeuralNetworks.jl/stable/#Training

julia> using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics

julia> all_graphs = GNNGraph[];

julia> for _ in 1:1000
           g = GNNGraph(random_regular_graph(10, 4),  
                       ndata=(; x = randn(Float32, 16,10)),  # input node features
                       gdata=(; y = randn(Float32)))         # regression target   
           push!(all_graphs, g)
       end

julia> gbatch = Flux.batch(all_graphs)
GNNGraph:
    num_nodes = 10000
    num_edges = 40000
    num_graphs = 1000
    ndata:
        x => (16, 10000)
    gdata:
        y => (1000,)

julia> device = CUDA.functional() ? Flux.gpu : Flux.cpu;

julia> model = GNNChain(GCNConv(16 => 64),
                        BatchNorm(64),     # Apply batch normalization on node features (nodes dimension is batch dimension)
                        x -> relu.(x),     
                        GCNConv(64 => 64, relu),
                        GlobalPool(mean),  # aggregate node-wise features into graph-wise features
                        Dense(64, 1)) |> device;

julia> ps = Flux.params(model);

julia> opt = ADAM(1f-4);

gtrain = getgraph(gbatch, 1:800)
gtest = getgraph(gbatch, 801:gbatch.num_graphs)
train_loader = Flux.Data.DataLoader(gtrain, batchsize=32, shuffle=true)
test_loader = Flux.Data.DataLoader(gtest, batchsize=32, shuffle=false)

loss(g::GNNGraph) = mean((vec(model(g, g.ndata.x)) - g.gdata.y).^2)

loss(loader) = mean(loss(g |> device) for g in loader)

for epoch in 1:100
    for g in train_loader
        g = g |> device
        grad = gradient(() -> loss(g), ps)
        Flux.Optimise.update!(opt, ps, grad)
    end

    @info (; epoch, train_loss=loss(train_loader), test_loss=loss(test_loader))
end

@CarloLucibello
Member

This code works fine for me.

@zsz00
Author

zsz00 commented Dec 25, 2021

julia> gbatch
GNNGraph:
    num_nodes = 10000
    num_edges = 40000
    num_graphs = 1000
    ndata:
        x => (16, 10000)
    gdata:
        y => (1000,)

julia> getgraph(gbatch, 1:10)
ERROR: MethodError: no method matching getobs(::NamedTuple{(:x,), Tuple{Matrix{Float32}}}, ::BitVector)
Closest candidates are:
  getobs(::GNNGraph, ::Any) at ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/gnngraph.jl:227
Stacktrace:
 [1] getgraph(g::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, i::UnitRange{Int64}; nmap::Bool)
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/transform.jl:323
 [2] getgraph(g::GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}, i::UnitRange{Int64})
   @ GraphNeuralNetworks.GNNGraphs ~/.julia/packages/GraphNeuralNetworks/8LJLl/src/GNNGraphs/transform.jl:293
 [3] top-level scope
   @ REPL[15]:1
 [4] top-level scope
   @ ~/.julia/packages/CUDA/zwPff/src/initialization.jl:52

What might have caused this?

Julia 1.7.1, GraphNeuralNetworks v0.3.6

@CarloLucibello
Member

I see. I think there is some version conflict with Flux, LearnBase, and MLDataUtils. Can you post the versions in your Manifest for those 3 packages?

@zsz00
Author

zsz00 commented Dec 25, 2021

I found Flux and LearnBase in my Manifest (MLDataUtils is not installed):

(@v1.7) pkg> st --manifest Flux
      Status `~/.julia/environments/v1.7/Manifest.toml`
  [587475ba] Flux v0.12.8

(@v1.7) pkg> st --manifest MLDataUtils
  No Matches in `~/.julia/environments/v1.7/Manifest.toml`

(@v1.7) pkg> st --manifest LearnBase
      Status `~/.julia/environments/v1.7/Manifest.toml`
  [7f8f8fb0] LearnBase v0.4.1

(@v1.7) pkg>


@zsz00
Author

zsz00 commented Dec 25, 2021

I removed MLJ.jl and updated LearnBase.jl, and now the code runs fine.
LearnBase.jl is now at v0.5.3.
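For anyone hitting the same resolution conflict, the fix described above can be reproduced from the Pkg REPL. This is a sketch based on this thread: it assumes MLJ is the package holding LearnBase back at v0.4.1 in your environment; check with `st --manifest LearnBase` first and adjust the package name if a different dependency is the culprit.

```julia
# In the Pkg REPL (press `]` at the julia> prompt):
#   pkg> rm MLJ            # remove the package constraining LearnBase to v0.4.1
#   pkg> up LearnBase      # resolve LearnBase up to v0.5.x
#
# Or equivalently through the Pkg API:
using Pkg
Pkg.rm("MLJ")              # drop the conflicting dependency
Pkg.update("LearnBase")    # lets the resolver pick a LearnBase compatible
                           # with GraphNeuralNetworks (v0.5.3 here)
```

After updating, restart Julia so the new versions are loaded before re-running the training example.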
