Commit 21638c4

cleanup
CarloLucibello committed Feb 13, 2022
1 parent 2267d5d commit 21638c4
Showing 2 changed files with 0 additions and 20 deletions.
6 changes: 0 additions & 6 deletions docs/src/api/conv.md
@@ -4,14 +4,8 @@ CurrentModule = GraphNeuralNetworks

# Convolutional Layers

-<<<<<<< HEAD
-Many different types of graphs convolutional layers have been proposed in the literature.
-Choosing the right layer for your application can be a matter of trial and error.
-Some of the most commonly used layers are the [`GCNConv`](@ref) and the [`GATv2Conv`](@ref) layers. Multiple graph convolutional layers are stacked to create a graph neural network model
-=======
Many different types of graph convolutional layers have been proposed in the literature. Choosing the right layer for your application could involve a lot of exploration.
Some of the most commonly used layers are the [`GCNConv`](@ref) and the [`GATv2Conv`](@ref). Multiple graph convolutional layers are typically stacked together to create a graph neural network model
->>>>>>> 07276df (docs)
(see [`GNNChain`](@ref)).

The table below lists all graph convolutional layers implemented in *GraphNeuralNetworks.jl*. It also highlights which layers provide additional capabilities beyond basic message passing:
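As a usage note on the paragraph kept above: the stacking pattern can be sketched in a few lines. This is a minimal illustration, assuming the documented `in => out` layer constructors; the graph, feature sizes, and `rand_graph` call are illustrative, not from the commit:

```julia
using GraphNeuralNetworks, Flux

# Toy graph with 10 nodes, 30 random edges, and 3 features per node (illustrative sizes).
g = rand_graph(10, 30)
x = rand(Float32, 3, 10)

# Stack graph convolutional layers into a model, as the docs describe (see GNNChain).
model = GNNChain(GCNConv(3 => 16, relu),
                 GATv2Conv(16 => 16),
                 Dense(16, 1))

y = model(g, x)   # 1 × 10 matrix: one prediction per node
```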
14 changes: 0 additions & 14 deletions src/layers/conv.jl
@@ -269,11 +269,7 @@ with ``z_i`` a normalization factor.
In case `ein > 0` is given, edge features of dimension `ein` will be expected in the forward pass
and the attention coefficients will be calculated as
```
-<<<<<<< HEAD
-\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_3 \mathbf{e}_{j\to i}; W_2 \mathbf{x}_i; W_1 \mathbf{x}_j]))
-=======
\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))
->>>>>>> 07276df (docs)
```
# Arguments
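A brief usage sketch of the `ein > 0` case described above, assuming this docstring belongs to `GATConv` (the layer defined at this point in conv.jl); the dimensions are illustrative:

```julia
using GraphNeuralNetworks

g = rand_graph(10, 30)
x = rand(Float32, 4, 10)            # node features, dimension in = 4
e = rand(Float32, 2, g.num_edges)   # edge features, dimension ein = 2

l = GATConv((4, 2) => 8)            # (in, ein) => out
y = l(g, x, e)                      # edge features are passed in the forward call
```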
@@ -393,15 +389,9 @@ with ``z_i`` a normalization factor.
In case `ein > 0` is given, edge features of dimension `ein` will be expected in the forward pass
and the attention coefficients will be calculated as
-<<<<<<< HEAD
-```
-\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_3 \mathbf{e}_{j\to i}; W_2 \mathbf{x}_i; W_1 \mathbf{x}_j]))
-````
-=======
```math
\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_3 \mathbf{e}_{j\to i}; W_2 \mathbf{x}_i; W_1 \mathbf{x}_j])).
```
->>>>>>> 07276df (docs)
# Arguments
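To make the formula concrete, here is a toy numeric sketch of the unnormalized attention score for a single edge ``j \to i``. The names `W1`, `W2`, `W3`, and `a` stand in for the layer's internal weights and are not part of the package API:

```julia
using LinearAlgebra

# LeakyReLU with the conventional 0.2 negative slope (an assumption, not read off the code).
leakyrelu(x; slope=0.2f0) = max.(x, slope .* x)

din, dein, dout = 4, 2, 8
W1, W2, W3 = randn(Float32, dout, din), randn(Float32, dout, din), randn(Float32, dout, dein)
a = randn(Float32, 3dout)           # dotted against the concatenated block below

xi, xj, eij = randn(Float32, din), randn(Float32, din), randn(Float32, dein)

# a^T LeakyReLU([W3*e; W2*xi; W1*xj]); the coefficient is alpha_ij = exp(score) / z_i,
# with z_i summing exp(score) over all neighbors j of node i.
score = dot(a, leakyrelu(vcat(W3 * eij, W2 * xi, W1 * xj)))
```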
@@ -430,11 +420,7 @@ struct GATv2Conv{T, A1, A2, A3, B, C<:AbstractMatrix} <: GNNLayer
end

@functor GATv2Conv
-<<<<<<< HEAD
-Flux.trainable(l::GATv2Conv) = (l.dense_i, l.dense_j, l.dense_j, l.bias, l.a)
-=======
Flux.trainable(l::GATv2Conv) = (l.dense_i, l.dense_j, l.dense_e, l.bias, l.a)
->>>>>>> 07276df (docs)

GATv2Conv(ch::Pair{Int,Int}, args...; kws...) = GATv2Conv((ch[1], 0) => ch[2], args...; kws...)

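The kept `Flux.trainable` line also fixes a bug from the HEAD side, which listed `l.dense_j` twice and omitted `l.dense_e`, so the edge-feature weights were never trained. As a sketch of the mechanism with a toy struct (not the package's code): only the fields returned by `Flux.trainable` are collected for optimization.

```julia
using Flux

struct Toy
    dense_i::Dense
    dense_e::Dense
    buffer::Vector{Float32}   # state the optimizer should not touch
end
Flux.@functor Toy
Flux.trainable(t::Toy) = (t.dense_i, t.dense_e)   # buffer is excluded from training

t = Toy(Dense(2, 3), Dense(2, 3), zeros(Float32, 3))
Flux.params(t)   # collects only the weights and biases of the two Dense layers
```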
