Question about trainable parameters #1
Hi Romit,
Thanks for your interest in our work. The convolution layer first performs a GMLS reconstruction of the function and then applies a parameterized functional to the reconstruction. In your first example, you're using a 2nd-order Taylor polynomial in 2D (basis dimension 6), layers with biases (on by default), and 1 channel. So the reconstruction has dimension 6 and the functional has 7 parameters: f(c) = \sum_i c_i t(P)_i + b, where the t(P)_i and b are the parameters.
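To make that concrete, here is a minimal numpy sketch of the single-channel functional (an illustration with made-up values, not the library's actual implementation):

```python
import numpy as np

q = 6                       # dimension of the order-2 Taylor basis in 2D

rng = np.random.default_rng(0)
c = rng.normal(size=q)      # GMLS reconstruction coefficients at one node
w = rng.normal(size=q)      # trainable weights t(P)_i
b = 0.1                     # trainable bias

f = c @ w + b               # f(c) = sum_i c_i t(P)_i + b

n_params = w.size + 1       # 6 weights + 1 bias = 7, matching your summary
```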
The first layer in your next example maps 1 channel to 2 channels. Here the reconstruction has dimension 6x1, and a functional maps it to a 2-dimensional vector: f^1(c) = \sum_i c_i t(P)^1_i + b^1 and f^2(c) = \sum_i c_i t(P)^2_i + b^2. The subsequent layers map 2 channels to 2 channels:
f_k(c) = \sum_{ij} c_{ij} t(P)_{ijk} + b_k.
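This gives a simple parameter count per layer: one weight per (input channel, basis term, output channel) plus one bias per output channel. The helper below is just an illustration of that count, not the library's API, and it reproduces the numbers in both of your model summaries:

```python
from math import comb

def mfconv_params(in_ch, out_ch, order=2, dim=2, bias=True):
    # Dimension of the Taylor basis of the given order in `dim` dimensions;
    # for order 2 in 2D this is C(4, 2) = 6.
    q = comb(order + dim, dim)
    return in_ch * q * out_ch + (out_ch if bias else 0)

# 1 -> 1 channel: 7;  1 -> 2: 14;  2 -> 2: 26;  2 -> 1: 13
counts = [mfconv_params(1, 1), mfconv_params(1, 2),
          mfconv_params(2, 2), mfconv_params(2, 1)]
```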
Ravi
________________________________
From: Romit Maulik <[email protected]>
Sent: Tuesday, April 7, 2020 7:31 PM
To: rgp62/gmls-nets
Subject: [EXTERNAL] [rgp62/gmls-nets] Question about trainable parameters (#1)
Hello!
Before I start - I would like to thank you for making your code/papers/talks public - it has been very useful! I have been trying to build a bottleneck (or autoencoder) type network using the MFConvLayers that you have designed and wanted to clarify a few things. First, let me explain my point clouds and my network design before I ask my question. My point clouds are given by
import itertools
import numpy as np

def make_grid(n):
    # n x n tensor-product grid over the bounding box of coods
    xs = np.linspace(coods[:, 0].min(), coods[:, 0].max(), n)
    ys = np.linspace(coods[:, 1].min(), coods[:, 1].max(), n)
    return np.reshape(list(itertools.product(xs, ys)), (n**2, 2)).astype('float32')

x1 = make_grid(60)
x2 = make_grid(30)
x3 = make_grid(10)
x4 = make_grid(2)
where coods is a numpy array of shape (4096,2) containing coordinates of an unstructured mesh (scaled to between 0 and 1 in each dimension). My goal is to successively compress the degrees of freedom while employing the MFConvLayer. With that in mind, here is my network specification:
# Constructs the 2D Taylor basis of order 2 (dimension 6)
order = 2
dim = 2  # 2D problem
fP = gnets.bases.Taylor(dim,order)
encoder_inputs = Input(shape=(4096,1),name='Field')
# First layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.2
chans0=1
gmls_encoded = gnets.MFConvLayer(coods,x1,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(encoder_inputs)
# Second layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.4
chans0=1
gmls_encoded = gnets.MFConvLayer(x1,x2,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
# Third layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.8
chans0=1
gmls_encoded = gnets.MFConvLayer(x2,x3,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
# Fourth layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 1.6
chans0=1
gmls_encoded = gnets.MFConvLayer(x3,x4,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='linear')(gmls_encoded)
# Fifth layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.8
chans0=1
gmls_encoded = gnets.MFConvLayer(x4,x3,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
# Sixth layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.4
chans0=1
gmls_encoded = gnets.MFConvLayer(x3,x2,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
# Seventh layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.2
chans0=1
gmls_encoded = gnets.MFConvLayer(x2,x1,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
# Output layer
# Point clouds and epsilon balls to compute neighbors
eps0 = 0.1
chans0=1
field_recon = gnets.MFConvLayer(x1,coods,fP,gnets.weightfuncs.fourth,eps0,chans0,activation='relu')(gmls_encoded)
gmls_encoder = Model(inputs=encoder_inputs,outputs=field_recon)
gmls_encoder.summary()
Note that I have kept my channels at 1 (for simplicity). Running gmls_encoder.summary() gives:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Field (InputLayer) [(None, 4096, 1)] 0
_________________________________________________________________
mf_conv_layer (MFConvLayer) (None, 3600, 1) 7
_________________________________________________________________
mf_conv_layer_1 (MFConvLayer (None, 900, 1) 7
_________________________________________________________________
mf_conv_layer_2 (MFConvLayer (None, 100, 1) 7
_________________________________________________________________
mf_conv_layer_3 (MFConvLayer (None, 4, 1) 7
_________________________________________________________________
mf_conv_layer_4 (MFConvLayer (None, 100, 1) 7
_________________________________________________________________
mf_conv_layer_5 (MFConvLayer (None, 900, 1) 7
_________________________________________________________________
mf_conv_layer_6 (MFConvLayer (None, 3600, 1) 7
_________________________________________________________________
mf_conv_layer_7 (MFConvLayer (None, 4096, 1) 7
=================================================================
Total params: 56
Trainable params: 56
Non-trainable params: 0
My question relates to the 7 trainable parameters in each layer - what do they correspond to? When I increase the channels to 2 each (except the output channel) - I get more trainable parameters as shown here:
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Field (InputLayer) [(None, 4096, 1)] 0
_________________________________________________________________
mf_conv_layer_24 (MFConvLaye (None, 3600, 2) 14
_________________________________________________________________
mf_conv_layer_25 (MFConvLaye (None, 900, 2) 26
_________________________________________________________________
mf_conv_layer_26 (MFConvLaye (None, 100, 2) 26
_________________________________________________________________
mf_conv_layer_27 (MFConvLaye (None, 4, 2) 26
_________________________________________________________________
mf_conv_layer_28 (MFConvLaye (None, 100, 2) 26
_________________________________________________________________
mf_conv_layer_29 (MFConvLaye (None, 900, 2) 26
_________________________________________________________________
mf_conv_layer_30 (MFConvLaye (None, 3600, 2) 26
_________________________________________________________________
mf_conv_layer_31 (MFConvLaye (None, 4096, 1) 13
=================================================================
Total params: 183
Trainable params: 183
Non-trainable params: 0
_________________________________________________________________
but once again I am not sure about how to interpret them. Any help would be great!
Thanks,
Romit
Thank you for responding. A few follow-up questions:
In my first example - in the first layer my input coordinates [...] Is this what is happening in this framework? Thanks!