Great code, guys! Can I ask a question about this code?
https://github.com/locuslab/TCN/blob/master/TCN/adding_problem/model.py#L17
Usually when I implement CNN-style models, calculating the dimension of the last conv layer's output is always a problem for me.
In your code, at line 17, it looks like the final linear layer only takes a slice of the conv-layer output. Is this understanding correct? Does it ignore the many other activations in the conv-layer output?
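(For reference, the shape arithmetic I am doing follows the standard Conv1d output-length formula, `L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1`; with stride 1 and padding chosen to match the dilated kernel, the sequence length is preserved, so `y1` still has one feature vector per time step.)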
What surprises me is that when I test this kind of implementation on other traditional CNN models, it also works (I mean just using `self.linear(y1[:, :, -1])`). Does this mean the task is simple for the designed CNN, because we just dropped a lot of neurons from it? Any advice would be highly appreciated.
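To make the pattern concrete, here is a minimal self-contained sketch of what I mean; `ToyTCN` and its layer sizes are made up for illustration, not the repo's actual model (which uses dilated causal convolutions):

```python
import torch
import torch.nn as nn


class ToyTCN(nn.Module):
    """Toy stand-in for the linked model: a conv stack plus a linear head."""

    def __init__(self, input_size, hidden, output_size, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2  # keeps seq_len unchanged for odd kernels
        self.conv = nn.Sequential(
            nn.Conv1d(input_size, hidden, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size, padding=pad),
            nn.ReLU(),
        )
        self.linear = nn.Linear(hidden, output_size)

    def forward(self, x):
        # x: (batch, input_size, seq_len)
        y1 = self.conv(x)  # (batch, hidden, seq_len)
        # Only the last time step reaches the linear layer; every earlier
        # position of y1 is discarded here -- this is the slice in question.
        return self.linear(y1[:, :, -1])  # (batch, output_size)


model = ToyTCN(input_size=2, hidden=16, output_size=1)
print(model(torch.randn(8, 2, 50)).shape)  # torch.Size([8, 1])
```

In this toy version the last position only sees a small window of the input; as I understand it, the dilations in the real TCN are what widen that last position's receptive field to cover the whole sequence, but I would like to confirm this.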