I am very happy to see this work.
I have often thought that although the original STGCN (Yu et al., 2018) proposed substituting a 1-D CNN for the RNN, a CNN's temporal context is always limited by its kernel width and receptive field, so bringing an LSTM back in seems like the better choice.
Your work shows that this idea holds up. The code is also clear and easy to read.
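To illustrate the point, here is a minimal sketch (not your actual implementation; the class names, shapes, and channel sizes are my own assumptions) contrasting an STGCN-style temporal 1-D convolution, whose per-layer context is bounded by the kernel size, with an LSTM that sees the full history of each node's series:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: batch B, channels C, time steps T, nodes N.
B, T, N, C_in, C_out = 8, 12, 207, 1, 64

class TemporalConv(nn.Module):
    """STGCN-style temporal block: a 1-D convolution along the time axis.
    Its receptive field per layer is limited to `kernel_size` steps."""
    def __init__(self, c_in, c_out, kernel_size=3):
        super().__init__()
        # Conv2d over (time, nodes) with a kernel that only spans time.
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=(kernel_size, 1))

    def forward(self, x):                 # x: (B, C_in, T, N)
        return torch.relu(self.conv(x))   # (B, C_out, T - k + 1, N)

class TemporalLSTM(nn.Module):
    """LSTM alternative: each node's series is processed recurrently,
    so the effective receptive field covers the whole sequence."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.lstm = nn.LSTM(c_in, c_out, batch_first=True)

    def forward(self, x):                 # x: (B, C_in, T, N)
        b, c, t, n = x.shape
        # Fold nodes into the batch so the LSTM runs per node over time.
        x = x.permute(0, 3, 2, 1).reshape(b * n, t, c)        # (B*N, T, C_in)
        out, _ = self.lstm(x)                                  # (B*N, T, C_out)
        return out.reshape(b, n, t, -1).permute(0, 3, 2, 1)    # (B, C_out, T, N)

x = torch.randn(B, C_in, T, N)
print(TemporalConv(C_in, C_out)(x).shape)   # time axis shrinks with each conv layer
print(TemporalLSTM(C_in, C_out)(x).shape)   # time axis preserved, full-history context
```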
Thank you!