Firstly, the code is well written.
I am new to this field. I know how masked_fill() works.
However, I want to know the exact reason why the features are masked here. From what I understand, wherever the absolute sum of the features along the last dimension is 0, you avoid considering that position when computing the attention scores (in MHAtt); I have sketched my understanding below.
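For reference, a minimal sketch of what I think the mask construction is doing (hypothetical tensor names, not the actual repo code):

```python
import torch

# feat: hypothetical padded features, shape [batch, num_positions, feat_dim]
feat = torch.randn(2, 4, 8)
feat[:, 2:, :] = 0  # simulate zero-padded positions

# A position counts as padding when the absolute sum of its feature
# vector along the last dimension is exactly zero.
mask = (torch.abs(feat).sum(dim=-1) == 0)  # [batch, num_positions], True at padding
print(mask)  # True exactly at the all-zero rows
```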
What is the exact reason for doing so, and how is it beneficial?
Thanks in advance.
I assume you mean the mask used in the MCAN model, since the other methods do not use this function.
The reason for using masks is that the question and visual features are zero-padded to make all inputs the same shape, and we do not want these padded positions to be involved in model training.
For visual features, the number of extracted objects is dynamic within [10, 100], and for question features, the number of words is dynamic within [1, 14]. A detailed description of the input features can be found in our CVPR 2019 paper.
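As a minimal, single-head sketch (hypothetical names, simplified from the actual MHAtt code), this is roughly how the mask keeps the zero-padded positions from receiving attention:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, mask):
    # q, k, v: [batch, seq_len, d]; mask: [batch, seq_len], True at padded positions
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5  # [batch, seq, seq]

    # Fill the scores at padded key positions with a large negative value,
    # so softmax assigns them (near-)zero attention weight.
    scores = scores.masked_fill(mask.unsqueeze(1), -1e9)
    att = F.softmax(scores, dim=-1)
    return torch.matmul(att, v)

# Example: positions 2..3 are zero-padding and receive no attention.
feat = torch.randn(2, 4, 8)
feat[:, 2:, :] = 0
mask = (torch.abs(feat).sum(dim=-1) == 0)
out = attention(feat, feat, feat, mask)
```

In the multi-head case, the mask would additionally need an extra singleton dimension so that it broadcasts over the heads.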
Yes, I have read your paper, but it never occurred to me that the masking was linked to the zero-padded features. Thanks a ton for sharing this. :)
My doubt is cleared, so I am closing the issue.