You used the dot product between the ground truth semantic description and predicted semantic description as a metric to measure the closeness between the two vectors.
Why did you prefer the dot product over Euclidean distance? What is the intuition behind this approach? In other words, why does the Euclidean measure not work here?
Why do you compare the predicted semantic description with the ground truth semantic description of only the new classes, and not of all classes?
What is the input matrix present in all.kernel? Why is this matrix symmetric and square? What features did you use?
Thank you in advance.
Three reasons: a) it can be interpreted as a simple two-layer linear network in which the last layer is fixed to the attribute signatures and only the first layer is learned; b) the Euclidean norm would treat all attributes as equally important, whereas in the real world some may be noisier than others, or correlated with each other; and c) it leads to a closed-form solution.
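The two-layer view in a) can be sketched in a few lines of numpy. This is only an illustration with random data, not the actual implementation; all names (`V`, `S`, `X`) are hypothetical, with `S` standing in for the fixed attribute signatures and `V` for the learned layer:

```python
import numpy as np

rng = np.random.default_rng(0)

d, a, z, n = 5, 4, 3, 6  # feature dim, #attributes, #unseen classes, #test instances
V = rng.standard_normal((d, a))  # learned layer: features -> attributes
S = rng.standard_normal((a, z))  # fixed layer: one attribute signature per class
X = rng.standard_normal((n, d))  # test instances

# Two-layer linear network: predict attributes, then score each class
# as the dot product between predicted attributes and its signature.
pred_attributes = X @ V          # shape (n, a)
scores = pred_attributes @ S     # shape (n, z)
predicted_class = scores.argmax(axis=1)
```

Because both layers are linear, the whole classifier is just `X @ V @ S`, which is why the compatibility score is a dot product rather than a distance.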
Most of the zero-shot learning literature was about making inferences on new classes based on knowledge learned from previous classes, and so were the zero-shot learning benchmarks. However, your question makes a lot of sense, and nowadays more papers are taking care of this (often called generalized zero-shot learning).
That matrix is the kernel (aka Gram) matrix, which is n x n, where n is the number of instances. You can think of each entry as the "degree of similarity" between a pair of instances (that is why the matrix is symmetric and square; it is also positive semidefinite). You can read more about this in https://en.wikipedia.org/wiki/Kernel_method and the references there. The way the kernel matrices were created is explained in this paper.
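To make those properties concrete, here is a minimal sketch of building a Gram matrix with an RBF kernel on random data. The kernel choice and the data are purely illustrative (the actual kernels used are the ones described in the paper), but any valid kernel yields a matrix with the same three properties:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))  # n = 8 instances, 3 features each

# Pairwise squared Euclidean distances between all instances,
# turned into similarities by an RBF kernel: K[i, j] = exp(-0.5 * ||x_i - x_j||^2).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.5 * sq_dists)      # the n x n Gram matrix

assert K.shape == (8, 8)                      # square: n x n
assert np.allclose(K, K.T)                    # symmetric: K[i, j] == K[j, i]
assert np.linalg.eigvalsh(K).min() >= -1e-10  # positive semidefinite
```

Symmetry follows because similarity is mutual, and the diagonal entries are each instance's similarity to itself (here exactly 1).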