
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [?,1000,2048], [2048,64]. #1

Open
zhangxgu opened this issue Aug 29, 2017 · 1 comment


zhangxgu commented Aug 29, 2017

This error is raised when I run the following code:
```python
import tensorflow as tf
import loupe as lp

x = tf.placeholder("float", [None, 1000, 2048])
NetVLAD = lp.NetVLAD(feature_size=2048, max_samples=1000, cluster_size=64,
                     output_dim=2048, gating=True, add_batch_norm=True,
                     is_training=True)
NetVLAD.forward(x)
```
I think x.shape here is batch_size x max_samples x feature_size. Should I change line 126 in loupe.py to the following?

```python
cluster_weights = tf.get_variable("cluster_weights",
                                  [1, self.feature_size, self.cluster_size],
                                  initializer=tf.random_normal_initializer(
                                      stddev=1 / math.sqrt(self.feature_size)))
```
But this also leads to another error. Can you help me?
Thank you very much!
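
For context, the error comes from tf.matmul requiring both operands to have the same rank: the [?, 1000, 2048] input is rank 3 while cluster_weights is a rank-2 [2048, 64] matrix (and a [1, feature_size, cluster_size] variable would still mismatch a rank-3 batch in TensorFlow 1.x, which is presumably the other error mentioned above). A minimal sketch of the mismatch and of the reshape that avoids it, independent of loupe (names here are illustrative):

```python
import tensorflow as tf

activations = tf.placeholder("float", [None, 1000, 2048])  # rank 3
weights = tf.get_variable("w", [2048, 64])                 # rank 2

# tf.matmul needs both operands at the same rank, so this line would raise
# "Shape must be rank 2 but is rank 3 for 'MatMul'":
#   product = tf.matmul(activations, weights)

# Collapsing the batch and sample dimensions yields a valid rank-2 matmul:
flat = tf.reshape(activations, [-1, 2048])  # [batch*max_samples, 2048]
product = tf.matmul(flat, weights)          # [batch*max_samples, 64]
```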

@antoine77340
Copy link
Owner

Hi,
You are right, sorry, there was an error in my documentation. I am changing that.
In the documentation I wrote:
   Args:
    reshaped_input: The input is reshaped in the following form:
    'batch_size' x 'max_samples' x 'feature_size'.

But I meant:

   Args:
    reshaped_input: Your input of form
    'batch_size' x 'max_samples' x 'feature_size'
    must be reshaped into the form
    'batch_size*max_samples' x 'feature_size' by performing:
    reshaped_input = tf.reshape(input, [-1, feature_size])

So if you instead do:

```python
x = tf.placeholder("float", [1, 1000, 2048])  # any batch size works, not only 1
x = tf.reshape(x, [-1, 2048])
NetVLAD = lp.NetVLAD(feature_size=2048, max_samples=1000, cluster_size=64,
                     output_dim=2048, gating=True, add_batch_norm=True,
                     is_training=True)
NetVLAD.forward(x)
```

It should work.
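
Putting it together, here is a complete sketch of the corrected usage (assuming, as the constructor arguments suggest, that forward returns a 'batch_size' x 'output_dim' descriptor):

```python
import tensorflow as tf
import loupe as lp

batch_size, max_samples, feature_size = 4, 1000, 2048

x = tf.placeholder("float", [batch_size, max_samples, feature_size])
# Collapse the batch and sample dimensions before calling forward(), as the
# corrected documentation above requires:
flat = tf.reshape(x, [-1, feature_size])  # [batch_size*max_samples, feature_size]

netvlad = lp.NetVLAD(feature_size=feature_size, max_samples=max_samples,
                     cluster_size=64, output_dim=2048, gating=True,
                     add_batch_norm=True, is_training=True)
descriptor = netvlad.forward(flat)
print(descriptor.get_shape())  # expected: (4, 2048)
```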

Thank you for pointing this out.

Antoine
