
The "alpha" and "beta" in the paper are the opposite of the "alpha" and "beta" in the code of Tip-Adapter #4

Open · euminds opened this issue May 21, 2022 · 3 comments


euminds commented May 21, 2022

In the code:
alpha_list = [i * (6.0 - 1.0) / 20 + 1 for i in range(20)]
beta_list = [i * (7 - 0.1) / 200 + 0.1 for i in range(200)]
In the paper:
[screenshot of the relevant hyper-parameter definitions from the paper]
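
For context, a minimal sketch of the Tip-Adapter inference step using the paper's naming convention (β as the sharpness ratio inside the exponential, α as the residual ratio weighting the cache term); `image_feat`, `cache_keys`, `cache_values`, and `clip_logits` are assumed tensors here, not identifiers taken from the released code:

```python
import torch

def tip_adapter_logits(image_feat, cache_keys, cache_values, clip_logits,
                       alpha=1.0, beta=5.0):
    """Paper naming: alpha = residual ratio, beta = sharpness ratio."""
    # Cosine affinities between the test feature (B, D) and the cached
    # few-shot keys (N*K, D); all features assumed L2-normalized.
    affinity = image_feat @ cache_keys.t()            # (B, N*K)
    # A = exp(-beta * (1 - affinity)): beta sharpens the affinities.
    cache_logits = (-beta * (1.0 - affinity)).exp() @ cache_values
    # Residual blend with the zero-shot CLIP logits, weighted by alpha.
    return clip_logits + alpha * cache_logits
```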


euminds commented May 23, 2022

With the optimal hyper-parameters of Tip-Adapter, the result I achieved is 62.02% accuracy at residual ratio 1.0 and sharpness ratio 5.0. Also, what are the optimal hyper-parameters for Tip-Adapter-F that reach 65.51% accuracy (16-shot)? My current result is 65.45%.
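
As a sketch of how such a search over the two ratios might look, mirroring the ranges quoted in the first comment; `evaluate(alpha, beta)` is a hypothetical caller-supplied helper returning validation accuracy, not a function from the repo:

```python
from typing import Callable, Tuple

# Same search ranges as the lists quoted in the first comment.
alpha_list = [i * (6.0 - 1.0) / 20 + 1 for i in range(20)]
beta_list = [i * (7 - 0.1) / 200 + 0.1 for i in range(200)]

def grid_search(evaluate: Callable[[float, float], float]
                ) -> Tuple[float, float, float]:
    """Return (best_acc, best_alpha, best_beta).

    evaluate(alpha, beta) -> validation accuracy (%); supplied by the
    caller, since the full evaluation pipeline is out of scope here.
    """
    return max((evaluate(a, b), a, b)
               for a in alpha_list for b in beta_list)
```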

ZrrSkywalker (Collaborator) commented

@euminds Thanks for pointing this out.
We have fixed this and released a new code base in a separate repo.

Concerning the 65.45% for Tip-Adapter-F: the released code achieves 65.51% on my original device but shows variance on other devices, so it is common to see a ±0.1% accuracy jitter.
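
As an aside, run-to-run jitter of this size can often be reduced (though not fully removed across devices) by fixing random seeds and cuDNN determinism; a minimal sketch, not taken from the released code:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 1) -> None:
    # Fix all RNGs touched by a typical PyTorch pipeline.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for reproducibility in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```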


euminds commented Jul 31, 2022

@ZrrSkywalker
Thank you for your amazing paper.

I am trying to evaluate CLIP with RN50x16 on ImageNet:
output = model.encode_image(test_image)
but I get this error:
File "", line 1, in <cell line: 1>
output = model.encode_image(test_image)
File "/home/user/anaconda3/envs/yolov5_4/lib/python3.8/site-packages/clip/model.py", line 337, in encode_image
return self.visual(image.type(self.dtype))
File "/home/user/anaconda3/envs/yolov5_4/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/anaconda3/envs/yolov5_4/lib/python3.8/site-packages/clip/model.py", line 148, in forward
x = self.attnpool(x)
File "/home/user/anaconda3/envs/yolov5_4/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/anaconda3/envs/yolov5_4/lib/python3.8/site-packages/clip/model.py", line 69, in forward
x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
RuntimeError: The size of tensor a (50) must match the size of tensor b (145) at non-singleton dimension 0

Thanks
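
For what it's worth, the mismatch (50 vs. 145) points to an input-resolution issue: RN50x16 uses 384×384 inputs, so its attention pool expects (384/32)² + 1 = 145 position embeddings, while a 224×224 image yields only (224/32)² + 1 = 50 tokens. A minimal sketch of the likely fix, assuming the image was resized with a 224 px transform instead of the preprocess returned by clip.load ("cat.jpg" is a hypothetical path):

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# clip.load returns the preprocess transform matched to the model;
# for RN50x16 it resizes/center-crops to 384x384, producing the
# (384/32)^2 + 1 = 145 tokens the attention pool expects.
model, preprocess = clip.load("RN50x16", device=device)

# "cat.jpg" is a hypothetical test-image path.
test_image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    output = model.encode_image(test_image)
```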
