Problem in the scale of execution times #2

Open
iago-suarez opened this issue Mar 1, 2021 · 5 comments

Comments

@iago-suarez
Owner

iago-suarez commented Mar 1, 2021

We have found a problem with the scale of the time measurements in the original paper: Suárez, I., Sfeir, G., Buenaposada, J. M., & Baumela, L. (2020). BEBLID: Boosted efficient binary local image descriptor. Pattern Recognition Letters, 133, 366-372. Due to a bug in the experiment source code, the reported times are scaled down by a constant factor of roughly 13×. For example, the real execution time for BEBLID-512 on the Oxford dataset images (sizes between 765×512 and 1000×700) is not 0.21 ms as stated in the paper, but 0.21 ms × 13 = 2.73 ms. The same factor affects all the other descriptors, so the relative comparisons and the conclusions of the paper remain the same.
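For completeness, a minimal sketch of measuring per-image descriptor time in milliseconds (this is not the paper's actual benchmark code; the `time_per_image_ms` helper is hypothetical). A constant-scale error like the ~13× one above typically comes from applying a wrong unit-conversion factor exactly once in code like this:

```python
import time

def time_per_image_ms(fn, images, repeats=5):
    """Mean wall-clock time, in milliseconds, of running fn on one image.

    time.perf_counter() returns seconds; the seconds -> ms conversion
    happens exactly once below. Getting that factor wrong anywhere in
    the pipeline rescales every reported number by a constant, which is
    the kind of bug described in this issue.
    """
    t0 = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            fn(img)
    elapsed_s = time.perf_counter() - t0
    # seconds -> milliseconds, applied once at the end
    return (elapsed_s / (repeats * len(images))) * 1e3
```

Usage would be passing a descriptor-extraction callable and the list of benchmark images, then reporting the returned value directly as milliseconds.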

@xushangnjlh

Hi Suárez,
I also noticed some confusion around the execution times:

  1. In the original BEBLID paper (Tab. 1), BEBLID takes ~10 ms on a desktop CPU, while in the BAD/HashSIFT paper (Table V) it takes 1.56 ms. What explains the difference? Maybe float BEBLID vs. binary BEBLID?
  2. I have tested the OpenCV wrappers of BEBLID and BAD myself, but the gap between ORB and these two is not as significant as in the paper (ORB: 77 ms, BEBLID-256b: 44.28 ms, TEBLID-256b: 44.16 ms). Also, TEBLID is not faster than BEBLID. Am I missing something in my experiments, maybe the OpenCV build configuration?

Thank you in advance!

@iago-suarez
Owner Author

iago-suarez commented Sep 21, 2022

Hi @xushangnjlh ,

I haven't had time to respond to the other issue yet; it has been a busy week 😓.

Regarding execution times, the correct figures are the ones reported in the BAD/HashSIFT paper, since the ones in the BEBLID paper suffer from the aforementioned scale bug.

Despite this, BEBLID's execution time is lower than ORB's because of its parallel execution:

https://github.com/opencv/opencv_contrib/blob/de84cc02a876894a4047ce31f7d9fd179f213e95/modules/xfeatures2d/src/beblid.cpp#L368-L370

The parallel execution should be enabled by default in OpenCV, but the speedup depends on how many cores are available on your machine and how many keypoints you process. We run our experiments with a maximum of 2000 keypoints and images of around 800×800 px (Oxford dataset).
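To illustrate the mechanism, here is a rough standard-library analogue of a `parallel_for_`-style split over keypoint ranges (this is an illustration only, not OpenCV's actual BEBLID code, and `describe_batch` is a placeholder computation): the descriptor loop is divided into disjoint batches processed by worker threads, which is why the speedup saturates with the available cores and the number of keypoints.

```python
from concurrent.futures import ThreadPoolExecutor

def describe_batch(keypoints):
    # Placeholder per-keypoint "descriptor"; stands in for the real
    # per-keypoint sampling and thresholding work.
    return [hash(kp) & 0xFF for kp in keypoints]

def describe_parallel(keypoints, n_workers=4, batch=500):
    """Split the keypoint list into disjoint ranges and describe each
    range on a worker thread, preserving the original order."""
    batches = [keypoints[i:i + batch] for i in range(0, len(keypoints), batch)]
    out = []
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        # ex.map yields results in submission order, so concatenation
        # reproduces the sequential output exactly.
        for part in ex.map(describe_batch, batches):
            out.extend(part)
    return out
```

With only 2000 keypoints per image, each batch is small, so thread start-up and scheduling overheads eat into the theoretical core-count speedup, consistent with the modest gaps @xushangnjlh measured.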

I hope you find this answer helpful.

Best,
Iago.

@Shengnan-Zhu


Hi, have you ever tried HashSIFT with the parallel_for_ option turned OFF? In my experiment, computing the descriptors then costs even more time than OpenCV's SIFT. Is this normal?

@jmbuena
Collaborator

jmbuena commented Dec 1, 2022

Hi Shengnan,

Iago did a very good job extracting only the SIFT descriptor computation from the OpenCV SIFT implementation for HashSIFT. If I remember well, this means that HashSIFT is more efficient than OpenCV's SIFT descriptor when used with keypoint detectors other than SIFT. However, when the SIFT detector and descriptor are used together, the OpenCV implementation is faster because the descriptor reuses results already computed by the detector. Conversely, HashSIFT used with OpenCV's SIFT detector should be slower than OpenCV's SIFT descriptor used with the SIFT detector.
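The cost asymmetry described above can be sketched abstractly. In the toy code below (all names are hypothetical stand-ins, not OpenCV's API), the detector builds an image pyramid as a side effect; a combined detect-and-describe pass hands that pyramid to the descriptor, while a standalone descriptor call has to rebuild it from scratch:

```python
def build_pyramid(img, levels=4):
    """Crude downsampling pyramid; stands in for the detector's
    scale-space work, which is the expensive shared computation."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append([row[::2] for row in pyr[-1][::2]])
    return pyr

def detect(img):
    """Detector: returns dummy keypoints plus the pyramid it built."""
    pyr = build_pyramid(img)
    return [(0, 0)], pyr

def describe(img, keypoints, pyramid=None):
    """Standalone descriptor: must rebuild the pyramid unless the
    caller passes one in."""
    pyr = pyramid if pyramid is not None else build_pyramid(img)
    return [len(pyr)] * len(keypoints)

def detect_and_describe(img):
    """Combined pass: the pyramid is computed once and reused,
    which is why the fused pipeline is faster than detect + describe."""
    kps, pyr = detect(img)
    return kps, describe(img, kps, pyramid=pyr)
```

Under this sketch, pairing a standalone descriptor with a foreign detector pays for `build_pyramid` twice, matching the slowdown discussed in the comment above.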

Is this the case? Are you using HashSIFT with OpenCV's SIFT detector?

Iago, please, correct me if I'm wrong.

@iago-suarez
Owner Author

Hi guys, since this is BEBLID's repo, let's keep the discussion in iago-suarez/efficient-descriptors#3
