
Three questions regarding the code #3

Closed
qiaozhijian opened this issue May 26, 2023 · 2 comments

Comments

@qiaozhijian

Thank you for your excellent work! I have three questions regarding the code.

Firstly, in the code located at this link:

```cpp
wijk += pow(Graph(i, a) * Graph(i, b) * Graph(a, b), 1.0 / 3); // wij + wik
```

it appears that the geometric means of the edge weights are being summed, rather than the weights being multiplied as described in the associated paper. If this is the case, I am curious to know the reason for this change.
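To make the question concrete: the quoted line takes the cube root of the product of a triangle's three edge weights, i.e. it accumulates one geometric mean per triangle (`triangle_weight` below is my name for that expression, not an identifier from the repository):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the quoted expression: the cube root of the product
// of a triangle's three edge weights, i.e. their geometric mean, which the
// repository's loop then accumulates into wijk with +=.
double triangle_weight(double w_ia, double w_ib, double w_ab) {
    return std::pow(w_ia * w_ib * w_ab, 1.0 / 3.0);
}
```

Because the geometric mean of equal weights is that weight itself, `triangle_weight(8, 8, 8)` is 8, not 512; this is the sum-of-means versus product-of-weights distinction the question is about.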

Secondly, in the code located at this link:

```cpp
OTSU = OTSU_thresh(cluster_coefficients);
```

it seems that a histogram is used to organize the scores, and the metric

```cpp
sb = (double)n1 * (double)n2 * pow(m1 - m2, 2);
```

is used to choose the threshold. Is this metric related to any statistical theory? Have you considered using the median instead?
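For context, `n1 * n2 * (m1 - m2)^2` matches the between-class variance maximized by Otsu's method, up to the constant factor `(n1 + n2)^2`, which does not change the argmax. A minimal sketch of Otsu thresholding over a histogram, with hypothetical names (not code from the repository):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of Otsu's method (names hypothetical). For each candidate split t,
// bins [0, t] form class 1 and the rest class 2; sb = n1 * n2 * (m1 - m2)^2
// is the between-class variance scaled by (n1 + n2)^2, and the split
// maximizing it is returned.
int otsu_threshold(const std::vector<int>& hist) {
    int best_t = 0;
    double best_sb = -1.0;
    for (size_t t = 0; t + 1 < hist.size(); ++t) {
        double n1 = 0.0, n2 = 0.0, s1 = 0.0, s2 = 0.0;
        for (size_t b = 0; b < hist.size(); ++b) {
            double c = hist[b];
            if (b <= t) { n1 += c; s1 += c * b; }
            else        { n2 += c; s2 += c * b; }
        }
        if (n1 == 0.0 || n2 == 0.0) continue;  // skip empty classes
        double m1 = s1 / n1, m2 = s2 / n2;     // per-class means
        double sb = n1 * n2 * std::pow(m1 - m2, 2.0);
        if (sb > best_sb) { best_sb = sb; best_t = static_cast<int>(t); }
    }
    return best_t;
}
```

On a clearly bimodal histogram such as `{10, 10, 0, 0, 10, 10}`, the maximizing split falls in the empty gap between the two modes, which is the behavior a median-based cut would not necessarily reproduce.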

Thirdly, in the code located at this link:

```cpp
while (1)
{
    if (f * max(OTSU, total_factor) > cluster_factor[49].score)
    {
        f -= 0.05;
    }
    else
    {
        break;
    }
}
for (int i = 0; i < Graph.rows(); i++)
{
    if (Match_inlier[i] && cluster_factor_bac[i].score > f * max(OTSU, total_factor))
```

I am unsure about the purpose of lines 664-673. Is it possible that the final value of `f * max(OTSU, total_factor)` is very close to `cluster_factor[49].score`? If so, would it be possible to use `cluster_factor[49].score` directly instead? Additionally, this section of code appears to retain only the top 50 vertices, which may result in a very sparse compatibility graph. Could you please provide some insight into the reasoning behind this design choice?
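The effect of the `while` loop can be sketched as a small helper (`relax_factor` and its parameters are my names, and the starting value of `f` is an assumption, not taken from the repository): the factor is lowered in 0.05 steps until the scaled threshold no longer exceeds the 50th-best score, so at least the top 50 vertices pass the subsequent comparison.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the loop's effect. Assuming f starts at 1.0, it is
// decremented by 0.05 until f * thresh <= score50, where thresh stands for
// max(OTSU, total_factor) and score50 for cluster_factor[49].score. The
// f > 0.0 guard is added here so the sketch cannot loop forever.
double relax_factor(double thresh, double score50) {
    double f = 1.0;
    while (f > 0.0 && f * thresh > score50)
        f -= 0.05;
    return f;
}
```

Note the stepping means the final `f * thresh` can land strictly below `score50` (e.g. `relax_factor(1.0, 0.83)` stops at `f = 0.80`, not at `0.83`), so the stepped threshold may keep slightly more than 50 vertices, which is one possible answer to the "use `cluster_factor[49].score` directly" question.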

@qiaozhijian qiaozhijian changed the title Two questions regarding the code Three questions regarding the code May 26, 2023
@zhangxy0517
Owner

Your questions all relate to the dynamic adjustment of the compatibility graph's scale. Suppose the inlier ratio of the input correspondence set is high; the compatibility graph will then be dense, which makes searching for maximal cliques in the whole graph more time-consuming. Based on this, we introduce the clustering coefficient to measure the density of the compatibility graph; please refer to our previous work, *Mutual Voting for Ranking 3D Correspondences*. If the coefficient is large, we reduce the size of the compatibility graph, that is, we retain only the nodes with higher weights and the edges formed by these nodes. Note that this mechanism is not mentioned in our paper, because: 1) it is not our main contribution; and 2) the graph-reduction step only takes effect when the inlier ratio is high.
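The retention step described above can be sketched as follows (all names are hypothetical, not code from the repository): select the k highest-weighted nodes; the reduced compatibility graph is then the subgraph induced by those nodes.

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical helper: return the indices of the k highest-weighted nodes.
// The reduced compatibility graph is the subgraph induced by these indices,
// so maximal-clique search runs on a much smaller graph when the inlier
// ratio (and hence the clustering coefficient) is high.
std::vector<int> top_k_nodes(const std::vector<double>& weights, int k) {
    std::vector<int> idx(weights.size());
    std::iota(idx.begin(), idx.end(), 0);  // 0, 1, ..., n-1
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](int a, int b) { return weights[a] > weights[b]; });
    idx.resize(k);                         // keep only the k best indices
    return idx;
}
```

For example, `top_k_nodes({0.1, 0.9, 0.5, 0.7}, 2)` keeps nodes 1 and 3; any edge touching a dropped node is discarded along with it.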

@qiaozhijian
Author

Great idea. I think I should take the time to read this article carefully.
