pvrcnn_dts performance #8

Open
CBY-9527 opened this issue Aug 5, 2023 · 3 comments

Comments

CBY-9527 commented Aug 5, 2023

Excuse me, I have run into some problems. The performance of the pvrcnn_dts model I reproduced (nuScenes -> KITTI) is only 82.61/66.86 (Mod. BEV/3D AP), which is some way below the 83.9/71.8 reported in the paper. The source-domain model itself should be fine: evaluating it directly on KITTI gives 77.26/63.43. Could you tell me what might be causing the low performance? Is there a problem with the code or the configuration file? In addition, I found that the number of pseudo-labels generated by pvrcnn_dts gradually decreases to 0 as the number of iteration updates increases, which seems abnormal. Have you encountered this in previous experiments?
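(For reference, a minimal diagnostic sketch of the kind of per-round logging that makes this collapse visible early; the `pseudo_labels` structure and the `'gt_boxes'` key below are assumptions for illustration, not the repo's actual data format.)

```python
# Hypothetical sketch: count surviving pseudo-labels after each
# self-training round so a collapse toward zero shows up in the log.
# `pseudo_labels` is assumed to map frame_id -> dict with a 'gt_boxes' array;
# the real DTS code may store pseudo-labels differently.

def count_pseudo_labels(pseudo_labels):
    """Return the total number of pseudo-boxes and the number of non-empty frames."""
    counts = [len(frame['gt_boxes']) for frame in pseudo_labels.values()]
    total = sum(counts)
    non_empty = sum(1 for c in counts if c > 0)
    return total, non_empty

# Usage inside the self-training loop (illustrative only):
# total, non_empty = count_pseudo_labels(pseudo_labels)
# print(f'round {cur_round}: {total} pseudo-boxes in {non_empty}/{len(pseudo_labels)} frames')
```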

WoodwindHu (Owner) commented

I found that the number of pseudo-labels generated by pvrcnn_dts gradually decreases to 0 as the number of iteration updates increases

It seems like the model has collapsed into a trivial state. How about reducing the inter_graph_loss_weight?
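(A minimal sketch, assuming `inter_graph_loss_weight` simply scales one term of the total training objective, which is how such weights usually enter the loss; the function and variable names below are illustrative, not the repo's actual code.)

```python
# Illustrative only: how a weight like inter_graph_loss_weight typically
# combines the detection loss with an auxiliary graph-alignment term.
# Lowering the weight reduces that term's pull on the optimization,
# which is the knob being suggested here.

def total_loss(det_loss, inter_graph_loss, inter_graph_loss_weight=1.0):
    # e.g. try 0.5 or 0.1 instead of the default weight
    return det_loss + inter_graph_loss_weight * inter_graph_loss
```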

WoodwindHu (Owner) commented

Could you tell me what might be causing the low performance? Is there a problem with the code or the configuration file?

There may be some problems with the configuration file; I'll check it in my spare time.

CBY-9527 (Author) commented Sep 7, 2023

I found that the number of pseudo-labels generated by pvrcnn_dts gradually decreases to 0 as the number of iteration updates increases

It seems like the model has collapsed into a trivial state. How about reducing the inter_graph_loss_weight?

Thank you for your reply. I had already tried reducing the inter_graph_loss_weight to 1/10 of its original value, but the self-training performance was even worse.
