Reproduction of QA Task Issues #14
Thank you very much for your response! But I still have some questions. For answer two, does "The reported results are trained with all the 3D-LLM data" mean that when I run bash scripts/opt-1.3b/train.generalist.sh, I only need to use the datasets unified_3dllm_scene_description, unified_3dllm_embodied_dialogue, and unified_3dllm_embodied_planning, and that the rest of the datasets are used only during fine-tuning?
More comments on the above:
Q2 - No, the
Q3 - For the quantitative results in row 2 of Table 8, we naively use all of the object-id annotations for both training and evaluation, since the original annotations select more objects than are related to the question. We have not released that code either. Indeed, the text instructions are required, while the visual prompts are optional and are adopted only in tasks like ScanQA, 3D dense captioning, and 3D open-vocabulary detection.
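To make the "naively use all the object-id annotations" point concrete, here is a minimal sketch of the two strategies. Everything in it is hypothetical (the field names object_ids and question, and the helper visual_prompt_object_ids, are assumptions for illustration, not the repo's actual schema); it only contrasts keeping every annotated object id with filtering to question-relevant ones, which the thread says was not done and for which no code has been released.

```python
# Hypothetical sketch: "naively use all object-id annotations" vs. filtering
# to question-relevant objects. Field and function names are assumptions.

def visual_prompt_object_ids(annotation: dict, filter_to_question: bool = False) -> list[int]:
    """Return the object ids used to build the visual prompt for one sample."""
    all_ids = annotation["object_ids"]  # every object id in the original annotation
    if not filter_to_question:
        # Naive strategy (Table 8, row 2): keep every annotated id, for both
        # training and evaluation, even ids unrelated to the question.
        return all_ids
    # A stricter alternative would keep only question-relevant ids; per the
    # thread, no such filtering (or code for it) has been released.
    raise NotImplementedError("question-relevant filtering is not released")

sample = {"question": "What color is the chair?", "object_ids": [3, 7, 12]}
print(visual_prompt_object_ids(sample))  # -> [3, 7, 12]
```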
OK, thank you for your answer 😊
Hello there! I'm interested in your work, but the results I get when reproducing the paper differ from the reported ones, so I'd like to consult with you.