Question About Perplexity Calculation in "3.2 Rationale-Guided Filtering" and Request for Relevant Code #1
Comments
Thank you for your interest in RAG^2! I will share the exact code I used.
Let me know if you encounter any specific errors!
Thank you so much for your response and for providing the code; it’s been very helpful and has greatly clarified my understanding. I do have a follow-up question regarding the perplexity values. Could you please share the approximate range of perplexity values you obtained? In my own implementation, the perplexity values are quite small, typically between 1.x and 2.x (for example, 1.48). I wonder if this range is correct or if it suggests an error in my calculation. Thanks again for your time and kind explanation.
Yes, the perplexity values can indeed be small, but you can still compute thresholds from the top percentages (e.g., top 5%, 10%, 25%) across the entire training set. Let me know if you have further questions or need clarification!
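For concreteness, here is a minimal sketch of how such percentile cutoffs could be computed over the training-set perplexity scores. The function name, fractions, and sample values are illustrative assumptions, not code from the paper, and the filtering direction should follow the paper's criterion:

```python
import numpy as np

def top_fraction_cutoff(perplexities, fraction):
    """Cutoff such that roughly `fraction` of the scores lie at or above it.

    E.g. fraction=0.05 gives the threshold for the top 5% of perplexities;
    whether "top" means the highest- or lowest-perplexity side is an
    assumption here, so apply the direction used in the paper.
    """
    scores = np.asarray(perplexities, dtype=float)
    return float(np.percentile(scores, 100.0 * (1.0 - fraction)))

# Small worked example with values in the 1.x-2.x range discussed above:
scores = [1.12, 1.48, 1.55, 1.63, 1.71, 1.90, 2.05, 2.30]
for frac in (0.05, 0.10, 0.25):
    print(frac, top_fraction_cutoff(scores, frac))
```

Even when the perplexities cluster in a narrow range, the percentile cutoffs remain well defined.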
Thank you so much for your timely responses and for addressing all my questions. Wishing you all the best in your future work!
Feel free to reach out whenever you need help :)
Dear authors,
First of all, congratulations on having your paper accepted to NAACL 2025!
I have a question regarding the perplexity calculation method mentioned in Section 3.2 ("Rationale-Guided Filtering") of your paper. I’ve tried to manually implement this calculation, but the perplexity values I obtained seem to be incorrect. Could you please help me identify if there’s anything wrong with my implementation? Also, would it be possible for you to release the code for your own implementation of the perplexity calculation? It would be incredibly helpful in understanding and reproducing the results from your paper.
Thank you so much for your time and assistance!
Below is the code I’ve implemented for this purpose:
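For reference, a minimal sketch of one way to compute the perplexity of a rationale conditioned on its prompt with a Hugging Face causal LM; the model name and helper below are illustrative assumptions, not the implementation discussed in this thread:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the generator actually used in the paper may differ.
MODEL_NAME = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def rationale_perplexity(prompt: str, rationale: str) -> float:
    """Perplexity of the rationale tokens conditioned on the prompt.

    Prompt positions are masked with -100 so only the rationale tokens
    contribute to the cross-entropy; exp(mean NLL) is the perplexity.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + rationale, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore the prompt in the loss

    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss
    return torch.exp(loss).item()
```

With self-generated rationales, small values in the 1.x to 2.x range mentioned above are plausible, since a model typically assigns high probability to text close to its own outputs.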