Added Blue Score the respective folders #360
Conversation
for more information, see https://pre-commit.ci
Codecov Report
@@ Coverage Diff @@
## master #360 +/- ##
==========================================
- Coverage 96.45% 96.43% -0.03%
==========================================
Files 113 117 +4
Lines 3691 3726 +35
==========================================
+ Hits 3560 3593 +33
- Misses 131 133 +2
Flags with carried forward coverage won't be shown.
Hi @karthikrangasai,
Looks good so far. Could you also:
- Add corresponding tests / move the ones in tests/functional/test_nlp.py to tests/text/test_bleu.py
- Add / update references in the docs

cc: @SkafteNicki
I have refactored the code so that the class implementation uses the functional implementation of the BLEU score, and I have moved the tests for the functional implementation to the respective directory.

Regarding the tests for the …: one hack I thought we could use is to provide the iterable of strings as …. The other option I see is to implement a separate ….

How do I proceed with the tests for the class-based text metrics?
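The refactor described above, where the class metric only accumulates inputs and delegates the actual computation to the functional implementation, can be sketched roughly as below. This is a simplified stand-in, not the actual torchmetrics BLEU code: `unigram_precision` and `UnigramPrecision` are hypothetical names, and the toy metric counts matched tokens rather than computing real BLEU.

```python
from collections import Counter


def unigram_precision(translate_corpus, reference_corpus):
    """Toy functional metric: fraction of predicted tokens that also
    appear in the reference (a stand-in for the functional bleu_score)."""
    hits, total = 0, 0
    for pred, ref in zip(translate_corpus, reference_corpus):
        ref_counts = Counter(ref)
        for tok in pred:
            if ref_counts[tok] > 0:   # clip matches by reference counts
                ref_counts[tok] -= 1
                hits += 1
            total += 1
    return hits / total if total else 0.0


class UnigramPrecision:
    """Class API that only stores state in update() and delegates the
    math to the functional implementation in compute()."""

    def __init__(self):
        self.preds, self.refs = [], []

    def update(self, pred, ref):
        self.preds.append(pred)
        self.refs.append(ref)

    def compute(self):
        return unigram_precision(self.preds, self.refs)


m = UnigramPrecision()
m.update("the cat sat".split(), "the cat sat".split())  # 3 hits / 3 tokens
m.update("a dog".split(), "the dog ran".split())        # 1 hit  / 2 tokens
print(m.compute())  # -> 0.8
```

The real class in torchmetrics additionally handles synced state across processes, but the delegation pattern is the same.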
for more information, see https://pre-commit.ci
…ikrangasai/metrics into feature/352_add_blue_score
mix of comments
…d tests for class implementation
Mind checking the failing tests?
@@ -281,17 +281,6 @@ ssim [func]
.. autofunction:: torchmetrics.functional.ssim
    :noindex:

***

NLP
shall we call it NLP or Text?
I am not sure about this. NLP also includes speech processing. So, if we are to add those metrics as well then we can call it NLP.
Well, by speech processing you mean conversion from audio > text and back, right?
But then you still have to measure the quality of each independently, as your prediction and target are always either audio or text, right? So we could split NLP into text and audio... 🐰
cc: @SkafteNicki @maximsch2
I prefer text, as the data modality that bleu works on is text, similar to how we have grouped other metrics based on data modality.
Also, this does not matter for end users, as all modular metrics can simply be imported with `from torchmetrics import *` and the functional ones with `from torchmetrics.functional import *`.
@Borda the tests are failing on GitHub only; it says the list type is not hashable. They run fine on my system.
And what is your system? I assume a list shall not be hashable anywhere, as it is mutable.
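For context on the hashability point: in Python, lists are unhashable precisely because they are mutable, whereas immutable tuples hash fine, which is why converting a list to a tuple is the usual fix when something needs to be a dict key or set member. A quick stdlib-only illustration:

```python
# Lists are mutable and define no __hash__, so hashing one raises TypeError.
try:
    hash(["a", "b"])
except TypeError as err:
    print(err)  # -> unhashable type: 'list'

# Tuples are immutable and hashable, so they work as dict keys / set members.
print(hash(("a", "b")) == hash(("a", "b")))  # -> True

cache = {("a", "b"): 1.0}          # common fix: use tuple keys
print(cache[tuple(["a", "b"])])    # -> 1.0
```

This behavior is the same on every platform, so a test that hashes a list should fail locally exactly as it does on CI.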
Yeah, it was my mistake. I was running ….
Cool, mind committing your fix? 🐰
What does this PR do?
Fixes #352.

Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃