Implement Context Matching #2293
Tests to be added...
Awesome :) Thanks for also adding `match_contexts`!
Looks very good already. Looking forward to some tests. Maybe it's better to rename `matching.py` to `context_matching.py`. Otherwise it's too generic. For now `utils` is fine, but I could also imagine the code under `modeling/evaluation`. Don't forget to add labels to the PR. 😉
haystack/utils/matching.py
:param context: The context to match.
:param candidate: The candidate to match the context.
:param min_words: The minimum number of words context and candidate need to have in order to be scored.
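For illustration only, a minimal sketch of a similarity function with these parameters; the `rapidfuzz` scorer and the exact scoring logic are assumptions, not the PR's actual implementation:

```python
from rapidfuzz import fuzz

def calculate_context_similarity(context: str, candidate: str, min_words: int = 25) -> float:
    """Return a 0-100 similarity score, or 0.0 if either text is too short to score."""
    # Count words by whitespace splitting; very short texts are skipped
    # because they produce unreliable fuzzy-match scores.
    if len(context.split()) < min_words or len(candidate.split()) < min_words:
        return 0.0
    # partial_ratio scores the best-matching substring alignment, which suits
    # matching a short context against a longer candidate passage.
    return fuzz.partial_ratio(context.lower(), candidate.lower())
```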
Just wondering whether we want to call this `words` or `tokens`. Could also be `min_seq_len` (minimum number of tokens) in reference to `max_seq_len` of the reader models.
`words` should generally be more understandable than `tokens` for most people. Also, `min_seq_len` refers to a sequence of "whatever", because what you're dealing with (words, wordpieces, bytes, etc.) totally depends on the tokenizer. Here we're actually dealing with words, so I'd leave it as it is.
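To make the distinction concrete: whitespace-split words are tokenizer-independent, while token counts depend entirely on the model's tokenizer. A small illustrative example (not code from this PR):

```python
text = "Context matching compares overlapping passages."
words = text.split()  # 5 words, regardless of any model
print(len(words))     # 5

# A subword tokenizer would yield more units, e.g. with transformers:
# from transformers import AutoTokenizer
# tokens = AutoTokenizer.from_pretrained("bert-base-uncased").tokenize(text)
# len(tokens) > len(words), since rare words get split into wordpieces
```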
@julian-risch @ArzelaAscoIi
Quite complex! Looks good! 🚀
LGTM! 👍 There is just a small typo to be fixed before merging. Further, let's keep an eye on whether `boost_split_overlaps` increases the number of false positive matches. In that case, we might not want to use it by default and instead set `boost_split_overlaps=False`.
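For example, assuming the function is importable from the module path shown earlier in this thread (the review above suggests renaming the file, so the path may change), the boost could be switched off like this:

```python
from haystack.utils.matching import calculate_context_similarity

context = "The quick brown fox jumps over the lazy dog near the river bank every morning."
candidate = "Every morning the quick brown fox jumps over the lazy dog near the river bank."

# Disable the overlap boost explicitly if it causes too many false positives.
score = calculate_context_similarity(context, candidate, min_words=10, boost_split_overlaps=False)
```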
```python
grouped_matches = groupby(group_sorted_matches, key=lambda candidate: candidate.context_id)
for context_id, group in grouped_matches:
    sorted_group = sorted(group, key=lambda candidate: candidate.score, reverse=True)
    match_list = list((candiate_score.candidate_id, candiate_score.score) for candiate_score in sorted_group)
```
Typo in `candiate_score`.

I agree the …
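With the typo fixed, a self-contained sketch of the grouping logic; the `CandidateScore` container is assumed here for illustration. Note that `groupby` only groups consecutive elements, so the input must be pre-sorted by the grouping key:

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class CandidateScore:
    context_id: int
    candidate_id: int
    score: float

matches = [
    CandidateScore(context_id=0, candidate_id=2, score=87.5),
    CandidateScore(context_id=0, candidate_id=1, score=92.0),
    CandidateScore(context_id=1, candidate_id=3, score=78.0),
]

# Sort by context_id first so that groupby sees each group contiguously.
group_sorted_matches = sorted(matches, key=lambda candidate: candidate.context_id)
grouped_matches = groupby(group_sorted_matches, key=lambda candidate: candidate.context_id)
for context_id, group in grouped_matches:
    # Within each context, rank candidates by descending score.
    sorted_group = sorted(group, key=lambda candidate: candidate.score, reverse=True)
    match_list = [(candidate_score.candidate_id, candidate_score.score) for candidate_score in sorted_group]
    print(context_id, match_list)
# 0 [(1, 92.0), (2, 87.5)]
# 1 [(3, 78.0)]
```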
Proposed changes:
- `calculate_context_similarity` and `match_context`

Status (please check what you already did):
- closes #2265
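As a closing illustration, a hedged sketch of how the two proposed functions might be used together; the `(id, text)` candidate format, the `threshold` keyword, and the return shape are assumptions based on the diff and discussion above, not a confirmed API:

```python
from haystack.utils.matching import calculate_context_similarity, match_context

context = " ".join(["Context matching finds overlapping passages across documents."] * 5)
candidates = [
    ("doc-1", context),  # near-duplicate, should score high
    ("doc-2", " ".join(["Completely unrelated text about cooking recipes."] * 5)),
]

# Direct pairwise similarity on a 0-100 scale.
score = calculate_context_similarity(context, candidates[0][1], min_words=25)

# Match one context against many candidates; assumed to return
# (candidate_id, score) pairs for matches above the threshold.
matches = match_context(context, candidates, threshold=65.0, min_words=25)
```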