It's slow to evaluate #12
Comments
Have you solved this problem? I just ran it on the CoNLL-2014 GEC dataset, which has only 1313 sentences; it has been running for more than 5 hours without producing a result.
@kouhonglady
I found that in my case, the "never-ending" computation of the metric was caused by bad predictions in which the same n-gram was repeated many times at the end of a sentence.
In order to find which sentence is causing trouble,
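The comment above is cut off, but one way to locate the offending sentence is to time the scorer on each sentence individually and flag the ones that blow a time budget. A minimal sketch, assuming the single-sentence scoring call is wrapped in a caller-supplied function (the `score_fn` parameter here is a placeholder, not part of m2scorer's API):

```python
import time

def find_slow_sentences(hyps, m2_blocks, score_fn, budget=30.0):
    """Time score_fn on each (hypothesis, gold-annotation block) pair and
    return the indices of pairs whose scoring exceeds `budget` seconds.

    hyps: system output, one sentence per entry.
    m2_blocks: the corresponding blank-line-separated blocks of the gold
    M2 file, in the same order.
    score_fn: stand-in for a single-sentence call into the scorer.
    """
    slow = []
    for i, (hyp, block) in enumerate(zip(hyps, m2_blocks)):
        start = time.perf_counter()
        score_fn(hyp, block)
        elapsed = time.perf_counter() - start
        if elapsed > budget:
            slow.append(i)
    return slow
```

With a sentence index in hand, you can inspect the hypothesis for the repeated-n-gram pattern described above.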
I think the edit lattice of the M2 scorer is a DAG, so it is topologically sortable. Once the graph is topologically sorted, the shortest path can be computed in O(V + E), and the topological sort itself also takes O(V + E), so the total cost is O(V + E). This is faster than the Bellman-Ford algorithm, which takes O(V × E). This could be one solution to the problem.
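The O(V + E) approach described above can be sketched as follows: compute a topological order with Kahn's algorithm, then relax each edge exactly once in that order. This is a generic DAG shortest-path routine, not the m2scorer's actual lattice code:

```python
from collections import defaultdict, deque

def dag_shortest_path(num_vertices, edges, source):
    """Single-source shortest paths in a DAG in O(V + E).

    edges: list of (u, v, weight) tuples; vertices are 0..num_vertices-1.
    """
    adj = defaultdict(list)
    indegree = [0] * num_vertices
    for u, v, w in edges:
        adj[u].append((v, w))
        indegree[v] += 1

    # Kahn's algorithm: repeatedly remove a vertex with indegree 0.
    order = []
    queue = deque(i for i in range(num_vertices) if indegree[i] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # Relax every edge exactly once, in topological order.
    dist = [float("inf")] * num_vertices
    dist[source] = 0
    for u in order:
        if dist[u] == float("inf"):
            continue  # unreachable from source
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Unlike Bellman-Ford, which re-relaxes all edges up to V − 1 times, each edge here is relaxed once, which is where the O(V + E) bound comes from.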
It seems that transitive_arcs() in levenshtein.py is very time-consuming. The three nested for loops that add transitive arcs could be replaced with a more efficient algorithm.
As nymwa said, in this case the Bellman-Ford algorithm seems to be too slow. Please let me know if you need me to delete my repository.
Hi,
Sometimes it is very slow to evaluate using m2scorer. How can this be fixed? Also, could I evaluate the scores for each error type separately? How can I achieve this? Thank you very much.
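On the second question, one common workaround is to filter the gold M2 file down to a single error type and score against each filtered file in turn. A minimal sketch of the filtering step, assuming the standard M2 layout where annotation lines look like `A start end|||TYPE|||correction|||REQUIRED|||-NONE-|||annotator` (the type tags such as `Vform` and `ArtOrDet` are from the CoNLL-2014 tag set):

```python
def filter_m2_by_type(m2_text, error_type):
    """Return an M2 string keeping only 'A' annotation lines whose type
    field equals error_type. Every 'S' source line is kept, even when a
    sentence ends up with no edits of that type."""
    out = []
    for block in m2_text.strip().split("\n\n"):
        lines = block.split("\n")
        kept = [lines[0]]  # the 'S ...' source line
        for line in lines[1:]:
            if line.startswith("A ") and line.split("|||")[1] == error_type:
                kept.append(line)
        out.append("\n".join(kept))
    return "\n\n".join(out) + "\n"
```

Note that scoring per type this way treats edits of other types as not required, which is an approximation; tools like ERRANT offer built-in per-type evaluation if a more principled breakdown is needed.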