While our results using ConceptNet (+ CoreNLP) are still OK, the data seems to be noisier, resulting in a drop in supertagging accuracy and LAS.
What ways are there to improve this?
Can we log ConceptNet calls as (input, output) pairs, check whether frequent calls have unexpected or suboptimal outputs, and fix the most frequent problematic ones with handwritten rules?
Is that feasible in a reasonable amount of time?
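One way to make the counting concrete: a minimal sketch, assuming we instrument the lookup wrapper to record (input, output) pairs. All names and the example data here are hypothetical, not the project's actual API:

```python
from collections import Counter

# Hypothetical log of (input, output) pairs observed from ConceptNet lookups.
# In practice this would be collected by instrumenting the lookup wrapper.
call_log = [
    ("run", "running"),
    ("run", "running"),
    ("run", "ran"),
    ("go", "went"),
    ("go", "going"),
    ("go", "went"),
]

# Count how often each (input, output) pair occurs.
pair_counts = Counter(call_log)

# Inspect the most frequent calls to spot suspicious outputs by hand.
most_frequent = pair_counts.most_common(3)

# Handwritten override rules for pairs judged suboptimal after inspection
# (the rule below is a placeholder, not a real correction).
overrides = {("go", "going"): "went"}

def corrected_output(inp, out):
    """Apply a handwritten rule if one exists, else keep ConceptNet's output."""
    return overrides.get((inp, out), out)
```

Since the effort concentrates on the head of the frequency distribution, a handful of rules could cover a large share of calls, which is what would make this feasible in limited time.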
Counting the calls is easy, but the problem is that ConceptNet only provides candidate options to the aligner. Which options the aligner ends up choosing is much harder to track, and I don't have a clear idea how to do that.