How to work with result (.pk) file #4
I am facing the same issue. How do we use the "...wmd_d.pk" file?
Hi, I have already found a solution for this topic. The Python code below can be used to transform the WMD result file into a CSV-structured file. Just pass the name of the result file as the first argument and the new filename for your CSV file as the second argument. Otherwise you can edit the code: in the for-loops you iterate through the result matrix. Hope this helps.

```python
import pdb, sys, numpy as np, pickle, os
load_file = sys.argv[1]
fileName = sys.argv[2]
def main():
with open(load_file) as f:
WMD_D = pickle.load(f)
with open(fileName,"a") as myfile:
for valx in WMD_D:
for valy in valx:
myfile.write('%f;' % (valy))
myfile.write('\n')
if __name__ == "__main__":
main()
```
Hi @asptutorial2016, could you create a pull request please?
Hi @asptutorial2016, can you please explain the code?
@budhiraja It just loads the pickle file.
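For anyone unfamiliar with pickle, "loading the file" really is a single call; here is a minimal sketch, where the filename is only an example:

```python
import pickle

# Pickle files are binary, so open with "rb" (required on Python 3).
with open("results_wmd_d.pk", "rb") as f:  # example filename
    WMD_D = pickle.load(f)

# WMD_D is expected to be the matrix of pairwise WMD values.
print(type(WMD_D))
```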
BTW I just fixed the indentation here:

```python
import sys, pickle

load_file = sys.argv[1]   # path to the pickled WMD result file
fileName = sys.argv[2]    # path of the CSV file to write

def main():
    # Pickle files are binary, so "rb" is required on Python 3.
    with open(load_file, "rb") as f:
        WMD_D = pickle.load(f)
    # Append one line per row of the matrix, values separated by ";".
    with open(fileName, "a") as myfile:
        for valx in WMD_D:
            for valy in valx:
                myfile.write('%f;' % valy)
            myfile.write('\n')

if __name__ == '__main__':
    main()
```

and you run it like this:
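For example, if the script above were saved as `pk_to_csv.py` (the name is just a placeholder), the call would be `python pk_to_csv.py results_wmd_d.pk results.csv`, where the first argument is the pickle result file and the second is the CSV file to create.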
That said, I'm still not sure about this output, since I would expect one WMD value for each text in the dataset, whereas the script writes out a full matrix: one row per document, with semicolon-separated distance values.
Hi,
first of all, thank you for the great work and the nice implementation!
The tool works fine for me and I will use it for document comparison in the social media context. Can you please give me some advice on how to work with the resulting "...wmd_d.pk" file? At first I thought the result would be a text file with a readable matrix in it, but now I think I need some additional software?
Thank you very much!
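If all you need is the matrix as a CSV, here is a compact sketch of an alternative to the script above; it assumes the pickle holds a 2-D array-like of distances, and the filenames are only examples:

```python
import pickle
import numpy as np

# Pickle files are binary, so open with "rb".
with open("results_wmd_d.pk", "rb") as f:  # example filename
    WMD_D = pickle.load(f)

# Write one row per document, values separated by ";",
# matching the format produced by the script above.
np.savetxt("results.csv", np.asarray(WMD_D), fmt="%f", delimiter=";")
```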