Identify and document scalability benchmarks #74
Just wondering if there are any updates on this issue; thank you.
Installed the latest version of Empress and ran it, within a QIIME 2 2020.2 conda environment, on one of the large trees generated in Qiita, using the mapping file, feature table, and taxonomies from the moving pictures dataset (only one dataset). Note that this tree was created over a year ago (we could generate even larger ones today); it is the 100 bp fragment insertion tree and has ~8.8M tips:

```python
In [1]: from skbio import TreeNode

In [2]: tree = TreeNode.read('../insertion_tree.relabelled.tre')

In [3]: print(tree.count(tips=True))
8830174
```

I generated the no-taxonomy, GG, and Silva Empress `.qzv`s to test. Each takes ~3 hrs to generate, and generation works just fine (no error messages). However, when I try to open them in https://view.qiime2.org/, the browser fails with:

Anyway, here are the testing files. cc: @ElDeveloper
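As a quick sanity check, a tip count like the one above can also be obtained without loading the full tree into scikit-bio, by scanning the Newick text directly. This is an illustrative sketch only; the `count_tips` helper below is hypothetical (not part of Empress or scikit-bio) and assumes simple unquoted labels with no commas inside label names or comments:

```python
def count_tips(newick: str) -> int:
    """Count tips (leaves) in a Newick string by a single character scan.

    A tip label sits directly between '(' or ',' and the next ',' or ')',
    so we count every ',' or ')' whose previous structural character was
    '(' or ','. Assumes unquoted labels (hypothetical helper, for
    sanity-checking only).
    """
    tips = 0
    prev = None
    for ch in newick:
        if ch in ",)" and prev in "(,":
            tips += 1  # a tip label (possibly empty) just ended here
        if ch in "(,)":
            prev = ch  # remember the last structural character seen
    return tips
```

Scanning the file line by line this way avoids building an 8.8M-node tree in memory just to verify the tip count.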
@antgonza I'm looking into this.
Once we identify upper bounds for what sorts of data sizes Empress can comfortably visualize, we should document this clearly in the README, so that e.g. users with billion-tip trees know that they probably want to consult another tool and/or a priest ._.
Empress needs to be run against a huge tree (> 1 million tips)
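For benchmarking at controlled sizes (rather than depending on a specific Qiita artifact), one option is to synthesize random trees of a chosen tip count and feed the resulting Newick file to `TreeNode.read` / Empress. A minimal sketch, assuming a random binary topology is acceptable for scalability testing; the `random_newick` helper is hypothetical, not part of Empress:

```python
import random

def random_newick(n_tips: int, seed: int = 42) -> str:
    """Build a random binary tree over n_tips leaves as a Newick string.

    Repeatedly joins two randomly chosen subtrees into a new clade until
    one tree remains. Hypothetical helper for benchmark input generation;
    list pops make this O(n^2), so for multi-million-tip trees a more
    efficient structure would be needed.
    """
    rng = random.Random(seed)  # fixed seed for reproducible benchmarks
    nodes = [f"t{i}" for i in range(n_tips)]
    while len(nodes) > 1:
        a = nodes.pop(rng.randrange(len(nodes)))
        b = nodes.pop(rng.randrange(len(nodes)))
        nodes.append(f"({a},{b})")
    return nodes[0] + ";"
```

Writing `random_newick(1_000_000)` to a `.tre` file would give a reproducible > 1M-tip input for timing Empress's `.qzv` generation and browser rendering separately.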