Parallelization #10
Comments
Hi Pablo,

If you're on a compute cluster, the best way to speed this up is to use multiple nodes. So if you use the qsub system, this could be added to the maginator command:

Can you see in the logs how far along in the process MAGinator is? With 5 samples it shouldn't take that long.
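The qsub addition itself isn't shown above. As a hedged sketch only, one generic way to hand the whole run to a PBS/Torque queue is a wrapper job script like the following; the resource string, walltime, and script name are my assumptions, not MAGinator options, and MAGinator's own cluster mode would instead let each workflow rule be submitted as a separate job:

```shell
# Hypothetical PBS job script (illustrative, not a MAGinator option):
# request the node described in the issue (40 cores, 180 GB) and run the
# same maginator command inside the job.
cat > run_maginator.pbs <<'EOF'
#!/bin/bash
#PBS -l nodes=1:ppn=40,mem=180gb,walltime=48:00:00
cd "$PBS_O_WORKDIR"
maginator -v trial/maginator_clusters.tsv \
          -r trial/maginator_reads.csv \
          -c trial/maginator_contigs.fasta \
          -o trial/maginator \
          -g /home/people/pablop/workdir/databases/gtdb_release207_v2
EOF
# Submit only if qsub exists on this machine; otherwise just report.
command -v qsub >/dev/null && qsub run_maginator.pbs \
  || echo "qsub not found; submit run_maginator.pbs manually"
```

This runs the pipeline as a single job on one node; to spread individual rules across many nodes you would use the workflow manager's cluster-submission options rather than a single wrapper script.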
Hi Russel,

It was my bad that I didn't include those options at the beginning. However, I have added them now, and it seems to have gotten stuck after the refinement step. This is the output from MAGinator's log:

[2023-11-14 11:56:38] INFO: Running MAGinator version 0.1.18
Can you post the log for the signature_gene workflow?
I have not been able to find that log. Should it be in the logs directory created by MAGinator? I have been looking at your code, and the process stops at the refinement rule. I get the output file from that step, and the logs are there, indicating that there was no error, but the next rule (gene_counts) is never executed.
Hi, I would like to run MAGinator on a fairly large data set: around 420 samples, with 60 bins per sample on average, and preprocessed reads of around 6 GB per sample.
I have been running a subset of the samples (5) as a trial run on a cluster node (40 ppn and 180 GB), and it has been running for more than 24 hours already.
Is there any possibility to run MAGinator in parallel to speed up the process? I am running the following command:
maginator -v trial/maginator_clusters.tsv \
  -r trial/maginator_reads.csv \
  -c trial/maginator_contigs.fasta \
  -o trial/maginator \
  -g /home/people/pablop/workdir/databases/gtdb_release207_v2
Thank you,
Pablo