I know Porechop isn't being maintained, but I thought I'd document this here.
Porechop loads the entire FASTQ (or other sequence) file into RAM when processing. This is probably fine for MinION runs, where the FAST5s and corresponding FASTQs are broken up into blocks of at most 4,000 reads (averaging somewhere around 120 MB per file). However, for PromethION, the data being provided (and presumably coming out of the basecaller) arrives as ~60 GB .fastq.gz files. Watching top, processing one of these used ~150 GB of RAM.
This is fine for me, since I'm working on a server with 800 GB of RAM, but it may not be ideal for all users.
A workaround for PromethION would be to manually chunk up the input files and run Porechop separately on each chunk, which mimics the current format of MinION data (a sketch of this is below). If anyone adopts Porechop (or it gets rewritten), it might be a good idea to add chunking as an option (e.g. process reads in 4,000-read blocks).
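For anyone hitting the same issue, here is a minimal sketch of that workaround: split a large gzipped FASTQ into 4,000-read pieces and run Porechop on each piece, so only one chunk's worth of reads needs to fit in RAM at a time. The chunk size, file names, and output paths are illustrative assumptions, and it assumes Porechop's usual `-i`/`-o` invocation.

```python
# Sketch of the chunk-and-run workaround; names and chunk size are illustrative.
import gzip
import itertools
import subprocess

CHUNK_READS = 4000      # mimic MinION's ~4,000-read blocks (assumption)
LINES_PER_READ = 4      # FASTQ record: header, sequence, '+', quality

def split_fastq_gz(path, prefix):
    """Write consecutive CHUNK_READS-sized pieces of a gzipped FASTQ file."""
    chunk_paths = []
    with gzip.open(path, "rt") as handle:
        for index in itertools.count():
            lines = list(itertools.islice(handle, CHUNK_READS * LINES_PER_READ))
            if not lines:
                break
            chunk_path = f"{prefix}_{index:05d}.fastq"
            with open(chunk_path, "w") as out:
                out.writelines(lines)
            chunk_paths.append(chunk_path)
    return chunk_paths

def run_porechop(chunks):
    """Run Porechop on each chunk separately, writing a trimmed FASTQ per chunk."""
    for chunk in chunks:
        trimmed = chunk.replace(".fastq", ".trimmed.fastq")
        subprocess.run(["porechop", "-i", chunk, "-o", trimmed], check=True)

if __name__ == "__main__":
    pieces = split_fastq_gz("promethion_run.fastq.gz", "chunk")   # hypothetical input
    run_porechop(pieces)
```

The trimmed chunk outputs can then be concatenated back into a single file. The same split could of course be done with standard command-line tools instead; the point is just to keep each Porechop invocation down to a MinION-sized block of reads.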