There are cases where the CPU memory needed to store the full posterior is quite large. For example, a deeply sequenced, multiplexed multi-donor sample (overloaded), run with PIP-seq, can generate roughly 600+ ambient RNA counts per droplet.
The resulting posterior is very large, since the count matrix has many nonzero entries, and many of those entries are not small.
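For a rough sense of scale, here is a back-of-the-envelope estimate (the numbers and the assumption that a log-probability is stored per candidate count value per nonzero entry are illustrative, not taken from the actual implementation):

```python
# Back-of-the-envelope memory estimate (illustrative assumptions only):
# ~50k droplets x ~600 nonzero ambient entries each, with a sweep over
# several candidate count values per entry, stored as float64 log-probs.
n_droplets = 50_000
nonzeros_per_droplet = 600
values_per_entry = 10   # hypothetical posterior support per entry
bytes_per_value = 8     # float64

total_bytes = n_droplets * nonzeros_per_droplet * values_per_entry * bytes_per_value
print(f"{total_bytes / 1e9:.1f} GB")  # ~2.4 GB for the log-probs alone,
# before index arrays -- and larger experiments scale this up quickly.
```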
The question is: can we write the posterior to disk incrementally, rather than holding it all in memory? Something like pandas' appendable HDF5 "table" format:
https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#table-format
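As a rough sketch of that idea (requires PyTables; the chunk layout and column names here are hypothetical, not the actual posterior data structure), one could append one batch of rows at a time so the full posterior never sits in CPU memory at once:

```python
import numpy as np
import pandas as pd


def write_posterior_incrementally(chunks, path: str, key: str = "posterior"):
    """Append posterior chunks to an HDF5 file in 'table' format.

    `chunks` is assumed to be an iterable of DataFrames with columns
    (barcode, gene, count, log_prob) -- hypothetical names, to be
    adjusted to whatever the real posterior uses.
    """
    with pd.HDFStore(path, mode="w", complevel=5, complib="blosc") as store:
        for chunk in chunks:
            # format='table' supports appending, and data_columns allows
            # later querying with store.select(key, where=...) without
            # loading the whole posterior back into memory.
            store.append(key, chunk, format="table",
                         data_columns=["barcode", "gene"])


def toy_chunks(n_batches: int = 10, rows_per_batch: int = 1000):
    """Generate synthetic posterior chunks, one batch at a time."""
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        yield pd.DataFrame({
            "barcode": rng.integers(0, 50_000, rows_per_batch),
            "gene": rng.integers(0, 30_000, rows_per_batch),
            "count": rng.integers(1, 600, rows_per_batch),
            "log_prob": rng.standard_normal(rows_per_batch),
        })


write_posterior_incrementally(toy_chunks(), "posterior.h5")
```

The appeal of the table format here is that peak memory would be bounded by one chunk rather than the whole posterior, and downstream code could read back only the barcodes or genes it needs.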