@talegari Thank you for your interest in peakRAM! Admittedly, anything to do with parallelizing R really pushes the limits of my knowledge. However, I'm happy to give this problem a think-over. Do you happen to have a simple piece of reproducible code I could try out?
I wonder whether something like the following pseudo-code could work. I assume the garbage collector will detect RAM use regardless of how many R processes the work is distributed across? Maybe not, though...
run_cluster_job <- function() {
  cl <- parallel::makeCluster(2)                         # make cluster
  on.exit(parallel::stopCluster(cl))                     # close cluster when done
  parallel::parLapply(cl, 1:4, function(i) rnorm(1e6))   # deliver jobs across multiple cores
}
peakRAM::peakRAM(run_cluster_job())
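For what it's worth, my understanding is that peakRAM watches memory via gc() in the calling process, so with a SOCK (or FORK) cluster it would only ever see what the master allocates, not what the worker processes allocate. A possible workaround, just a rough sketch assuming peakRAM is installed on every worker and assuming I have the Peak_RAM_Used_MiB column name right, is to measure peak RAM inside each worker and aggregate:

library(parallel)
cl <- makeCluster(2)
per_worker <- parLapply(cl, 1:2, function(i) {
  # each worker measures its own allocation; rnorm(1e7) is a placeholder workload
  peakRAM::peakRAM(rnorm(1e7))$Peak_RAM_Used_MiB
})
stopCluster(cl)
sum(unlist(per_worker))  # approximate combined peak across workers, in MiB

Summing the per-worker peaks will overstate the true simultaneous peak if the workers don't hit their maxima at the same moment, but it at least captures memory that the master's gc() never sees.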
I notice that my R code actually uses almost 15 GB of RAM according to htop, whereas peakRAM records only about 300 MB. The code block uses several functions from BiocParallel. I'm still checking whether the discrepancy is caused by the parallel methods, or simply because some of the memory use cannot be monitored by peakRAM.
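One thing I plan to try, in case it helps: if BiocParallel is spawning separate R processes, the 15 GB seen in htop would live in those workers, which the master's gc() (and therefore peakRAM) cannot observe. A rough, untested sketch of having each worker report its own peak (SnowParam and the toy workload are stand-ins for my actual setup):

library(BiocParallel)
param <- SnowParam(workers = 4)      # or MulticoreParam() on Linux/macOS
per_worker <- bplapply(seq_len(4), function(i) {
  # placeholder for one chunk of the real workload
  peakRAM::peakRAM(rnorm(1e7))$Peak_RAM_Used_MiB
}, BPPARAM = param)
sum(unlist(per_worker))  # rough combined peak across workers, in MiB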
@tpq
Thanks for a handy package. I had been using a rough utility of mine for the same purpose: https://gist.githubusercontent.com/talegari/ad06da7795b8771e2e152f304ca00f6f/raw
Do you have any idea how to compute the peak RAM when multiple cores are used, i.e. when a SOCK/FORK cluster is instantiated?