# Tips for memory efficient processing and speed gains
- Load the data in single precision format. This requires half the memory of double precision and can also be faster for many operations (e.g., FFT). The only conflict arises when the data matrix `Y` is multiplied with the sparse matrix of spatial components `A`: the code then needs to either cast `Y` to double precision (the default choice) or convert `A` to a full matrix. The default choice can be changed by setting `options.full_A = 1`.
- Use only a subset of the data when determining the noise values through `preprocess_data.m` and `get_noise_fft.m`. The noise estimation requires the FFT of the whole dataset along the temporal dimension, which can be inefficient and unnecessary for long datasets. To make this more efficient, the FFT is taken only on the first `N` timesteps, where `N` is defined through `options.max_timesteps`. The default value is 3000 timesteps, but other values can also be used.
- Use neural activity deconvolution only at the last updating iteration. During the first iterations of `update_temporal_components.m` we can set `p = 0`, which ignores the indicator dynamics and therefore updates the temporal components very fast, since no deconvolution is performed. At the last iteration of `update_temporal_components.m` we can set `p` back to its original value to also perform deconvolution.
- Downsampling is your friend. Spatially, a field of view where neurons have a typical diameter of more than 10 pixels can be downsampled by a factor of 2. Temporally, traces can be downsampled to an effective rate of ~5 Hz or even lower. Note that downsampling is used only during initialization, to boost SNR and increase efficiency by reducing the size of the dataset. Before `initialize_components.m` returns, all the estimates are upsampled back to the original spatial and temporal resolution.
- Divide the field of view into spatially overlapping patches and process them in parallel using memory mapping. More information can be found here.
- For long datasets, consider finding the matrix of spatial components `A` and background `b` through `update_spatial_components.m` using only a (contiguous) smaller time interval of the data. The underlying assumption is that during this interval all the sources are active and their locations do not drift significantly. Alternatively, `A` and `b` can be found by operating on temporally downsampled data; in this case, new noise values need to be estimated for the downsampled data. Once `A` and `b` have been found, `update_temporal_components.m` can be run without the need to specify initial values for `Cin` and `fin`. They can instead be left empty.
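To see why the single-precision tip halves memory, consider the bytes per sample. The toolbox itself is MATLAB; this is a small Python sketch using the standard `array` module, with an illustrative frame size:

```python
from array import array

n = 512 * 512  # pixels in one frame (illustrative size)

# float32 vs. float64 storage for one frame of zeros
frame_single = array('f', bytes(4 * n))  # single precision: 4 bytes/sample
frame_double = array('d', bytes(8 * n))  # double precision: 8 bytes/sample

bytes_single = frame_single.itemsize * len(frame_single)
bytes_double = frame_double.itemsize * len(frame_double)
# double precision uses exactly twice the memory of single precision
```

Over thousands of frames this factor of two is often the difference between fitting the movie in RAM or not.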
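The noise-subset tip works because, for stationary noise, the high-frequency part of the power spectrum looks the same on a prefix of the movie as on the whole movie. Below is a pure-Python sketch (not the toolbox's `get_noise_fft.m` code; signal, seed, and the 3000-frame analogue `max_timesteps` value are illustrative) that estimates the noise standard deviation from the upper half of the spectrum of only the first frames:

```python
import cmath
import math
import random

random.seed(0)
T_full, max_timesteps, sigma = 2000, 256, 0.5

# Synthetic trace: slow "calcium-like" component plus white noise
trace = [math.sin(2 * math.pi * t / 500) + random.gauss(0.0, sigma)
         for t in range(T_full)]

def noise_std(x):
    """Estimate noise std from the high-frequency half of the power spectrum."""
    T = len(x)
    hi = range(T // 4, T // 2)  # frequencies between 0.25 and 0.5 of fs
    psd = []
    for k in hi:
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / T) for t in range(T))
        psd.append(abs(X) ** 2 / T)  # Parseval-normalized periodogram
    return math.sqrt(sum(psd) / len(psd))

# Estimate on the first `max_timesteps` frames only, as the tip suggests
est = noise_std(trace[:max_timesteps])
```

The slow signal lives at low frequencies, so the high-frequency average recovers roughly the injected noise level while touching only a short prefix of the data.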
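The `p = 0` tip skips deconvolution because with no autoregressive dynamics the temporal update is a plain (fast) regression. For intuition, in the noiseless AR(1) case (`p = 1`) deconvolution amounts to inverting the filter: `s_t = c_t - g*c_{t-1}`. A minimal Python sketch with illustrative values (not the toolbox's constrained deconvolution solver):

```python
g = 0.9          # AR(1) decay of the calcium indicator (illustrative)
spikes = [0.0] * 50
spikes[10], spikes[30] = 1.0, 0.5

# Forward model: c_t = g * c_{t-1} + s_t
c, prev = [], 0.0
for s in spikes:
    prev = g * prev + s
    c.append(prev)

# Deconvolution with p = 1: invert the AR(1) dynamics
recovered = [c[0]] + [c[t] - g * c[t - 1] for t in range(1, len(c))]
```

In practice the data are noisy, so the final `update_temporal_components.m` pass solves a constrained version of this inversion; the early `p = 0` passes simply skip it.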
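The downsampling used during initialization is plain block averaging in space and time. Here is a pure-Python sketch on a tiny synthetic movie (sizes and factors are illustrative; the toolbox performs this in MATLAB inside `initialize_components.m` and upsamples afterwards):

```python
d1, d2, T = 8, 8, 12  # field of view and number of frames (illustrative)
movie = [[[float(y + x + t) for x in range(d2)] for y in range(d1)]
         for t in range(T)]

tsub, ssub = 4, 2  # temporal and spatial downsampling factors

# Temporal: average consecutive groups of `tsub` frames
movie_t = [[[sum(movie[t + dt][y][x] for dt in range(tsub)) / tsub
             for x in range(d2)] for y in range(d1)]
           for t in range(0, T, tsub)]

# Spatial: average non-overlapping ssub x ssub pixel blocks
movie_ts = [[[sum(f[y + dy][x + dx] for dy in range(ssub) for dx in range(ssub))
              / ssub ** 2
              for x in range(0, d2, ssub)] for y in range(0, d1, ssub)]
            for f in movie_t]
```

The downsampled movie has `tsub * ssub**2` times fewer samples, which is where both the SNR boost and the speedup come from.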
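For the patch-based tip, the key bookkeeping is generating overlapping patch coordinates that tile the field of view. The helper below is hypothetical (the toolbox does its patch processing in MATLAB with memory mapping); it only illustrates how start/stop indices with a fixed overlap can be computed along one dimension:

```python
def patch_grid(dim, size, overlap):
    """Start/stop indices of overlapping patches covering [0, dim).

    Hypothetical helper, for illustration only.
    """
    step = size - overlap
    starts = list(range(0, max(dim - overlap, 1), step))
    return [(s, min(s + size, dim)) for s in starts]

# e.g. a 512-pixel axis split into 64-pixel patches overlapping by 16
patches = patch_grid(512, 64, 16)
```

Each patch can then be processed independently (and in parallel), with the overlap used to stitch components consistently at patch borders.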