What is the most insightful large-scale spike analysis that could be run on the 1 million cells of multi_area_model? #526
-
Hi @Tijl and @aMarcireau,

This spike train is the output of 1 million cells from a scaled-down version of the multi-scale spiking network model of macaque visual cortex (full scale: 100 million cells).

Goal: do Representational Dissimilarity Analysis between the simulation outputs and fMRI voxel activations in V1, in a virtual experiment where both networks are stimulated by the same stimulus.

Problem 1: the Kreuz SPIKE-distance is the most natural measure of spike-train similarity, but computing it between every possible pairing of cells in a 1-million-neuron simulation means filling a matrix with $(10^{6})^{2} = 10^{12}$ elements, a calculation that takes 4 days to compute and won't be easily reproducible.

Problem 2: although PCA is a good dimensionality-reduction technique, finding a projection space that maximises variance in the data will upward-bias the measured dissimilarity: the spike trains returned by a PCA fitting routine are the ones that represent high variance, and each comes from an orthogonal projection vector, so each spike train will look very dissimilar to the others.

Alternative idea: perform n = 10 random subsamples of ncell = 250 from the 1-million-neuron spike train, compute the Kreuz spike-distance matrix on the $250^{2}$-element matrix 10 times, and use the 10 samples to do a hypothesis test about the dissimilarity.

Other issues:
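A minimal sketch of the subsampling idea in Python. Assumptions: a van Rossum distance (closed-form, exponential kernel) stands in for the Kreuz SPIKE-distance, the spike data are toy random spike-time arrays rather than the real simulation output, and `n_sub`/`n_rep` are reduced from the proposed 250/10 so the sketch runs quickly:

```python
import numpy as np

rng = np.random.default_rng(0)

def van_rossum_sq(t1, t2, tau=0.02):
    """Squared van Rossum distance between two spike-time arrays,
    using the closed form for an exponential (causal) kernel."""
    def k(a, b):
        if len(a) == 0 or len(b) == 0:
            return 0.0
        return np.exp(-np.abs(a[:, None] - b[None, :]) / tau).sum()
    return k(t1, t1) + k(t2, t2) - 2.0 * k(t1, t2)

# Toy data: one sorted spike-time array per cell (stand-in for the 1e6-cell output).
n_cells = 2000
spikes = [np.sort(rng.uniform(0.0, 1.0, rng.integers(0, 20)))
          for _ in range(n_cells)]

n_rep, n_sub = 5, 50   # the proposal uses n_rep=10, n_sub=250
means = []
for _ in range(n_rep):
    idx = rng.choice(n_cells, size=n_sub, replace=False)
    d = np.zeros((n_sub, n_sub))
    # Fill only the upper triangle; the distance matrix is symmetric.
    for i in range(n_sub):
        for j in range(i + 1, n_sub):
            d[i, j] = d[j, i] = van_rossum_sq(spikes[idx[i]], spikes[idx[j]])
    means.append(d[np.triu_indices(n_sub, k=1)].mean())

# The n_rep per-subsample means are the samples for the hypothesis test.
print(np.mean(means), np.std(means))
```

Each repeat costs $\binom{250}{2} \approx 3.1\times10^{4}$ pair evaluations instead of $10^{12}$, so the 10-repeat version is trivially cheap and reproducible with a fixed RNG seed.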
Interesting but unrelated: I can divide the raster into chunks of 750 cells and sort the rows within each chunk so that coincident events form larger visible clusters.
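A sketch of that chunk-and-sort idea, under the assumption that sorting each 750-cell chunk by time of first spike is an acceptable ordering (the original comment doesn't specify the sort key), on a toy binary raster:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary raster: rows = cells, cols = time bins (stand-in for the real raster).
n_cells, n_bins, chunk = 3000, 200, 750
raster = rng.random((n_cells, n_bins)) < 0.02

sorted_chunks = []
for start in range(0, n_cells, chunk):
    block = raster[start:start + chunk]
    # Sort each chunk's rows by time of first spike; silent cells go last.
    first = np.where(block.any(axis=1), block.argmax(axis=1), n_bins)
    sorted_chunks.append(block[np.argsort(first, kind="stable")])

sorted_raster = np.vstack(sorted_chunks)
print(sorted_raster.shape)
```

Because rows are only permuted within chunks, the total spike count and per-chunk membership are unchanged; only the visual clustering of coincident events improves.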
-
@Tijl,
-
https://github.com/BrainsOnBoard/procedural_paper
Use Julia packages to analyse the 1 million cells of the multi_area_model spiking data at large scale:
- https://github.com/lindermanlab/PPSeq.jl
- https://github.com/dpshorten/CoTETE.jl
- OnlineStats.jl
- https://github.com/JuliaNeuroscience/SpikeSynchrony.jl
This is a technical area with several open questions:
- Does this choice of analysis algorithm scale well?
- Can the algorithms be re-written to work in an online fashion, i.e. without storing the whole data set in memory, operating on chunks in a rotating-buffer manner?
- Can GPU acceleration be used? (Julia provides nice GPU libraries, e.g. CuArrays, and supports writing custom kernels.)
- If not, does operating on chunks of the data still provide valuable approximations?
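To illustrate the online/chunked idea, here is a sketch (in Python rather than Julia, and with Welford's online mean/variance as a deliberately simple stand-in for a real spike-train statistic): per-cell statistics are updated one time-window chunk at a time, so only one chunk is ever in memory. The `read_chunk` function is a hypothetical placeholder for streaming spike counts from disk:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 10_000

# Running per-cell spike-count statistics, updated one chunk at a time.
count = 0
mean = np.zeros(n_cells)
m2 = np.zeros(n_cells)

def read_chunk():
    # Stand-in for streaming one time window of per-cell spike counts from disk.
    return rng.poisson(3.0, size=n_cells).astype(float)

for _ in range(100):            # 100 time windows, never all in memory at once
    x = read_chunk()
    count += 1
    delta = x - mean            # Welford's update: numerically stable online mean/var
    mean += delta / count
    m2 += delta * (x - mean)

var = m2 / (count - 1)
print(mean[:3], var[:3])
```

OnlineStats.jl provides exactly this kind of single-pass, fixed-memory fitting in Julia, and the same chunked access pattern is what would let a GPU kernel process the raster tile by tile.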
russelljjarvis/PPSeq.jl#1