Optimized SPADE analysis and SPADE tutorial (#419)
* fixed a bug dealing with units

* fixed same unit error in trial shuffling

* added Bielefeld fim.so and adapted filtering and added window param to fpgrowth

* removed max_occ test

* further unit stuff

* debugging for new fim version

* enabled multithreading in fpgrowth

* less verbose in spade

* set equal number of bins in bin shuffling wrt spade

* added tolerance to binning in spade everywhere

* Added accelerated FIM algorithm sources by Florian Porrmann.

* Enh/accelerated spade build (#82)

* Added cibuildwheel action

* Added Python requirements to wheel build

* Build only on 64bit machines, otherwise overflow

* Removed Windows for testing, as vc is not available

* Removed MacOS for testing, as -fopenmp is not available

* Removed pp- (pypy) builds since they lack C.

* Fixed removing pp- (pypy) builds since they lack C.

* Put Macos back in.

* Windows Hack

* Remove vcpython altogether, ignore 2.7 Python

* Removed extra compile option, which breaks on Windows

* Removed more extra compile options, which breaks on Windows

* Try C++ instead of Gnu++.

* Try C++ instead of Gnu++ Windows style argument.

* Remove linux build while testing windows.

* Remove libraries.

* Differentiate Windows and Linux.

* Added missing import.

* Last mile: MacOS

* Remove openMP lib

* Remove openMP lib

* Add openMP lib

* More brew installs

* Mac is called mac on github

* Make sure C is reinstalled.

* Multilib

* Next try, new options

* Ignore warning about void type

* Update newest fim package

* Revert "Ignore warning about void type"

This reverts commit 3ff6b62

* Revert to prior fim, new compiler argument.

* Revert "Update newest fim package"

This reverts commit f321f77

* Definitely, gnu++17, but new try.

* Try C++

* Warning message

* llvm maybe?

* Added apple in source

* Small fixes for MacOS, but not comprehensive

* Limit to Windows and Linux for now

* Remove MacOS entry

* Fix fix from mindlessness

* Testrun

* Trying to include fim.so, despite its renaming by wheels

* Added newest version of original module

* Reverted previous breaking change committed by accident.

* Reverted package name from testing.

* Test focal as CI build

* Test bionic as CI build

* Understand installation issue on CI -- is importing elephant importing the installed version?

* Spelling error only

* Try to make sure travis loads the installed elephant, not the cwd.

* One step further -- which version will nosetests use?

* Switch to pytest as of PR #413

* Added authors of new FIM module and reference in new docs.

* Added authors of new FIM module and reference in new docs.

* Small text clarifications.

* Test if entry for fim.so/pyd in MANIFEST is now redundant.

* Update elephant/spade.py

Co-authored-by: Alexander Kleinjohann <[email protected]>

* Update elephant/spade.py

Co-authored-by: Alexander Kleinjohann <[email protected]>

* Added SPADE tutorial

* Prevent wheel building on every push, and limit scipy version workaround

* Pushed tutorial, removed file added in error

* New attempt to make mybinder install requirements.

* New attempt, dropping viziphant.

* Avoid recursive elephant installation by viziphant in postBuild

* Removed unit test that is fragile as it depends on the implementation of surrogate methods

* Add viziphant to RTD environment

* Typo in tutorial

* Add viziphant to travis doc tests

Co-authored-by: pbouss <[email protected]>
Co-authored-by: stellalessandra <[email protected]>
Co-authored-by: Alessandra Stella <[email protected]>
Co-authored-by: Alexander Kleinjohann <[email protected]>
5 people authored Aug 13, 2021
1 parent 6479381 commit 8c388e4
Showing 30 changed files with 4,310 additions and 120 deletions.
43 changes: 43 additions & 0 deletions .github/workflows/build_wheels.yml
@@ -0,0 +1,43 @@
name: Build Wheels

# Trigger the workflow on push or pull request of the master
on:
push:
branches:
- master
pull_request:
branches:
- master

# Building wheels on Ubuntu and Windows systems
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, windows-2019]

steps:
- uses: actions/checkout@v2

# Used to host cibuildwheel
- uses: actions/setup-python@v2

- name: Install cibuildwheel
run: python -m pip install cibuildwheel==1.10.0

- name: Install libomp
if: runner.os == 'macOS'
run: brew install libomp

- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
env:
CIBW_SKIP: "cp27-* cp33-* cp34-* cp35-* pp*"
CIBW_PROJECT_REQUIRES_PYTHON: ">=3.6"
CIBW_ARCHS: "auto64"

- uses: actions/upload-artifact@v2
with:
path: ./wheelhouse/*.whl
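The workflow above can also be exercised locally before pushing; a minimal sketch, assuming Docker is available (cibuildwheel uses it for the Linux builds) and the commands are run from the repository root. The environment variables mirror the workflow's `env` block exactly:

```shell
# Install the same cibuildwheel version the workflow pins.
python -m pip install cibuildwheel==1.10.0

# Mirror the workflow's environment: skip Python 2.7/3.3-3.5 and PyPy,
# require Python >=3.6, and build only 64-bit wheels.
export CIBW_SKIP="cp27-* cp33-* cp34-* cp35-* pp*"
export CIBW_PROJECT_REQUIRES_PYTHON=">=3.6"
export CIBW_ARCHS="auto64"

# Build wheels into ./wheelhouse, as the workflow does.
python -m cibuildwheel --output-dir wheelhouse
```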
11 changes: 6 additions & 5 deletions .travis.yml
@@ -1,12 +1,11 @@
dist: xenial
dist: bionic
language: python
sudo: false

addons:
apt:
update: true


matrix:
include:
- name: "conda 3.6 extras,opencl"
@@ -17,7 +16,7 @@ matrix:
- conda install -c conda-forge pyopencl oclgrind clang=9.0.1
- pip install -r requirements/requirements-extras.txt
- pip install mpi4py
script: mpiexec -n 1 python -m mpi4py.futures -m pytest --cov=elephant
script: mpiexec -n 1 python -m mpi4py.futures -m pytest --cov=elephant --import-mode=importlib
after_success: coveralls || echo "coveralls failed"

- name: "conda 3.7"
@@ -42,6 +41,7 @@ matrix:
- pip install -r requirements/requirements-tutorials.txt
- pip install -r requirements/requirements-extras.txt
- pip install mpi4py
- pip install viziphant # remove viziphant, once integrated into requirements-tutorials.txt
- sed -i -E "s/nbsphinx_execute *=.*/nbsphinx_execute = 'always'/g" doc/conf.py
script: cd doc && make html

@@ -66,9 +66,10 @@ install:
- pip install -r requirements/requirements-tests.txt
- pip install pytest-cov coveralls
- python setup.py install
- python -c "from elephant.spade import HAVE_FIM; assert HAVE_FIM"
- python -c "import sys; sys.path.remove(''); import elephant; print(elephant.__file__)"
- python -c "import sys; sys.path.remove(''); from elephant.spade import HAVE_FIM; assert HAVE_FIM"
- pip list
- python --version

script:
pytest --cov=elephant
pytest --cov=elephant --import-mode=importlib
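The `sys.path.remove('')` check added to the install step above ensures that CI imports the installed elephant package rather than the working copy in the current directory. The mechanism can be sketched standalone; this is an illustration of the idea, not part of the diff:

```python
import sys

# Removing '' (the current working directory) from sys.path ensures that a
# subsequent `import elephant` resolves to the installed package in
# site-packages, not to the source tree we happen to be standing in.
if '' in sys.path:
    sys.path.remove('')

# After this point, an import would load the installed copy; here we just
# show the effect on sys.path itself.
print('' in sys.path)  # False
```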
3 changes: 0 additions & 3 deletions MANIFEST.in
@@ -7,14 +7,11 @@ include elephant/VERSION
include elephant/current_source_density_src/README.md
include elephant/current_source_density_src/test_data.mat
include elephant/spade_src/LICENSE
recursive-include elephant/spade_src *.so *.pyd
include elephant/asset/*
include elephant/test/spike_extraction_test_data.txt
recursive-include doc *
prune doc/_build
prune doc/tutorials/.ipynb_checkpoints
prune doc/reference/toctree
include doc/reference/toctree/kernels/*
recursive-exclude * *.h5
recursive-exclude * *.nix
recursive-exclude * *~
3 changes: 3 additions & 0 deletions doc/authors.rst
Expand Up @@ -47,6 +47,8 @@ contribution, and may not be the current affiliation of a contributor.
* Philipp Steigerwald [12]
* Manuel Ciba [12]
* Maximilian Kramer [1]
* Florian Porrmann [13]
* Sarah Pilz [13]

1. Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience & Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience, Jülich Research Centre and JARA, Jülich, Germany
2. Unité de Neurosciences, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France
@@ -60,5 +62,6 @@ contribution, and may not be the current affiliation of a contributor.
10. Instituto de Neurobiología, Universidad Nacional Autónoma de México, Mexico City, Mexico
11. Case Western Reserve University (CWRU), Cleveland, OH, USA
12. BioMEMS Lab, TH Aschaffenburg University of applied sciences, Germany
13. Cognitronics and Sensor Systems, CITEC, Bielefeld University, Bielefeld, Germany

If we've somehow missed you off the list we're very sorry - please let us know.
7 changes: 7 additions & 0 deletions doc/tutorials.rst
Expand Up @@ -47,6 +47,13 @@ Advanced
.. image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/NeuralEnsemble/elephant/master?filepath=doc/tutorials/gpfa.ipynb

* Spike Pattern Detection and Evaluation (SPADE)

:doc:`View the notebook <../tutorials/spade>` or run interactively:

.. image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/NeuralEnsemble/elephant/master?filepath=doc/tutorials/spade.ipynb

* Analysis of Sequences of Synchronous EvenTs (ASSET)

:doc:`View the notebook <../tutorials/asset>` or run interactively:
239 changes: 239 additions & 0 deletions doc/tutorials/spade.ipynb
@@ -0,0 +1,239 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SPADE Tutorial"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-23T08:16:59.289299Z",
"start_time": "2020-04-23T08:16:58.185541Z"
}
},
"outputs": [],
"source": [
"import quantities as pq\n",
"import neo\n",
"import elephant\n",
"import viziphant\n",
"import random\n",
"random.seed(4542)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Generate correlated data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SPADE is a method to detect repeated spatio-temporal activity patterns in parallel spike train data that occur in excess of chance expectation. In this tutorial, we will use SPADE to detect the simplest type of such patterns: synchronous events that are found across a subset of the neurons considered (i.e., patterns that do not exhibit a *temporal extent*). We will demonstrate the method on stochastic data in which we control the pattern statistics. As a first step, let us generate 10 random spike trains, each modeled as a Poisson process, in which a certain proportion of the spikes is synchronized across the spike trains. To this end, we use the `compound_poisson_process()` function, which expects the rate of the resulting processes in addition to a distribution `A[n]` indicating the likelihood of finding synchronous spikes of a given order `n`. In our example, we construct the distribution such that there is a small probability of producing a synchronous event of order 10 (`A[10]==0.02`). Otherwise, spikes are not synchronous with those of other neurons (i.e., synchronous events of order 1, `A[1]==0.98`). Note that the length of the distribution `A` determines the number `len(A)-1` of spike trains returned by the function, and that `A[0]` is ignored for notational clarity."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-23T08:16:59.454207Z",
"start_time": "2020-04-23T08:16:59.419213Z"
}
},
"outputs": [],
"source": [
"spiketrains = elephant.spike_train_generation.compound_poisson_process(\n",
" rate=5*pq.Hz, A=[0]+[0.98]+[0]*8+[0.02], t_stop=10*pq.s)\n",
"len(spiketrains)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In a second step, we add 90 purely random Poisson spike trains using the `homogeneous_poisson_process()` function, such that in total we have 10 spike trains that exhibit occasional synchronized events and 90 uncorrelated spike trains."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for i in range(90):\n",
" spiketrains.append(elephant.spike_train_generation.homogeneous_poisson_process(\n",
" rate=5*pq.Hz, t_stop=10*pq.s))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Mining patterns with SPADE"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-23T08:17:01.595733Z",
"start_time": "2020-04-23T08:17:01.591410Z"
}
},
"source": [
"In the next step, we run the `spade()` method to extract the synchronous patterns. We choose 1 ms as the time scale for the discretization of the patterns, and specify a window length of 1 bin (meaning we search for synchronous patterns only). Also, we concentrate on patterns that involve at least 3 spikes, which significantly accelerates the search by ignoring frequent events of order 2. To test the significance of patterns, we repeat the pattern detection on 100 spike-dither surrogates of the original data, created by dithering each spike by up to 5 ms in time. For the final step of pattern set reduction (PSR), we use the standard parameter set `[0, 0, 0]`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-23T08:17:03.218505Z",
"start_time": "2020-04-23T08:17:02.387311Z"
}
},
"outputs": [],
"source": [
"patterns = elephant.spade.spade(\n",
" spiketrains=spiketrains, binsize=1*pq.ms, winlen=1, min_spikes=3, \n",
" n_surr=100, dither=5*pq.ms,\n",
" psr_param=[0, 0, 0],\n",
" output_format='patterns')['patterns']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output `patterns` of the method contains information on the detected patterns. In this case, we retrieve the pattern we put into the data: a pattern involving the first 10 neurons (IDs 0 to 9), occurring 4 times."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"patterns"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, we visualize the found patterns using the function `plot_patterns()` of the viziphant library. Marked in red are the patterns of order ten injected into the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2020-04-23T08:17:04.600606Z",
"start_time": "2020-04-23T08:17:04.423012Z"
},
"scrolled": true
},
"outputs": [],
"source": [
"viziphant.spade.plot_patterns(spiketrains, patterns)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
"autocomplete": true,
"bibliofile": "biblio.bib",
"cite_by": "apalike",
"current_citInitial": 1,
"eqLabelWithNumbers": true,
"eqNumInitial": 1,
"hotkeys": {
"equation": "Ctrl-E",
"itemize": "Ctrl-I"
},
"labels_anchors": false,
"latex_user_defs": false,
"report_style_numbering": false,
"user_envs_cfg": false
},
"toc": {
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 2
}
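The compound Poisson process that the tutorial builds on can be sketched in plain Python to illustrate the role of the amplitude distribution `A`. This is a simplified toy version, not Elephant's implementation: the function name is invented here, and rate normalization (so that each output train fires at the requested rate) is deliberately omitted for brevity.

```python
import random

def compound_poisson_sketch(mother_rate, t_stop, A, seed=4542):
    """Toy compound Poisson process: every event of a hidden 'mother'
    Poisson process is copied into n of the len(A)-1 output trains,
    where the synchrony order n is drawn from A (A[0] is ignored,
    as in the tutorial)."""
    rng = random.Random(seed)
    n_trains = len(A) - 1
    trains = [[] for _ in range(n_trains)]
    t = 0.0
    while True:
        # Exponential inter-event intervals give a Poisson mother process.
        t += rng.expovariate(mother_rate)
        if t >= t_stop:
            break
        # Draw the synchrony order n in {1, ..., n_trains} with weights A[1:].
        n = rng.choices(range(1, n_trains + 1), weights=A[1:])[0]
        # Copy the event into n distinct, randomly chosen trains.
        for idx in rng.sample(range(n_trains), n):
            trains[idx].append(t)
    return trains

# Same amplitude distribution as in the tutorial: order-1 spikes with
# probability 0.98, order-10 synchronous events with probability 0.02.
trains = compound_poisson_sketch(mother_rate=5.0, t_stop=10.0,
                                 A=[0] + [0.98] + [0] * 8 + [0.02])
print(len(trains))  # 10 output spike trains
```

With `A[10] == 0.02`, roughly one in fifty mother events is copied into all ten trains at once, which is exactly the kind of excess synchrony SPADE is designed to detect.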
