This repository includes libraries for training and executing Spiking Neural Networks (SNNs) with PyTorch. The data preprocessing module transforms UCI datasets such as Iris, Wine and WBC into spike trains using Gaussian field encoding. The SNN neuron file contains definitions for the Leaky Integrate & Fire (LIF) model and a normalized Izhikevich model. Finally, the SNN network file defines a basic network that alternates linear and neuron layers. This network is trained to perform classification tasks on the previously mentioned datasets using rate coding.
- Download the repository with `git clone`.
- Install Python 3 with `sudo apt update` and `sudo apt install python3`.
- Install pip with `sudo apt install python3-pip -y`.
- Use `pip install -r requirements.txt` to download the associated libraries. NumPy, Matplotlib, pandas and torch will be used.
Three demos are available. They illustrate the execution of the same operations with each dataset; the operations are selected with command-line arguments (an example invocation follows the list):

- `-v` enables verbose mode.
- `-p` processes the data according to the selected arguments. In this case, the data is simply normalized and shuffled for training before being stored in the processedData folder.
- `-l` trains and tests an SNN with the LIF model using the available data. The trained network weights and biases are stored in the networks folder.
- `-i` evaluates the network using the Izhikevich model. Since both models are normalized, similar performance is expected from the LIF and Izhikevich models.
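For instance, a run that enables verbose output, processes the data and then trains and tests the LIF network could look like this (`irisDemo.py` is a placeholder name; use one of the three demo files in the repository):

```
python3 irisDemo.py -v -p -l
```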
This repository provides a small set of functions to read UCI datasets (or similarly formatted files), process the different variables of the data elements and encode those values into sets of firing intervals for the input neurons of an SNN.

A Gaussian distribution is defined as:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

However, the normalization constant $\frac{1}{\sigma\sqrt{2\pi}}$ is dropped, so that the maximum excitation of every receptive field equals 1:

$$f(x) = e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

A Gaussian field will be placed equidistantly for each of the input neurons along the normalized range of every variable. Each receptive field will have the same standard deviation. A field superposition value controls the overlap between adjacent fields, fixing the standard deviation as a function of the spacing between field centers.
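As a minimal sketch of these formulas (illustrative code, not the library's implementation; all names and parameter conventions are assumptions), the following snippet encodes one normalized value into per-neuron firing intervals:

```python
import numpy as np

def gaussian_encode(x, n_neurons=8, n_intervals=10, superposition=1.5):
    """Encode a value x in [0, 1] with equidistant Gaussian receptive fields."""
    centers = np.linspace(0.0, 1.0, n_neurons)         # equidistant field centers
    sigma = (centers[1] - centers[0]) / superposition  # same width for every field
    excitation = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # peaks at 1
    # High excitation (close to 1) maps to the first interval,
    # low excitation (near 0) maps to the last one.
    intervals = np.round((1.0 - excitation) * (n_intervals - 1)).astype(int)
    return excitation, intervals

excitation, intervals = gaussian_encode(0.35)
print(excitation.round(3), intervals)
```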
An encoding example calculated using this library is included in the repository.
Use cases of the following functions are included in the three available demo files; a hypothetical invocation sketch follows this list.

- `readCSVData()` returns a list of lists that contains the file data. To parse a .csv file, the element separator would be `\n` and the parameter separator `,`.
- `writeCSVData()` saves the processed data to a .csv file. The processed data must be formatted as a list of lists.
- `processData()` returns a list of lists with the inputs for every neuron. Latency-coded Gaussian encoding can be enabled or disabled. If it is enabled, the following variables are available:
  - `variablePositions` selects the columns of the file that contain the data to be processed.
  - `resultPosition` indicates the position of the column that contains the classification of the element.
  - `resultEncoding` is a dictionary with the labels that correspond to each category.
  - `fieldSuperposition` and `nInputNeurons` control the width and the number of the receptive fields. Please refer to the mathematical fundamentals for a more detailed explanation.
  - `nIntervals` contains the number of discrete input intervals that will encode the excitation of every receptive field for a given parameter. High excitations (close to 1) are encoded as input spikes in the first intervals, while spikes in the later intervals correspond to low excitation values (near 0).
- `plotDataPoint()` creates a figure in .pdf format that represents the receptive field set, the excitation values and the input intervals for each neuron, for one parameter of one element of the dataset. The last two parameters of the function select the element and the parameter to plot.
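A hypothetical processing run for the Iris dataset might look as follows. The argument names are taken from the list above, but the exact signatures are assumptions and should be checked against the demo files:

```python
# Hypothetical usage sketch; verify the real signatures in the demo files.
data = readCSVData("data/iris.data")            # raw file contents as a list of lists

inputs = processData(
    data,
    variablePositions=[0, 1, 2, 3],             # columns with the variables to encode
    resultPosition=4,                           # column with the class label
    resultEncoding={"Iris-setosa": 0,
                    "Iris-versicolor": 1,
                    "Iris-virginica": 2},
    fieldSuperposition=1.5,                     # receptive field overlap
    nInputNeurons=8,                            # receptive fields per variable
    nIntervals=10,                              # discrete input intervals
)

writeCSVData("processedData/iris.csv", inputs)  # persist the encoded inputs
```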
The Leaky Integrate & Fire (LIF) neuron model is one of the simplest yet most powerful neuron models for SNNs. According to this model, the neuron membrane behaves as a capacitor in parallel with a resistor. The membrane voltage sharply increases when a spike is received, and it experiences an exponential decay with a time constant $\tau$. In discrete time, the membrane voltage can be updated as:

$$v[t+1] = \beta\, v[t] + W x[t], \qquad \beta = e^{-\Delta t / \tau}$$

The first term handles the decay that the neuron voltage exhibits over time, and the second one adds the weighted input spikes. When the voltage crosses the firing threshold, an output spike is emitted and the membrane is reset.
As an example, the following neuron receives spikes at steps 10, 40, 50 and 60. Since each input only lasts for a single timestep, a lone spike does not increase the membrane voltage enough to produce an output. However, as the last inputs show, several spikes arriving close together can make the neuron fire, because the capacitor does not have enough time to discharge between them. This is why an output spike is produced at the 60th step.
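The following minimal sketch (parameter values chosen for illustration, independent of the repository's code) reproduces this behavior with the discrete update above: the isolated spike at step 10 decays away, while the spikes at steps 40, 50 and 60 accumulate until the threshold is crossed:

```python
# Minimal discrete LIF simulation; beta, w and v_th are illustrative values.
beta = 0.99                      # decay factor per step, beta = exp(-dt/tau)
w = 0.5                          # weight applied to each input spike
v_th = 1.35                      # firing threshold
v = 0.0                          # membrane voltage

input_spikes = {10, 40, 50, 60}  # steps at which an input spike arrives
for t in range(100):
    v = beta * v + (w if t in input_spikes else 0.0)
    if v >= v_th:
        print(f"output spike at step {t}")  # printed once, at step 60
        v = 0.0                             # reset the membrane after firing
```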
The Izhikevich neuron model aims to provide an accurate neuron model at a relatively low computational cost. Instead of having a single hyperparameter like the LIF model, it includes several. By adjusting these parameters, it is possible to produce a wide variety of behaviors besides the regular spiking that characterizes LIF neurons: Izhikevich neurons are capable of chattering and bursting, and they can even behave as resonators. The model is governed by:

$$C\dot v = k(v - v_r)(v - v_t) - u + I$$

$$\dot u = a\left(b(v - v_r) - u\right)$$

with the reset rule: if $v \geq v_p$, then $v \leftarrow c$ and $u \leftarrow u + d$.
For example, to produce regular spiking patterns, the following parameters can be used (a simulation sketch follows the list):

- $C = 100 \;\;\rightarrow$ Membrane capacitance (pF)
- $k = 0.7\;\;\rightarrow$ Input resistance parameter (pA/mV²)
- $v_r = -60\;\;\rightarrow$ Resting membrane potential (mV)
- $v_t = -40\;\;\rightarrow$ Instantaneous threshold potential (mV)
- $a = 0.03 \;\;\rightarrow$ Time scale of the recovery variable (1/ms)
- $b = -2 \;\;\rightarrow$ Sensitivity of the recovery variable (pA/mV)
- $c = -50\;\;\rightarrow$ Potential reset value (mV)
- $d = 100 \;\;\rightarrow$ Spike-triggered adaptation (pA)
- $v_p = 35\;\;\rightarrow$ Spike cutoff (mV)
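A minimal Euler-integration sketch with exactly these parameters (the step current amplitude and timing are illustrative) produces the regular spiking pattern:

```python
# Regular-spiking Izhikevich neuron using the parameters listed above.
C, k = 100.0, 0.7
v_r, v_t, v_p = -60.0, -40.0, 35.0
a, b, c, d = 0.03, -2.0, -50.0, 100.0

dt = 0.25                                    # Euler integration step (ms)
v, u = v_r, 0.0
spike_times = []
for step in range(int(1000 / dt)):           # simulate 1 second
    I = 70.0 if step * dt >= 100.0 else 0.0  # 70 pA step current after 100 ms
    v += dt * (k * (v - v_r) * (v - v_t) - u + I) / C
    u += dt * a * (b * (v - v_r) - u)
    if v >= v_p:                             # spike cutoff reached
        spike_times.append(step * dt)
        v, u = c, u + d                      # reset and spike-triggered adaptation
print(spike_times)                           # regularly spaced spike times (ms)
```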
However, when introducing this neuron model as an SNN layer, adjusting the weights and biases of the layers these neurons are connected to can be really challenging. For this reason, researchers have proposed alternatives that normalize this model.
By expanding the parentheses, the previous equation can be expressed as: $$\dot v = a_1v^2 + a_2v + a_3u + a_4I + a_5$$
As an example, the following neuron receives impulses of increasing duration, starting at time steps 200, 400 and 600. The first two are neither long nor strong enough to elicit spikes; the last one, however, is enough to reach the threshold.
To normalize the Izhikevich model equations, it can be established that the membrane potential, the recovery variable and the time scale are shifted and rescaled, mapping the resting potential $v_r$ to 0 and the spike cutoff $v_p$ to 1.

The same process that was previously shown results in:

$$\dot u = b_1v + b_2u + b_3$$

together with the normalized reset rule: if $v \geq c_1$, then $v \leftarrow c_2$ and $u \leftarrow u + c_3$.

However, since the input $I$ is rescaled by the same normalization, its scaling can be absorbed into the network weights. The resulting coefficients are:

- $a_1 = 1.0$, $a_2 = -0.21$, $a_3 = -0.019$
- $b_1 = -1/32$, $b_2 = -1/32$, $b_3 = 0$
- $c_1 = 1$, $c_2 = 0.105$, $c_3 = 0.412$
As you can see, with an adequately scaled input the neuron's behavior is identical to the previous one.
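As a sketch of how these coefficients might be used, assuming the $a_i$ parameterize the membrane equation, the $b_i$ the recovery equation, and the $c_i$ the reset rule (cutoff, reset value and adaptation increment), with $a_4 I + a_5$ folded into an already-scaled input (these groupings are an interpretation, not confirmed by the source), a single update step could be written as:

```python
# Normalized Izhikevich update step (sketch).
# ASSUMPTIONS: a_i -> membrane equation, b_i -> recovery equation,
# c_i -> reset rule; the input I is assumed to be pre-scaled.
a1, a2, a3 = 1.0, -0.21, -0.019
b1, b2, b3 = -1 / 32, -1 / 32, 0.0
c1, c2, c3 = 1.0, 0.105, 0.412

def normalized_step(v, u, I, dt=0.01):
    """Advance the normalized neuron one Euler step; returns (v, u, spiked)."""
    v = v + dt * (a1 * v * v + a2 * v + a3 * u + I)
    u = u + dt * (b1 * v + b2 * u + b3)
    if v >= c1:                  # normalized spike cutoff
        return c2, u + c3, True  # reset the potential, apply adaptation
    return v, u, False
```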