
International Journal of Neural Systems World Scientific Publishing Company

A GENERALIZED LEAKY INTEGRATE-AND-FIRE NEURON MODEL WITH FAST IMPLEMENTATION METHOD

ZHENZHONG WANG*, LILIN GUO† and MALEK ADJOUADI‡

Center for Advanced Technology and Education, Florida International University, 10555 W Flagler St, Miami, FL 33174, United States of America
* [email protected][email protected][email protected]

This work introduces a Generalized Leaky Integrate-and-Fire neuron model with a variable leaking resistor and bias current which can accurately reproduce the membrane voltage dynamics of a biological neuron cell. The model is motivated by the assumption that the conductance in the Hodgkin-Huxley model repeats the same trace after each action potential, which is used to determine the variable leaking parameters experimentally. The accuracy of this model has been verified by comparing its simulation results with those of the Hodgkin-Huxley model stimulated by the same stochastic spike inputs, while the speed of the model has been enhanced by introducing a Generalized Exponential Moving Average method to solve the model equation.

Keywords: Exponential moving average, Hodgkin-Huxley model, Leaky Integrate-and-Fire, Neuron model.

1. Introduction

Inspired by the ability of a biological neural network to process information by utilizing sequences of action potentials of the neuron cell membrane voltage (spikes), Artificial Spiking Neural Networks (ASNN) mimicking the biological Spiking Neural Network (SNN) have gained wide interest among researchers in areas like machine learning and artificial intelligence1,2. The computing unit in an ASNN is usually a mathematical model or electronic unit which characterizes the membrane potential dynamics of a neuron cell. The challenge remains in selecting a proper model for the ASNN because of the difficulty in balancing accuracy and complexity of the mathematical model while attempting to reproduce the dynamic behavior of a neuron. The Hodgkin-Huxley (HH) model is the most popular model known to biologists. The HH model is a set of Ordinary Differential Equations (ODE) which describes the dynamics of the cell membrane potential and the ion transport of a neuron. Its parameters were estimated by analyzing the ionic current data gathered from voltage-clamp experiments on the giant axon neuron of a squid3,4. In the HH model, the neuron cell membrane is considered as a leaky capacitor, which is connected to both the stimulation current from outside of the neuron and the internal ionic current caused by the transport of ions through the neuron cell membrane. Hodgkin and Huxley studied two major components of the ionic current, related to the sodium and potassium ion channels; more types of ion channels were found later to characterize the complicated neuron activities in human and animal nervous systems. Pospischil et al.5 reviewed these ion channels and proposed a general form of the HH model for neurons in the human brain. Although the HH model has been proved to be accurate in reproducing the spiking activities of a biological neuron, its applications in ASNN are still rare due to its computational complexity. A simpler model is thus preferred when one is trying to build a large scale ASNN. One example of a simple model was proposed by Izhikevich6,7, which applies bifurcation and folding theories to the HH ODEs to yield dynamic equations with only two state variables for each neuron. Although the conductance and current changes of the ion channels are not fully described, the Izhikevich model, which is expressed in a concise form, remains successful in reproducing different types of neuronal dynamics. This has been widely acknowledged by researchers working on large scale neural networks8,9. Another family of simple neuron models is the so-called Leaky Integrate-and-Fire (LIF) models10, which ignore all the intricacies of the ionic current in the HH model and simplify the neuron as a capacitor connected to a leaky resistor. Whenever the membrane voltage of a LIF neuron crosses a threshold value, an output spike is fired and the membrane voltage is reset immediately after the spike to a lower level. Although the simplest LIF model considers the leaky resistor as a linear resistor, and hence lacks mechanisms to model complicated neuron activities, the LIF model is argued to be valuable in analyzing key neuron properties because it models the key feature of a neuron membrane, the conductance, and because its ODE can be solved analytically, thus providing intuitive insight into biological neuron activities11. A variety of modifications have been proposed to make the LIF model more biologically plausible12,13, yet the simpler Integrate-and-Fire design remains the model preferred by researchers working on large scale SNN14-17. In this study, we propose the design and implementation of a new Generalized LIF (GLIF) model, obtained by modifying the Normal Leaky Integrate-and-Fire (NLIF) model so that an accuracy comparable to that of the HH model is attained while the computational complexity is significantly reduced.

2. The model

The following equations define the newly developed spiking neuron model, which is a modification of the NLIF model. These modifications are intended to reduce significantly the computational complexity of the HH model while maintaining high accuracy in reproducing the dynamic behavior of a neuron cell's membrane voltage. The mathematical derivations consolidate both models (NLIF and HH) by introducing a variable membrane conductance and bias current into the LIF model in order to approach the accuracy of the HH model. A Generalized Exponential Moving Average (GEMA) is then used as a means to reduce the processing time:

C_m \frac{du}{dt} = i_s(t) - g_\mathrm{kern}(t - t^f)\, u + i_\mathrm{kern}(t - t^f),   (1)

t^f : u(t^f) = u_\mathrm{th} \ \text{and}\ \left.\frac{du}{dt}\right|_{t = t^f} > 0,   (2)

where u is the membrane voltage of the neuron and Cm is the membrane capacitance. The gkern and ikern elements are temporal functions which describe, respectively, the changes of membrane conductance and bias current related to the ionic current flowing through the neuron cell membrane. Moreover, gkern is the variable conductance of the leaky resistor, and ikern is the bias current of the neuron which could stimulate an action potential. The "kern" subscripts indicate that these two kernel functions are repeated each time the neuron fires. The stimulation of this model is expressed as a summation of currents from all synapses connected to this neuron, denoted by is in (1). The firing time of this neuron is defined in (2) as the time when the membrane voltage crosses a threshold uth. Immediately after each fire, a new tf is used to calculate gkern and ikern. The time interval t − tf is usually named the survival time, i.e., the length of time the neuron has stayed quiescent since the last time it fired. We assume that the kernel functions can be described through a summation of a set of functions fj as follows:

g_\mathrm{kern}(t) = k_L + \sum_j f_j(t)
i_\mathrm{kern}(t) = i_L + \sum_j W_j f_j(t),   (3)

where kL and iL are the constant leaking conductance and bias current when the neuron is quiet, fj are bell-shaped functions which could fit the trajectories of conductance for all ion-channels during an action potential, and Wj are the resting potentials for those channels. The bell-shaped curves fj as used here are the derivatives of generalized sigmoid functions Φj:

\Phi_j(t) = \frac{A_j}{1 + e^{-(t - \mu_j)/l_j}},   (4)

f_j(t) = \frac{d\Phi_j}{dt} = \frac{A_j\, e^{-(t - \mu_j)/l_j}}{l_j \left( 1 + e^{-(t - \mu_j)/l_j} \right)^2},   (5)


where Aj, µj and lj are empirical parameters controlling the bell amplitude, shape and location, which need to be fitted to the experimental ionic current data.
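As an illustration of how (3)-(5) fit together, the following Python sketch (not part of the original paper) builds the kernel functions from a set of sigmoid-derivative bumps; the parameter values are placeholders and must be replaced by fitted ones.

```python
import numpy as np

def f_bump(t, A, mu, l):
    """Derivative of the generalized sigmoid in Eq. (4); a bell-shaped bump, Eq. (5)."""
    e = np.exp(-(t - mu) / l)
    return A * e / (l * (1.0 + e) ** 2)

def g_kern(t, k_L, bumps):
    """Variable leaking conductance, Eq. (3): constant leak plus bump terms."""
    return k_L + sum(f_bump(t, A, mu, l) for (A, mu, l, _W) in bumps)

def i_kern(t, i_L, bumps):
    """Variable bias current, Eq. (3): constant bias plus weighted bump terms."""
    return i_L + sum(W * f_bump(t, A, mu, l) for (A, mu, l, W) in bumps)

# Hypothetical parameters (A_j, mu_j, l_j, W_j) for two channels; real values must be fitted.
bumps = [(35.0, 2.1, 0.32, 115.0), (40.0, 3.8, 0.79, -12.0)]
t = np.linspace(0.0, 10.0, 101)          # survival time in ms
g = g_kern(t, k_L=0.1961, bumps=bumps)   # conductance kernel, mS/cm^2
i = i_kern(t, i_L=0.0, bumps=bumps)      # bias-current kernel, uA/cm^2
```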


2.1. Comparison with HH models

The generalized HH model suggested by Pospischil et al.5 could be formulated as:

C_m \frac{du}{dt} = i_s(t) - i_\mathrm{ion}(u, t),   (6)

i_\mathrm{ion}(u, t) = \sum_j \bar{g}_j\, m_j^{p_j} n_j^{q_j} (u - E_j),   (7)

where iion is a summation over all possible ionic currents, with ḡj being the maximum conductance and Ej the resting potential of channel j. mj and nj are the gating parameters for each ion channel, with pj and qj being their power indices. Parameters mj and nj evolve according to the following dynamic equations:

\frac{dm_j}{dt} = \alpha_{m,j}(u)(1 - m_j) - \beta_{m,j}(u)\, m_j
\frac{dn_j}{dt} = \alpha_{n,j}(u)(1 - n_j) - \beta_{n,j}(u)\, n_j,   (8)

where αm,j, βm,j, αn,j, and βn,j are constants or explicit functions of the membrane voltage u only. It should be noted that the contribution of the constant leaking channel to the ionic current, which is usually expressed as an individual term, can also be expressed in the summation of (7) by simply defining mj(0) = 1, nj(0) = 1, αm,j ≡ 0, βm,j ≡ 0, αn,j ≡ 0 and βn,j ≡ 0 for the leaking channel j. In order to compare (6) with (1), the following relations are assumed:

g^*(t) = \sum_j \bar{g}_j\, m_j^{p_j} n_j^{q_j}
i^*(t) = \sum_j \bar{g}_j\, m_j^{p_j} n_j^{q_j} E_j.   (9)

Functions g* and i* are equivalent to gkern and ikern when the following conditions are satisfied: (i) g* and i* repeat the same trajectory after each time the neuron fires; (ii) g*(t − tf) and i*(t − tf) can be fitted well by (3), with tf being the most recent firing time of the neuron. The next section elaborates on the supposition that if the stimulation is a train of random spikes generated by a Poisson process, the HH model can then be approximated by the GLIF model.

2.2. Comparison with the NLIF model

A NLIF model having a linear leaky resistor can be described by the following equation18:

\tau_m \frac{du}{dt} = R\, i_s - u,   (10)

where R is the resistance of a linear resistor and τm is a time constant for the integrator. A NLIF neuron has no explicit action potentials but only the firing time, defined in the same way as for the GLIF neuron in (2). The membrane voltage is reset to urest < uth right after the neuron fires. Considering t significantly larger than zero in (3), gkern and ikern both tend to constant values:

g_\infty = \lim_{t \to \infty} g_\mathrm{kern}(t) = k_L
i_\infty = \lim_{t \to \infty} i_\mathrm{kern}(t) = i_L.   (11)

If the bias iL is set to zero in (3), the NLIF model can be considered as a degraded form of the GLIF model for long survival times, with the time constant and linear resistance defined as follows:

\tau_m = C_m / g_\infty
R = 1 / g_\infty.   (12)

In other words, the NLIF model describes the membrane potential when the neuron stays quiescent, while the GLIF model modifies the NLIF model by introducing the variable membrane conductance and bias current, which yields a better accuracy in approximating the HH model in the post-fire region. The next section provides the derivations and empirical evaluations that led to these modifications and describes the methods used for establishing and implementing the GLIF model.
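To make the quiescent limit in (11)-(12) concrete, the short sketch below derives the NLIF time constant and resistance from the quiescent GLIF parameters; the membrane capacitance value here is a placeholder, not a value stated in the paper.

```python
# Quiescent GLIF parameters, Eq. (11): g_inf = k_L, i_inf = i_L.
C_m = 1.0        # uF/cm^2, hypothetical membrane capacitance
k_L = 0.1961     # mS/cm^2, constant leaking conductance (see Sec. 3.1.2)

g_inf = k_L
tau_m = C_m / g_inf   # Eq. (12): NLIF time constant, in ms
R = 1.0 / g_inf       # Eq. (12): NLIF linear resistance, in kOhm*cm^2
```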

3. Methodology

The following sections describe the modifications that were made to the NLIF model and the reasons for (1) introducing the variable membrane conductance and bias current to make it accurate, and (2) making use of the GEMA approximation of the GLIF model solution in order to reduce the computational requirements and speed up its simulation.



3.1. Parameters in the GLIF model

There are several parameters in (1) and (2) that need to be determined empirically to generate the GLIF model. As shown in Fig. 1, a synapse and HH neuron pair was constructed to estimate the parameters required by the GLIF model. An input spike train with m spikes arriving at time instants s(1), s(2), ..., s(m), along with their respective connection weights w1, w2, ..., wm, was fed to the synapse as stimulation.

Fig. 1. A pair of synapse and neuron. Input spikes generated by a Poisson process with random connection weights stimulate the neuron and induce regular spikes of action potentials.

The spike train was converted into synaptic current by an amplitude-modulated spike-shape current source model as follows:

i_s(t) = \sum_j w_j\, i_s^*(t - s^{(j)})
i_s^*(\tau) = I_{s,\max}\, (\tau / \tau_s)\, e^{-\tau/\tau_s},   (13)

where is* is the shape function for a single input current spike, with maximum current Is,max = 23 µA/cm2 and rising/decay time τs = 2 ms. The HH model as suggested by Hodgkin and Huxley19 is used here with two active ion channels for demonstration purposes. The currents in the sodium channel, the potassium channel and the linear leaky channel follow:

i_\mathrm{ion} = g_\mathrm{Na}(u - E_\mathrm{Na}) + g_\mathrm{K}(u - E_\mathrm{K}) + g_L(u - E_L)
g_\mathrm{Na} = \bar{g}_\mathrm{Na}\, m^3 h
g_\mathrm{K} = \bar{g}_\mathrm{K}\, n^4.   (14)

The maximum conductances for the sodium, potassium and leaking channels are ḡNa = 120 mS/cm2, ḡK = 36 mS/cm2, and gL = 0.3 mS/cm2 respectively. The corresponding reversal potentials adopted were ENa = 115 mV, EK = −12 mV, and EL = 10.6 mV. An experiment to determine the parameters in the GLIF model was designed using the following implementation steps (an illustrative sketch of steps (i)-(v) is given after this list):

(i) Randomly choose a frequency λP between 1 Hz and 1000 Hz.
(ii) Generate a spike train lasting 1000 ms by a Poisson process using λP as the mean rate.
(iii) Randomly choose a connection weight between 0 and 1 for each spike in the spike train.
(iv) Randomly assign a sign to each connection weight with equal positive and negative probability.
(v) Calculate the input currents for the generated spike train using Eq. (13).
(vi) Numerically solve the differential equation (6) with the ionic current defined in (14) for 1000 ms in the Wolfram Mathematica computation environment.
(vii) Record the membrane voltage u and the gating variables m, n, and h at 0.1 ms time steps and save them to a file.

The experiment was repeated 1000 times with different mean rates λP to yield statistically reliable results.
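A minimal sketch of steps (i)-(v), assuming a homogeneous Poisson process and the spike-shape function reconstructed in Eq. (13); it is illustrative only and does not reproduce the exact Mathematica setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(rate_hz, duration_ms):
    """Step (ii): spike times (ms) from a homogeneous Poisson process."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1000.0 / rate_hz)   # inter-spike interval in ms
        if t > duration_ms:
            return np.array(times)
        times.append(t)

def synaptic_current(t_ms, spike_times, weights, I_max=23.0, tau_s=2.0):
    """Eq. (13): sum of weighted, alpha-shaped current pulses (uA/cm^2)."""
    i = np.zeros_like(t_ms)
    for s, w in zip(spike_times, weights):
        tau = t_ms - s
        on = tau > 0.0
        i[on] += w * I_max * (tau[on] / tau_s) * np.exp(-tau[on] / tau_s)
    return i

lam = rng.uniform(1.0, 1000.0)                 # step (i)
spikes = poisson_spike_train(lam, 1000.0)      # step (ii)
w = rng.uniform(0.0, 1.0, spikes.size)         # step (iii)
w *= rng.choice([-1.0, 1.0], spikes.size)      # step (iv)
t = np.arange(0.0, 1000.0, 0.1)
i_s = synaptic_current(t, spikes, w)           # step (v)
```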

3.1.1. Firing thresholds

There are published methods to estimate the firing threshold of a biological neuron18, yet they are all based on constant stimulations. In our experiment, the threshold can be estimated statistically when the neuron is driven by random spike trains, which is more realistic, especially when the neuron is a functional element of an overall neural network. The idea of estimating the threshold statistically is based on the assumption that all action potential spikes are similar, with single peaks, and that uth is much smaller than the average peak value of the action potential spike; hence no local maxima should be found between uth and the average action potential peak value among all the recorded membrane voltages. We searched for all local maxima (peaks) of the membrane voltage in the obtained experimental results.

The probability density of the local maximum values was calculated and plotted as shown in Fig. 2(a). The probability density curve shows that most local maxima fell either into the range from −20 mV to 10 mV or into the range from 90 mV to 110 mV. We denote the peaks in the lower range, separated from the action potential peaks, as the non-spiking peaks. Since there are almost no peaks in the voltage region between 10 mV and 90 mV, the threshold voltage can be identified below 50 mV. We defined the firing threshold as a membrane voltage below which 95% of all recorded regular peaks can be found:

u_\mathrm{th} :\ P_\mathrm{peaks}(U < u_\mathrm{th}) = 0.95.   (15)

Since there is no peak located between 10 mV and 90 mV, all peaks lower than 50 mV were considered as regular peaks. The cumulative distribution function (CDF) of these regular peaks was calculated and plotted as shown in Fig. 2(b), and a firing threshold uth = 4.69 mV was found from this CDF according to (15).

Fig. 2. The firing threshold of membrane voltage. (a) Probability distribution of the local maxima of membrane voltage. The left region of this distribution contains the peaks that did not induce action potentials; the right region shows the action potential peaks. Almost no local maximum can be found in between. (b) Cumulative distribution of the regular peaks; the firing threshold uth is defined so that 95% of regular peaks occur below it.

3.1.2. Kernel functions

The variable conductances of the two ion channels were calculated at discrete time steps t[i] for i = 1, 2, ..., k as follows:

g_\mathrm{Na}[i] = \bar{g}_\mathrm{Na}\, m[i]^3 h[i]
g_\mathrm{K}[i] = \bar{g}_\mathrm{K}\, n[i]^4.   (16)

In order to find the kernel functions, the simulation time t[i] should be converted to the survival time. We searched for all the firing moments with step indices f1, f2, ..., fp satisfying

\Theta\left(u[f_j] - u_\mathrm{th}\right) - \Theta\left(u[f_j - 1] - u_\mathrm{th}\right) = 1,   (17)

which are all the moments when u[i − 1] < uth and u[i] ≥ uth (Θ denoting the unit step function). The survival time for each step was calculated by

\tau[i] = t[i] - t[f_j] \quad \text{when } t[f_j] \le t[i] < t[f_{j+1}],   (18)

for all j = 2, 3, 4, ..., p. A boundary condition t[fp+1] > 1000 ms was used to calculate the survival time for steps after the last action potential in each experiment, while the recording starts from t[f2] because the time steps before that point in each experiment have no corresponding survival times and were discarded. The membrane voltage and conductances were categorized into sequential survival time intervals:

[\tau_0, \tau_1), [\tau_1, \tau_2), \ldots, [\tau_{q-1}, \tau_q).   (19)

The mean values and standard deviations were calculated for u, gNa, and gK in each interval, with the results shown in Fig. 3. Although they vary slightly around their mean values, the sodium and potassium channel conductances and the membrane voltage evolve along certain trajectories after each firing of an action potential. We can see that the variances of the membrane voltage are not negligible after the spike, due to the disturbance originating from the random stimulations. However, the small variances of the ionic conductances indicate that they repeat precisely for each action potential and are insensitive to the input stimulations.
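The following sketch (not from the paper) illustrates the statistical threshold estimate of Eq. (15) and the survival-time bookkeeping of Eqs. (17)-(19), assuming the membrane voltage u has been recorded on a uniform 0.1 ms grid.

```python
import numpy as np
from scipy.signal import argrelmax

def firing_threshold(u, cutoff=50.0, quantile=0.95):
    """Eq. (15): 95th percentile of the non-spiking (regular) local maxima."""
    peaks = u[argrelmax(u)[0]]
    regular = peaks[peaks < cutoff]        # peaks below 50 mV are regular peaks
    return np.quantile(regular, quantile)

def survival_times(u, u_th, dt=0.1):
    """Eqs. (17)-(18): time since the most recent threshold crossing; NaN before the first one."""
    crossings = np.flatnonzero((u[1:] >= u_th) & (u[:-1] < u_th)) + 1
    tau = np.full(u.shape, np.nan)
    for start, stop in zip(crossings, np.append(crossings[1:], u.size)):
        tau[start:stop] = (np.arange(start, stop) - start) * dt
    return tau

def bin_by_survival(tau, x, edges):
    """Eq. (19): average a recorded quantity x over sequential survival-time intervals."""
    idx = np.digitize(tau, edges)
    return [x[(idx == k) & ~np.isnan(tau)].mean() for k in range(1, len(edges))]
```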

Fig. 3. Mean value of the membrane voltage (black squares, left and bottom axes) and of the sodium and potassium channel conductances (red circles and green triangles, right and top axes) versus survival time. The standard deviations are indicated by the error bars at each sample point.

The mean values of gNa and gK can be used to construct the gkern and ikern functions respectively. Since the integrals of gkern and ikern are more important than the conductances themselves in affecting the dynamics of the membrane voltage, we fitted the integral functions of the fj to the numerically integrated gNa and gK data. Suppose the integrated conductances of sodium and potassium over the survival time are formulated as:

\Gamma_\mathrm{Na}(\tau) = \int_0^{\tau} g_\mathrm{Na}(t)\, dt
\Gamma_\mathrm{K}(\tau) = \int_0^{\tau} g_\mathrm{K}(t)\, dt.   (20)

The following sigmoid functions are used to fit the ΓNa and ΓK data for the construction of the kernel functions:

S_x(\tau) = \frac{A_x}{1 + e^{-(\tau - u_x)/l_x}} + k_x,   (21)

where the subscript x can be "Na" or "K", indicating that the function is fitted to the integrated sodium or potassium conductance data. The fitted parameters for these functions are shown in Table 1, and the goodness of fit is illustrated in Fig. 4.

Table 1. Fitting parameters for the kernel functions.

x     Ax (mS·ms/cm2)   ux (ms)   lx (ms)   kx (mS·ms/cm2)
Na    35.88            2.128     0.3180    0.0115
K     39.53            3.837     0.7889    0.3118

Fig. 4. Curve fits for the kernel functions. (a) Modified sigmoid function fitted to ΓNa(τ), with SSE = 34.91, adjusted R-square = 0.9977 and RMSE = 0.7386; (b) modified sigmoid function fitted to ΓK(τ), with SSE = 63.65, adjusted R-square = 0.9983 and RMSE = 0.9972.

The variable conductance and bias current functions needed in (1) can then be defined in the following way:

g_\mathrm{kern}(\tau) = k_L + \frac{dS_\mathrm{Na}}{d\tau} + \frac{dS_\mathrm{K}}{d\tau}
i_\mathrm{kern}(\tau) = i_L + \frac{dS_\mathrm{Na}}{d\tau} E_\mathrm{Na} + \frac{dS_\mathrm{K}}{d\tau} E_\mathrm{K},   (22)

where kL is the constant leaky conductance and iL is the bias current contributed by the constant leaking channel, defined as follows:

i_L = k_L E_L.   (23)

The constants ENa = 115 mV, EK = −12 mV and EL = 10.6 mV are the reversal potentials, the same as those found in the HH model. We could select the linear leaky conductance gL = 0.3 mS/cm2 suggested by the HH model as the constant kL in (23), yet there would be a small mismatch which yields a non-zero bias iL = 1.1013 mA/cm2 in that case. Thus we adjusted kL to 0.1961 mS/cm2 so that the bias current vanishes when the neuron stays quiescent.
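A possible way to reproduce the fitting step of Eqs. (20)-(22) is sketched below (not the paper's code), using scipy to fit the modified sigmoid of Eq. (21) to the numerically integrated conductance and then differentiating it to obtain the kernel contribution.

```python
import numpy as np
from scipy.optimize import curve_fit

def S(tau, A, u, l, k):
    """Modified sigmoid of Eq. (21), fitted to the integrated conductance."""
    return A / (1.0 + np.exp(-(tau - u) / l)) + k

def dS_dtau(tau, A, u, l):
    """Derivative of Eq. (21); this is the bump added to g_kern in Eq. (22)."""
    e = np.exp(-(tau - u) / l)
    return A * e / (l * (1.0 + e) ** 2)

def fit_channel(tau, g_mean, dt):
    """Fit one channel: tau is the survival-time grid (ms), g_mean the binned mean conductance."""
    gamma = np.cumsum(g_mean) * dt                        # Eq. (20), numerical integration
    p, _ = curve_fit(S, tau, gamma, p0=(gamma.max(), 2.0, 0.5, 0.0))
    return p                                              # (A_x, u_x, l_x, k_x)

# Kernel assembly per Eq. (22), with E_Na = 115 mV and E_K = -12 mV:
# g_kern(tau) = k_L + dS_dtau(tau, *p_na[:3]) + dS_dtau(tau, *p_k[:3])
# i_kern(tau) = i_L + dS_dtau(tau, *p_na[:3]) * 115.0 + dS_dtau(tau, *p_k[:3]) * (-12.0)
```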


3.2. Implementation of GLIF

The model complexity introduced by the GLIF model might increase the computational load when it is implemented, which is the trade-off for the enhanced model accuracy. Yet a special numerical method can be used to solve the GLIF ODE and speed up the computation. The method is to solve the GLIF ODE analytically first, and then to seek a solution using the Generalized Exponential Moving Average (GEMA).

3.2.1. Solution of the GLIF ODE

According to Appendix A, the ODE of the GLIF model can be solved analytically using the following relation:

u(t_r) = u_\mathrm{kern}(t_r) + u_\mathrm{syn}(t_r),   (24)

where tr is the survival time relative to the most recent firing time tf, ukern is the action potential membrane voltage trajectory, and usyn is the contribution of the input current spikes to the membrane voltage. ukern and usyn are defined by the following equations:

u_\mathrm{kern}(t_r) = \Pi(t_r)\left[ u_\mathrm{th} + \int_0^{t_r} \frac{i_\mathrm{kern}(\tau)}{C_m\, \Pi(\tau)}\, d\tau \right],   (25)

u_\mathrm{syn}(t_r) = \Pi(t_r) \int_0^{t_r} \frac{i_s(\tau)}{C_m\, \Pi(\tau)}\, d\tau,   (26)

with the function Π(tr) defined as:

\Pi(t_r) = e^{-\int_0^{t_r} g_\mathrm{kern}(\tau)/C_m\, d\tau}.   (27)

The GLIF ODE solution is still difficult to implement given its inherent computational complexity. The function Π carries a heavy computational burden and is practically impossible to simplify naively by means of a look-up table: although it decreases to zero within several milliseconds, a divide-by-zero error may occur in (26) if the tail of Π at large tr is trimmed to zero. Our method discretizes the ODE solution and makes some further simplifications so that the calculation of the Π function can be substituted by a look-up-table operation. The first simplification is to substitute the linearly rising and exponentially decaying current shape function is* by a pulse-shaped current function:

i^*_{s,\mathrm{pulse}}(\tau) = \frac{C_m\, \Delta u_s}{\Delta_s}\, P_\mathrm{unit}(\tau, \Delta_s)
P_\mathrm{unit}(t_0, t) = \Theta(t_0) - \Theta(t_0 - t),   (28)

where Punit is a unit pulse function, while Δs and Δus are the width and height of the pulse respectively. The membrane voltage is increased by Δus when such a pulse stimulation arrives. We set Δus = 8.96 mV in the proposed method, which is similar to the maximum possible membrane voltage of a GLIF neuron stimulated by a single input spike with shape function is* and a connection weight of 1, occurring when the survival time tr is significantly larger than zero. Using Eq. (28), we can rewrite (26) as:

u_\mathrm{syn}(t) = \int_0^{t} \frac{\Pi(t)\, \Delta u_s\, I_s(\tau)}{\Pi(\tau)\, \Delta_s}\, d\tau
I_s(\tau) = \sum_j w_j\, P_\mathrm{unit}(\tau - s^{(j)}, \Delta_s).   (29)

Since no singular points can be found in the integrand of (29), the integration can be discretized at small time intervals Δs:

u_\mathrm{syn}[n] = \sum_{p=0}^{n} \frac{\Pi(n\Delta_s)}{\Pi(p\Delta_s)}\, \Delta u_s \sum_j w_j\, \delta_{p, f_j},   (30)

where n is the current time index, fj is the input spike time index, and δi,j is the Kronecker delta function:

n\Delta_s \le t < (n+1)\Delta_s
f_j \Delta_s \le s^{(j)} < (f_j + 1)\Delta_s
\delta_{i,j} = 1 \text{ when } i = j, \quad \delta_{i,j} = 0 \text{ when } i \ne j.   (31)

Therefore, the discrete solution of the GLIF ODE can be written as follows:

u[n] = u_\mathrm{kern}[n] + u_\mathrm{syn}[n].   (32)
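One way to realize the look-up-table idea behind (27), (30) and (31) is sketched below (illustrative only, not the paper's code): Π is pre-computed once on the simulation grid, and spike times are mapped to step indices.

```python
import numpy as np

def precompute_pi(g_kern, dt, horizon_ms, C_m=1.0):
    """Tabulate Pi(tau) of Eq. (27) on the grid tau = 0, dt, 2*dt, ...; C_m is a placeholder value."""
    tau = np.arange(0.0, horizon_ms, dt)
    integral = np.concatenate(([0.0], np.cumsum(g_kern(tau)) * dt))[:-1]  # left Riemann sum
    return np.exp(-integral / C_m)

def spike_indices(spike_times_ms, dt):
    """Eq. (31): map each input spike time s^(j) to its step index f_j."""
    return np.floor(np.asarray(spike_times_ms) / dt).astype(int)
```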

3.2.2. Generalized Exponential Moving Average

The Exponential Moving Average (EMA) is a way to calculate a moving average S of a set of discrete data x:

S[0] = x[0]
S[n+1] = \alpha S[n] + \beta x[n+1].   (33)

The average value S can then be expressed as:

S[n] = \alpha^n x[0] + \beta \sum_{i=1}^{n} e^{(n - i)\log\alpha}\, x[i],   (34)

so that when α < 1, the averaging weights of the data decrease exponentially from the latest data x[n] to the earlier data. For the normal EMA, the parameters α and β are constants and fulfill the boundary condition α + β ≡ 1, yet the exponential averaging property is still satisfied as long as α < 1. We henceforth denote an EMA with variable averaging parameters as the Generalized EMA (GEMA). In our model solution, we can thus define

\alpha[n] = \frac{\Pi(n\Delta_s)}{\Pi((n-1)\Delta_s)} = e^{-\int_{(n-1)\Delta_s}^{n\Delta_s} g_\mathrm{kern}(\tau)/C_m\, d\tau}, \quad \beta \equiv 1,   (35)

so that the discrete solution of the GLIF ODE can be written in an iterative fashion as follows:

u[n] = u_\mathrm{kern}[n] + u_\mathrm{syn}[n]
u_\mathrm{syn}[n] = \alpha[n]\, u_\mathrm{syn}[n-1] + \Delta u_s \sum_j w_j\, \delta_{n, f_j}
u_\mathrm{syn}[0] = 0.   (36)

Noting that gkern is smooth and Δs is small, we have

\alpha[n] \approx e^{-g_\mathrm{kern}(n\Delta_s)\, \Delta_s / C_m}.   (37)

Fig. 5. GEMA window width according to the survival time. The simulation step size is 0.1 ms.

The variable parameter α is plotted in Fig. 5 for a simulation step size Δs = 0.1 ms, where we can see that α is always smaller than one, satisfying the EMA requirement. It can also be observed that α tends to a constant at larger values of n, where the input spikes are regularly exponential-moving-averaged into usyn. In the "Variable Window Region", however, α is significantly decreased. Since α is the parameter indicating the averaging window size of the GEMA, a smaller α means that fewer input spikes are averaged into usyn. Suppose two identical input spike trains arrive at the neuron, one in the variable window region and one in the constant window region: the decreased α for the first spike train makes it more difficult to increase the neuron membrane voltage or to fire a new action potential than for the latter spike train, which suggests how the refractory period appearing in biological neurons is characterized by the GLIF model. On the contrary, the averaging parameter α would always be a constant if we applied the same GEMA process to a NLIF model, so no refractory period can be found in the NLIF model.
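A compact sketch of the GEMA update (36)-(37) is given below; it is an illustration under stated assumptions (pre-tabulated kernels, a fixed threshold, and restarting the survival time at each firing), not the authors' code.

```python
import numpy as np

def simulate_glif(weights_per_step, g_kern_tab, u_kern_tab, dt,
                  u_th=4.69, du_s=8.96, C_m=1.0):
    """GLIF neuron with the GEMA update of Eqs. (36)-(37).

    weights_per_step[n] : sum of connection weights of input spikes landing in step n
    g_kern_tab, u_kern_tab : kernels tabulated over survival-time steps (Eqs. 22, 25)
    C_m is a placeholder membrane capacitance (uF/cm^2).
    """
    n_steps = len(weights_per_step)
    u = np.zeros(n_steps)
    fires = []
    u_syn, k = 0.0, len(g_kern_tab) - 1          # k indexes the survival time
    for n in range(n_steps):
        k = min(k + 1, len(g_kern_tab) - 1)
        alpha = np.exp(-g_kern_tab[k] * dt / C_m)            # Eq. (37)
        u_syn = alpha * u_syn + du_s * weights_per_step[n]   # Eq. (36)
        u[n] = u_kern_tab[k] + u_syn                         # Eq. (32)/(36)
        if u[n] >= u_th and (n == 0 or u[n - 1] < u_th):     # upward crossing, Eq. (2)
            fires.append(n)
            k, u_syn = 0, 0.0        # restart the kernels at the new firing time
    return u, fires
```

Because the update needs only one exponential (or one table look-up) and one multiply-add per step, the per-neuron cost stays close to that of a plain NLIF update, which is the point of the GEMA formulation.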

3.3. Model performance

The two major issues that define a model's performance are its speed and its accuracy. The less computational effort needed to simulate a single neuron, the easier it is to integrate it into a large scale ASNN. The accuracy describes how reliably the model reproduces the behavior of a biological neuron. The computational effort of a neuron model can easily be evaluated by simulating it together with another known model under the same conditions and comparing their computation times. Evaluating the accuracy of a model is more complicated, since no established quantitative index for the accuracy of a neuron model exists so far. In this study, the Missed Fire Rate (MFR) and the Accidental Fire Rate (AFR) are used as indicators of a neuron model's accuracy. Suppose one input spike train is used to stimulate both a reference neuron model (a model considered reliable in reproducing biological neuron activities) and a testing model. The generated output spike trains are Sr = {sr(1), sr(2), ..., sr(m)} and St = {st(1), st(2), ..., st(n)} respectively, which means the reference model fires output spikes at simulation times sr(i) and the testing model fires at simulation times st(j), where i = 1, 2, ..., m and j = 1, 2, ..., n are the spike indices. A small tolerance ε is used so that if a spike pair sr(i) and st(j) is found satisfying |sr(i) − st(j)| < ε, the i-th spike in Sr and the j-th spike in St are considered matched spikes. By searching for all matched spike pairs throughout the two output spike trains, and removing the matched spikes from their own spike trains immediately after they are found, the remaining spikes in Sr are called the Missed Fires, and the remaining spikes in St are called the Accidental Fires. Note that removing matched fires from their own spike trains ensures that each spike in either output spike train is matched to no more than one spike in the other output spike train. Consequently, the MFR and AFR are defined as follows:

\mathrm{MFR} = \frac{\text{Number of Missed Fires}}{\text{Number of matched spike pairs}}
\mathrm{AFR} = \frac{\text{Number of Accidental Fires}}{\text{Number of matched spike pairs}},   (38)

so that the MFR is the error rate at which the testing model should fire an action potential spike yet misses it, and the AFR is the rate at which the testing model should remain quiescent yet fires a spike accidentally. Lower MFR and AFR values indicate better accuracy of the testing model.
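A straightforward greedy implementation of the matching and of Eq. (38) might look as follows (a sketch, not the paper's code); it pairs each reference spike with the nearest unmatched test spike within the tolerance ε.

```python
def mfr_afr(s_ref, s_test, eps=5.0):
    """Missed Fire Rate and Accidental Fire Rate, Eq. (38).

    s_ref, s_test: spike times (ms) of the reference and testing models.
    """
    unmatched = list(s_test)
    matched = 0
    for sr in s_ref:
        best = min(unmatched, key=lambda st: abs(st - sr), default=None)
        if best is not None and abs(best - sr) < eps:
            unmatched.remove(best)   # each spike can be matched at most once
            matched += 1
    missed = len(s_ref) - matched    # reference spikes left without a partner
    accidental = len(unmatched)      # testing spikes left without a partner
    if matched == 0:
        return float("inf"), float("inf")
    return missed / matched, accidental / matched

# Example usage: mfr, afr = mfr_afr(hh_spike_times, glif_spike_times, eps=5.0)
```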

4. Results

We built three neurons using the GLIF, NLIF, and HH models respectively. The HH model described in (14) was considered the reference model due to its widely accepted accuracy in reproducing the activities of the giant axon neuron of a squid. The GLIF parameters and kernel functions were extracted from the HH model using the methods described earlier. A NLIF neuron was tested in order to compare it with the GLIF model. The same firing threshold uth = 4.69 mV was selected as in the GLIF model, and the time constant and linear resistance were selected using (12) to ensure similar quiescent behaviors of the NLIF and GLIF models. An absolute refractory period Δabs = 5 ms suggested by Schliebs et al.20 was adopted to prevent oscillations of the membrane potential in the NLIF model. Spike trains generated by a Poisson process with mean frequency λP = 150 Hz were used to stimulate and test all models. A portion of the recorded membrane potentials and firing times for all neurons is illustrated in Fig. 6. Note that these results were obtained by solving the HH, GLIF and NLIF ODEs numerically in the Mathematica symbolic computation environment, with adaptive step size and automatic solving method options. No precision warnings were raised during any of the solving procedures, indicating that the solutions are reasonably accurate.

Fig. 6. Comparison of membrane voltage evolution under the current input is of a Poisson input spike train for the NLIF, GLIF and HH models. Vertical bars under the membrane voltage of the NLIF model indicate its spiking locations.

Compared to the NLIF model, the GLIF model reproduces action potentials with a spike shape similar to that of the HH model. It can be seen in Fig. 6 that the GLIF model generates fewer accidental fires than the NLIF model. The only fire missed by both the GLIF and NLIF models in Fig. 6 is the second of two adjacent fires found in the HH model, which is probably due to the bursting mechanism under large stimulation; such a bursting feature cannot be captured by any one-dimensional neuron model. A comparison of the membrane potentials for these three models is shown in Fig. 7. The membrane potentials are almost the same for the three models when the survival time is longer than 10 ms (in the so-called "silent period"), confirming that the NLIF model is a good approximation when the neuron stays quiescent for long enough. Yet the membrane voltage trajectory of the NLIF model is completely different from those of the GLIF and HH models in the post-fire region, marked as the refractory period in Fig. 7.

Fig. 7. Zoomed view of the simulated membrane potentials of the GLIF, HH and NLIF models.

During the refractory period, the influence of the input current on the membrane potential is attenuated, which makes the neuron less likely to fire a new action potential during that period. The speed of the GLIF model was also tested and compared to the HH and NLIF models. In order to make the comparison fair, we built 1000 neurons for each of the three models in the Matlab environment on an Intel i7-2600 workstation with 4 GB of memory. The neurons were simulated for 1000 ms at different simulation step sizes h. The commonly used fourth-order Runge-Kutta (RK4) method was adopted to solve the HH and NLIF model ODEs numerically, while the GEMA method was used for the GLIF model. The computation times for these simulations are listed in Table 2.

Table 2. Computation time of the HH, NLIF and GLIF models under various simulation step sizes.

h (ms)   THH (s)   TNLIF (s)   TGLIF (s)
0.02     100.1     3.554       2.574
0.04     50.04     1.775       1.288
0.06     33.36     1.180       0.8588
0.08     25.02     0.8881      0.6453
0.10     N/A       0.7125      0.5143
0.20     N/A       0.3596      0.2577
0.30     N/A       0.2431      0.1711
0.40     N/A       0.1858      0.132
0.50     N/A       0.1481      0.1032
0.60     N/A       0.1246      0.0863
0.70     N/A       0.1094      0.0734
0.80     N/A       0.0954      0.0646
0.90     N/A       0.0844      0.0572
1.00     N/A       0.0766      0.0517

Solving the HH model at step sizes larger than 0.08 ms failed in every case, because the RK4 method did not converge at some steps under those step sizes. Overall, the GLIF model with the GEMA method is about 30% faster than the NLIF model, while both of these LIF models are about 50 times faster than the HH model. The MFR and AFR measures were evaluated as well. The HH model simulation result at a 0.04 ms step size was used as the reference output spike train, and a matching tolerance ε = 5 ms was adopted. The MFR and AFR for both the NLIF and GLIF neurons are plotted in Fig. 8. It can be observed that the AFR and MFR of the GLIF model are much lower than those of the NLIF model, and they are empirically found to be independent of the simulation step size. Comparatively, the MFR of the NLIF model increases dramatically when the simulation step size is larger than 0.4 ms, an indication that the calculation error of the RK4 method increases with the step size. Since the computation time of either model decreases rapidly as the step size increases, the possibility of selecting a larger step size for the GLIF model makes it more competitive in large scale ASNN applications.

Fig. 8. Missed Fire Rates and Accidental Fire Rates of the NLIF and GLIF models under various simulation step sizes.

5. Conclusion and future work

The GLIF model introduced in this study can accurately simulate the dynamics of a neuron cell membrane potential, provided the parameters and kernel functions are extracted and fitted carefully using the statistical method we provided. Although the reference model used to demonstrate the extraction procedure and to test the GLIF model was the HH model with two ionic channels, our GLIF model could easily be fitted to any more complicated HH model with more ion channels. This last assertion can be supported simply by plotting the trace of the conductance after each spike for each ion channel and fitting it to the kernel functions. The comparison of the GLIF model performance with that of the NLIF model shows that our model provides much better accuracy in simulating biological neuron activity. With the GEMA method, the computational complexity of the GLIF model is kept at an acceptable level. Such outcomes increase the prospects of the GLIF model for implementation in larger scale and real-time ASNNs. As we seek to reach this implementation goal, future research will focus on building more biologically plausible ASNNs using the GLIF neuron model on parallel computation platforms such as General Purpose Graphics Processing Units (GPGPU) and FPGA devices, and on applying such ASNNs to a multitude of real-world problems associated with pattern recognition and classification, among other things.

Acknowledgements

This work is supported by the National Science Foundation under grants CNS-0959985, CNS-1042341, HRD-0833093, and IIP-1230661. This is a preprint of an article submitted for consideration in the International Journal of Neural Systems, Copyright © 2014 World Scientific Publishing Co., http://www.worldscientific.com/worldscinet/ijns.

Appendix A. Solving for the GLIF ODE

The first order ordinary differential equation in (1) can be solved analytically. Regardless of any changes of tf, we can always define

P(t) = g_\mathrm{kern}(t - t^f)/C_m
Q(t) = \left[ i_s(t) + i_\mathrm{kern}(t - t^f) \right]/C_m,   (A.1)

and a general solution of (1) can be found as

u(t) = \left[ \int Q(t)\, e^{\int P(t)\,dt}\, dt + C \right] e^{-\int P(t)\,dt}.   (A.2)

Considering the closest firing time tf before t, and noting that u(tf) = uth, the constant C can be solved for as

C = u_\mathrm{th}\, e^{\int^{t^f} P(\tau)\,d\tau} - \int^{t^f} Q(\tau)\, e^{\int^{\tau} P(\tau')\,d\tau'}\, d\tau.   (A.3)

Hence the particular solution of (1) is

u(t) = \left[ \int_{t^f}^{t} \frac{Q(\tau)}{\tilde{\Pi}(\tau)}\, d\tau + \frac{u_\mathrm{th}}{\tilde{\Pi}(t^f)} \right] \tilde{\Pi}(t),   (A.4)

where

\tilde{\Pi}(\tau) = e^{-\int P(\tau)\,d\tau}.   (A.5)

Considering the survival time tr = t − tf, for any t and tf we have

u(t) = u_\mathrm{th}\, e^{-\int_{0}^{t_r} P(\tau + t^f)\,d\tau} + \int_{0}^{t_r} Q(\tau + t^f)\, e^{-\int_{\tau}^{t_r} P(\tau' + t^f)\,d\tau'}\, d\tau.   (A.6)

Since the term

e^{-\int_{0}^{t_r} P(\tau + t^f)\,d\tau}   (A.7)

is not related to the integration variable τ, it can be moved out of the integration, and with P(τ + tf) = gkern(τ)/Cm from (A.1) this yields

u(t) = e^{-\int_{0}^{t_r} g_\mathrm{kern}(\tau)/C_m\,d\tau} \left[ u_\mathrm{th} + \int_{0}^{t_r} Q(\tau + t^f)\, e^{\int_{0}^{\tau} g_\mathrm{kern}(\tau')/C_m\,d\tau'}\, d\tau \right].   (A.8)

Define

\Pi(t_r) = e^{-\int_{0}^{t_r} g_\mathrm{kern}(\tau)/C_m\,d\tau},   (A.9)

so that the solution can be written as

u(t) = u_\mathrm{th}\, \Pi(t_r) + \Pi(t_r) \int_{0}^{t_r} \frac{i_\mathrm{kern}(\tau)}{C_m\, \Pi(\tau)}\, d\tau + \Pi(t_r) \int_{0}^{t_r} \frac{i_s(\tau + t^f)}{C_m\, \Pi(\tau)}\, d\tau.   (A.10)

The first two terms of this solution describe the post-fire membrane voltage trajectory ignoring the influence of new stimulations after the fire, which can be defined as

u_\mathrm{kern}(t_r) = \Pi(t_r) \left[ u_\mathrm{th} + \int_{0}^{t_r} \frac{i_\mathrm{kern}(\tau)}{C_m\, \Pi(\tau)}\, d\tau \right],   (A.11)

while the third term in the solution is the influence of the input spikes on the membrane voltage:

u_\mathrm{syn}(t_r) = \Pi(t_r) \int_{0}^{t_r} \frac{i_s(\tau + t^f)}{C_m\, \Pi(\tau)}\, d\tau.   (A.12)

References

1. W. Maass, Networks of spiking neurons: The third generation of neural network models, IEEE Trans. Neural Netw. 10(9) (1997) 1659–1671.
2. D. Z. Jin, Spiking neural network for recognizing spatiotemporal sequences of spikes, Phys. Rev. E 69 (2004) 021905–021918.
3. A. L. Hodgkin and A. F. Huxley, Resting and action potentials in single nerve fibres, J. Physiol. 104(2) (1945) 176–195.
4. A. L. Hodgkin and A. F. Huxley, The components of membrane conductance in the giant axon of Loligo, J. Physiol. 116(4) (1952) 473–496.
5. M. Pospischil, M. Toledo-Rodriguez, C. Monier, Z. Piwkowska, T. Bal, Y. Fregnac, H. Markram and A. Destexhe, Minimal Hodgkin-Huxley type models for different classes of cortical and thalamic neurons, Biol. Cybern. 99(4-5) (2008) 427–441.
6. E. M. Izhikevich, Simple model of spiking neurons, IEEE Trans. Neural Netw. 14(6) (2003) 1569–1572.
7. E. M. Izhikevich, Which model to use for cortical spiking neurons?, IEEE Trans. Neural Netw. 15(5) (2004) 1063–1070.
8. D. Yudanov and L. Reznik, Scalable multi-precision simulation of spiking neural networks on GPU with OpenCL, in Proc. 2012 Int. Joint Conf. Neural Networks (Brisbane, Australia, 2012), pp. 1–8.
9. P. Arena, L. Fortuna, M. Frasca and L. Patane, Learning anticipation via spiking networks: application to navigation control, IEEE Trans. Neural Netw. 20(2) (2009) 202–216.
10. A. N. Burkitt, A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input, Biol. Cybern. 95(1) (2006) 1–19.
11. M. J. Chacron, K. Pakdaman and A. Longtin, Interspike interval correlations, memory, adaptation, and refractoriness in a leaky integrate-and-fire model with threshold fatigue, Neural Comput. 15(2) (2003) 253–278.
12. Y.-H. Liu and X.-J. Wang, Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron, J. Comput. Neurosci. 10(1) (2001) 25–45.
13. C. R. Huyck and R. V. Belavkin, Counting with neurons: Rule application with nets of fatiguing leaky integrate and fire neurons, in Proc. 7th Int. Conf. on Cognitive Modelling (Trieste, Italy, 2006), pp. 142–147.
14. E. Nichols, L. J. McDaid and N. H. Siddique, Case study on a self-organizing spiking neural network for robot navigation, Int. J. Neural Syst. 20(6) (2010) 501–508.
15. T. J. Strain, L. J. McDaid, T. M. McGinnity, L. P. Maguire and H. M. Sayers, An STDP training algorithm for a spiking neural network with dynamic threshold neurons, Int. J. Neural Syst. 20(6) (2010) 463–480.
16. J. L. Rossello, V. Canals, A. Morro and A. Oliver, Hardware implementation of stochastic spiking neural networks, Int. J. Neural Syst. 22(4) (2012) 1250014-1–1250014-11.
17. J. Iglesias and A. E. P. Villa, Emergence of preferred firing sequences in large spiking neural networks during simulated neuronal development, Int. J. Neural Syst. 18(4) (2008) 267–277.
18. W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge Univ. Press, Cambridge, United Kingdom, 2002).
19. A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol. 117(4) (1952) 500–544.
20. S. Schliebs, N. Nuntalid and N. Kasabov, Towards spatio-temporal pattern recognition using evolving spiking neural networks, in Proc. 17th Conf. Neural Information Processing: Theory and Algorithms, eds. K. Wong, B. S. Mendis and A. Bouzerdoum (Sydney, Australia, 2010), pp. 163–170.