Note on the coefficient of variations of neuronal spike trains


Johannes Lengler · Angelika Steger

Department of Computer Science, ETH Zürich, Switzerland, and Collegium Helveticum, Schmelzbergstrasse 25, 8092 Zürich, Switzerland
E-mail: {lenglerj|steger}@inf.ethz.ch

February 17, 2015

Abstract It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon, but the underlying models were too complex to be examined analytically. In this paper we offer a simple model that shows the same effect but is accessible to analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that the results match previous findings obtained by simulation.

Keywords spike train · interspike interval variability · probabilistic synapses · Poisson spike train · coefficient of variation

1 Introduction

In a seminal paper, Softky and Koch [1993] analysed spike train recordings from primary visual cortex (V1) and extrastriate cortex (MT) and found that the coefficient of variation of the interspike times is approximately one, and thus almost equal to that of a completely random (Poisson) process. Since then, numerous papers have studied the problem of exhibiting neuronal mechanisms that can be held responsible for generating spike trains with a coefficient of variation (CV) around 1; see for example Shadlen and Newsome [1998], Christodoulou and Bugmann [2000] and the references therein. In a nutshell, by now the effect has been reproduced in various computational settings and is mainly attributed to a balance of inhibition and excitation. However, a model


that exhibits this phenomenon in an analytically tractable setting is still missing. Providing one is the aim of this short note. We follow the approach of Gerstein and Mandelbrot [1964] by considering a random walk on the integers, and enhance it with a reflecting barrier as done in Shadlen and Newsome [1998]. For this model we compute the expectation and variance of the interspike time explicitly as a function of the parameters of the model. Unlike previous models, we include (and focus on) the case of excess inhibition, and show that it is compatible with the neuronal regime, i.e., it gives a CV value close to 1 and a high-input regime in which many input spikes are needed for one output spike. Our analysis also shows that setting the reset potential to a value different from the resting potential, whose importance was observed previously (Troyer and Miller [1997]), mainly increases the firing rate, and thus only indirectly increases the coefficient of variation. We also complement the recent computational study of the importance of probabilistic synapses by Moreno-Bote [2014] with an analysis which, among other things, implies that, even without inhibition, the irregularity of spike trains is enhanced by successive stages of processing, as observed in Kara et al. [2000].

Model and Assumptions

Neuron model. We consider a simple counting model that receives excitatory and inhibitory input. We assume that each excitatory signal contributes +1, while each inhibitory signal contributes −1. The neuron emits an output signal as soon as the total sum reaches a threshold (that we denote by $k$ in this paper). Then the neuron's counter is reset to zero and the process continues. We also assume that there is a reflecting barrier at $-\hat{k}$, i.e., if the counter is at $-\hat{k}$ and the neuron receives an inhibitory signal, this signal is ignored. Clearly, this simple model lacks many features of real neurons; most notably, we ignore membrane leakage. However, in the high-input regime that we study in this paper, the changes imposed by the leakage are small compared to the fluctuations imposed by excitatory and inhibitory pulses, so the effect on the CV is marginal.

Input signals. For our analysis we assume that the excitatory input is a Poisson spike train with rate $\lambda_+$ and the inhibitory input is a Poisson spike train with rate $\lambda_-$. Note that in a Poisson process with parameter $\lambda$ all interspike times are independent and exponentially distributed with parameter $\lambda$. Note also that, due to the nature of a Poisson process, the combined process of excitatory and inhibitory signals is a Poisson process with rate $\lambda_+ + \lambda_-$ in which the probability that the next signal is excitatory is $p := \lambda_+/(\lambda_+ + \lambda_-)$, independent of the history of the process.
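To make the setup concrete, the following minimal Python sketch simulates the counting model under these assumptions (the function name and interface are ours, not from the paper); it exploits the observation above by drawing each input signal as excitatory with probability $p$.

```python
import random

def simulate_neuron(p, k, k_hat, n_out, rng=random.Random(0)):
    """Simulate the counting model: each input signal is excitatory (+1)
    with probability p and inhibitory (-1) otherwise; inhibitory signals
    are ignored at the reflecting barrier -k_hat.  An output spike fires
    when the counter reaches k, after which the counter resets to 0.
    Returns, for each output spike, the number N of input signals used."""
    counts, counter, n = [], 0, 0
    while len(counts) < n_out:
        n += 1
        if rng.random() < p:
            counter += 1
        elif counter > -k_hat:        # reflecting barrier at -k_hat
            counter -= 1
        if counter == k:
            counts.append(n)
            counter, n = 0, 0
    return counts
```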


The coefficient of variation is defined as the quotient of the standard deviation and the mean of the lengths of the interspike intervals,
$$\mathrm{CV} = \frac{\sqrt{\operatorname{Var}(\mathrm{ISI})}}{\mathbb{E}(\mathrm{ISI})}.$$

Random walk. With the assumptions and notation from above, the state of our model neuron can be viewed as a (homogeneous) discrete Markov chain on the integers $\{-\hat{k}, \dots, k\}$ (cf. Gerstein and Mandelbrot [1964], Holden [1976]); the Markov chain starts in 0 and in each step moves from its current state $i$ either to state $i+1$ (with probability $p$) or (with probability $1-p$) to state $i-1$ if $i > -\hat{k}$. In the latter case, if $i = -\hat{k}$ then it stays at $-\hat{k}$ instead of going down. To compute the CV of the interspike intervals, it will be sufficient to understand the random variable $N$ that counts the number of steps needed to reach $k$, as we will see below.

2 Balanced excitation and inhibition

Intuitively, it is easy to understand what happens if we have both excitation and inhibition. If excitation exceeds inhibition, we have a tendency to move towards $k$ quickly, which leads to a CV value of less than one (Feng and Brown [1998]). If inhibition exceeds excitation, we have a tendency to move towards the lower barrier. Once we are there, we can intuitively view the random walk as a geometric distribution: when we leave $-\hat{k}$ we have a certain (tiny) probability of reaching $k$. If this fails, we come back to the lower barrier and try again. As the geometric distribution is the discrete analogue of the exponential distribution, this analogy indicates that in the case of excess inhibition we should expect a CV value around 1. There is a small deviation from the geometric distribution stemming from the fact that after a spike we do not restart at the lower barrier $-\hat{k}$, but rather at 0. Therefore there is a slightly increased chance to reach $k$ without ever touching the lower barrier. This is only relevant if excitation and inhibition are almost balanced, since otherwise this event has negligible probability. Before we state our results, we first state the setup precisely.

Output interspike times. Let $X_{\mathrm{out}}$ denote the interspike time of the output spike train. Recall that the interspike time $X_{\mathrm{in}}$ of the combined input signal satisfies $\mathbb{E}[X_{\mathrm{in}}] = 1/(\lambda_+ + \lambda_-)$ and $\operatorname{Var}[X_{\mathrm{in}}] = 1/(\lambda_+ + \lambda_-)^2$. Thus

$$\mathbb{E}[X_{\mathrm{out}}] = \sum_{n \ge 0} \mathbb{E}[X \mid N = n] \Pr[N = n] = \sum_{n \ge 0} n \cdot \mathbb{E}[X_{\mathrm{in}}] \Pr[N = n] = \frac{\mathbb{E}[N]}{\lambda_+ + \lambda_-}$$

and, similarly,

$$\operatorname{Var}[X_{\mathrm{out}}] = \mathbb{E}[N] \operatorname{Var}[X_{\mathrm{in}}] + \operatorname{Var}[N] \cdot (\mathbb{E}[X_{\mathrm{in}}])^2 = \frac{\mathbb{E}[N] + \operatorname{Var}[N]}{(\lambda_+ + \lambda_-)^2}.$$
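As the next paragraph notes, the $\lambda$-terms cancel in the quotient, so the output CV can be estimated from the step counts $N$ alone. A small helper (ours, building on simulate_neuron above) makes this concrete:

```python
from statistics import mean, pvariance

def cv_from_counts(counts):
    """CV of the output ISIs: the rate terms cancel, leaving
    CV(X_out) = sqrt(E[N] + Var[N]) / E[N]."""
    en, vn = mean(counts), pvariance(counts)
    return (en + vn) ** 0.5 / en

# e.g. cv_from_counts(simulate_neuron(p=0.45, k=10, k_hat=10, n_out=10000))
```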


In order to determine the coefficient of variation of the output signal, defined as $\mathrm{CV}(X_{\mathrm{out}}) = \sqrt{\operatorname{Var}[X_{\mathrm{out}}]}/\mathbb{E}[X_{\mathrm{out}}]$, we thus only need to determine $\mathbb{E}[N]$ and $\operatorname{Var}[N]$, as the $\lambda$-terms cancel.

Determining the expectation. We first aim at computing $\mathbb{E}[N]$. As it turns out, we need to compute several expectations simultaneously: let $N_{i \to k}$ denote the number of steps that are needed to reach $k$ if we start in $i$, and denote by $d_i := \mathbb{E}[N_{i \to k}]$ the expectations of these random variables. Then these satisfy the following recursion system: for $-\hat{k} < i \le k-1$,

$$d_i = 1 + p\,d_{i+1} + (1-p)\,d_{i-1}, \tag{1}$$
$$d_{-\hat{k}} = 1 + p\,d_{-\hat{k}+1} + (1-p)\,d_{-\hat{k}}, \tag{2}$$
$$d_k = 0. \tag{3}$$
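Before solving the recursion in closed form, it can be checked numerically: the system (1)–(3) is linear and tridiagonal, so it can be solved directly. A minimal sketch (our own cross-check, not part of the paper's derivation):

```python
import numpy as np

def expected_steps(p, k, k_hat):
    """E[N] = d_0, obtained by solving the linear system (1)-(3) directly.
    Unknowns are d_i for i = -k_hat, ..., k-1; d_k = 0 is substituted."""
    n = k_hat + k
    A, b = np.zeros((n, n)), np.ones(n)
    for row, i in enumerate(range(-k_hat, k)):
        A[row, row] = 1.0
        if i + 1 < k:                 # step up, unless i+1 is the threshold k
            A[row, row + 1] -= p
        if i > -k_hat:                # step down ...
            A[row, row - 1] -= 1 - p
        else:                         # ... or stay put at the barrier (eq. 2)
            A[row, row] -= 1 - p
    d = np.linalg.solve(A, b)
    return d[k_hat]                   # index k_hat corresponds to state i = 0
```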

Rearranging the terms we obtain
$$d_{-\hat{k}} = d_{-\hat{k}+1} + \frac{1}{p} \qquad\text{and}\qquad d_{i+1} = \frac{1}{p}\,d_i - \frac{1-p}{p}\,d_{i-1} - \frac{1}{p},$$
from which we obtain by a straightforward induction that
$$d_{-\hat{k}+j} = d_{-\hat{k}} - \frac{j}{2p-1} + \frac{1-p}{(2p-1)^2} \cdot \left(1 - \left(\frac{1-p}{p}\right)^{j}\right),$$
and the condition $d_k = 0$ thus implies that
$$d_{-\hat{k}} = \frac{\hat{k}+k}{2p-1} - \frac{1-p}{(2p-1)^2} \cdot \left(1 - \left(\frac{1-p}{p}\right)^{\hat{k}+k}\right)$$
and thus
$$d_{-\hat{k}+j} = \frac{\hat{k}+k-j}{2p-1} - \frac{1-p}{(2p-1)^2} \cdot \left(\left(\frac{1-p}{p}\right)^{j} - \left(\frac{1-p}{p}\right)^{\hat{k}+k}\right). \tag{4}$$

We are interested in $d_{-\hat{k}+j}$ for $j = \hat{k}$:
$$\mathbb{E}[N] = d_0 = \frac{k}{2p-1} - \frac{1-p}{(2p-1)^2} \cdot \left(\frac{1-p}{p}\right)^{\hat{k}} \cdot \left(1 - \left(\frac{1-p}{p}\right)^{k}\right). \tag{5}$$

Note that for $\hat{k} \to \infty$ we have $d_0 \to \frac{k}{2p-1}$, consistent with the well-known answer for the unrestricted random walk in the case $p > 1/2$, cf. e.g. Feller [1968].
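Equation (5) is straightforward to evaluate; a direct transcription (our helper; note that (5) requires $p \neq 1/2$) can be checked against the linear-system solver above:

```python
def expected_N(p, k, k_hat):
    """E[N] = d_0 according to equation (5); requires p != 1/2."""
    g = (1 - p) / p                                   # gamma = (1-p)/p
    return k / (2*p - 1) - (1 - p) / (2*p - 1)**2 * g**k_hat * (1 - g**k)

# expected_N(0.45, 10, 10) and expected_steps(0.45, 10, 10) should agree.
```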


Determining the variance. Now we compute $\operatorname{Var}[N]$. As before, we have to compute all variances $f_i := \operatorname{Var}[N_{i \to k}]$ simultaneously. For $-\hat{k} < i < k$ we have
$$\begin{aligned} f_i &= \mathbb{E}[(N_{i \to k} - d_i)^2] \\ &= p \cdot \mathbb{E}[(1 + d_{i+1} - d_i + N_{i+1 \to k} - d_{i+1})^2] + (1-p) \cdot \mathbb{E}[(1 + d_{i-1} - d_i + N_{i-1 \to k} - d_{i-1})^2] \\ &= p \cdot (1 + d_{i+1} - d_i)^2 + (1-p) \cdot (1 + d_{i-1} - d_i)^2 + p \cdot f_{i+1} + (1-p) \cdot f_{i-1}. \end{aligned}$$
We use the abbreviations $\alpha := 1/(2p-1)$ and $\beta := -(1-p)\alpha^2\gamma^{\hat{k}}$, where $\gamma := (1-p)/p$, in order to deduce from (4) that $d_i = \alpha(k-i) + \beta(\gamma^i - \gamma^k)$. With this we can rewrite the constant term in the above recursion as
$$p \cdot (1 + d_{i+1} - d_i)^2 + (1-p) \cdot (1 + d_{i-1} - d_i)^2 = p \cdot (1 - \alpha + \beta\gamma^i(\gamma - 1))^2 + (1-p) \cdot (1 + \alpha + \beta\gamma^i(\gamma^{-1} - 1))^2 = 4p(1-p)\alpha^2 + 4\beta\gamma^i + \beta^2\gamma^{2i}(-1 + p\gamma^2 + (1-p)\gamma^{-2}).$$
We set $\hat{f}_i := f_i + \beta^2\gamma^{2i}$ and obtain
$$\hat{f}_i = 4p(1-p)\alpha^2 + 4\beta\gamma^i + p\hat{f}_{i+1} + (1-p)\hat{f}_{i-1}.$$
One easily checks that $\hat{f}_i = i \cdot 4\beta\alpha\gamma^i - i \cdot 4\alpha^3 p(1-p) + \eta + \xi\gamma^i$ is a solution. Thus
$$f_i = i \cdot 4\beta\alpha\gamma^i - i \cdot 4\alpha^3 p(1-p) + \eta + \xi\gamma^i - \beta^2\gamma^{2i}.$$
Here $\eta$ and $\xi$ are parameters that have to be chosen in order to fulfill the two boundary conditions $f_k = 0$ and
$$f_{-\hat{k}} = \underbrace{p \cdot (1 + d_{-\hat{k}+1} - d_{-\hat{k}})^2 + (1-p)}_{=\gamma} + p \cdot f_{-\hat{k}+1} + (1-p) \cdot f_{-\hat{k}}.$$
Rephrasing the latter we get $f_{-\hat{k}} = \frac{\gamma}{p} + f_{-\hat{k}+1}$ and thus
$$\xi = (1-p)\alpha^4\bigl(1 - 4\hat{k}(2p-1) - 4(2-p)p\bigr)\gamma^{\hat{k}}.$$
Similarly, $f_k = 0$ gives us
$$\eta = 4\alpha^3(1-p)pk + \alpha^4(1-p)^2\gamma^{2(k+\hat{k})} - (1-p)\alpha^4\bigl(1 + 4(k+\hat{k}) - 8p(1 + k + \hat{k}) + 4p^2\bigr)\gamma^{k+\hat{k}}.$$
With these facts at hand we can finally determine $\operatorname{Var}[N] = f_0 = \eta + \xi - \beta^2$:
$$\begin{aligned} \operatorname{Var}[N] = {} & 4k\alpha^3(1-p)p - 4\hat{k}\alpha^3(1-p)(1 - \gamma^k)\gamma^{\hat{k}} + 4k\alpha^3(1-p)\gamma^{k+\hat{k}} \\ & - \alpha^4(1-p)^2\gamma^{2\hat{k}}(1 - \gamma^{2k}) + \alpha^4(1-p)\gamma^{\hat{k}}(1 - \gamma^k)\bigl(1 - 4(2-p)p\bigr). \end{aligned} \tag{6}$$
Recalling that we already deduced that
$$\mathrm{CV}(X_{\mathrm{out}}) = \frac{\sqrt{\mathbb{E}[N] + \operatorname{Var}[N]}}{\mathbb{E}[N]}, \tag{7}$$
(5) and (6) provide us with an explicit formula for $\mathrm{CV}(X_{\mathrm{out}})$ as a function of $k$, $\hat{k}$ and $p := \lambda_+/(\lambda_+ + \lambda_-)$.
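Putting (5), (6) and (7) together gives a directly computable CV; the sketch below (our transcription of the formulas) reproduces the curves of Figure 1:

```python
def cv_out(p, k, k_hat):
    """CV of the output spike train via equations (5), (6), (7); p != 1/2."""
    a, g = 1 / (2*p - 1), (1 - p) / p              # alpha and gamma
    EN = k * a - (1 - p) * a**2 * g**k_hat * (1 - g**k)          # eq. (5)
    VarN = (4*k*a**3 * (1 - p)*p                                  # eq. (6)
            - 4*k_hat*a**3 * (1 - p) * (1 - g**k) * g**k_hat
            + 4*k*a**3 * (1 - p) * g**(k + k_hat)
            - a**4 * (1 - p)**2 * g**(2*k_hat) * (1 - g**(2*k))
            + a**4 * (1 - p) * g**k_hat * (1 - g**k) * (1 - 4*(2 - p)*p))
    return (EN + VarN) ** 0.5 / EN                                # eq. (7)

# e.g. cv_out(0.45, 10, 10) is slightly above 1 (excess inhibition).
```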

Fig. 1 CV value of the output spike train as a function of the probability $p$ of going 'up', for different values of the threshold $k$ and the lower barrier $\hat{k}$ (curves for $(k, \hat{k}) = (10, 10), (20, 0), (5, 15), (10, 20)$); data computed using equations (5), (6) and (7).

Interpretation of the result. For $p \to 0$ the leading term of $\mathbb{E}[N]$ is $\alpha^2\gamma^{k+\hat{k}}$, while that of $\mathbb{E}[N] + \operatorname{Var}[N]$ is $\alpha^4\gamma^{2k+2\hat{k}}$. This confirms our intuition that the CV value is approximately one for small $p$. Next consider the case that $p = \frac{1}{2} - \varepsilon$ for some small $\varepsilon > 0$. Then $\alpha = 1/(2\varepsilon)$ and $1 - \gamma^k \approx -4k\varepsilon$. The leading term of $\mathbb{E}[N]$ is thus $2k\varepsilon\alpha^2$. This time the corresponding term in $\mathbb{E}[N] + \operatorname{Var}[N]$ only yields a CV value of $1/\sqrt{k}$. However, there are three more terms that grow proportionally to $\varepsilon^{-3}$ and that can thus increase the CV value. In particular, the leading term $4\hat{k}\alpha^3(1-p)(1 - \gamma^{k+\hat{k}})$ yields a term that grows with $\hat{k}$. As the leading term of $\mathbb{E}[N]$ does not depend on $\hat{k}$, this shows that we can make the CV value arbitrarily large by choosing $\hat{k}$ sufficiently large. Figure 1 illustrates this phenomenon for several values of $k$ and $\hat{k}$.

Figure 2 shows the CV value for the same set of parameters as a function of the total excitatory input. Note that for Poisson spike trains the total rate of the union of several spike trains is just the sum of their rates. Thus, a model neuron that receives input from 25 neurons that each spike at 40 Hz receives a total excitatory input spike train of 1000 Hz. In Figure 2 we have set the rate $\lambda_-$ of the inhibitory input (depending on $\lambda_+$, $k$ and $\hat{k}$) such that the output rate is 40 Hz, where the output rate is computed as $(\lambda_+ + \lambda_-)/d_0$ with $d_0$ as in (5). We see that a reset potential that is approximately halfway between resting potential and threshold leads to a CV value close to one for a wide range of input rates; this coincides with the computational observations of Troyer and Miller [1997].
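The construction behind Figure 2 can be sketched as follows: for a given $\lambda_+$ we search for the $\lambda_-$ that yields a 40 Hz output rate. A minimal sketch under our assumptions (plain bisection; the helper name is ours, and expected_N is the transcription of (5) given above):

```python
def lambda_minus_for_rate(lam_plus, k, k_hat, target_rate=40.0):
    """Inhibitory rate lam_minus such that the output rate
    (lam_plus + lam_minus) / E[N] equals target_rate (cf. Figure 2).
    Assumes the target is achievable, i.e. out_rate(0) > target_rate."""
    def out_rate(lam_minus):
        p = lam_plus / (lam_plus + lam_minus)
        return (lam_plus + lam_minus) / expected_N(p, k, k_hat)
    lo, hi = 0.0, 10.0 * lam_plus          # output rate decreases in lam_minus
    for _ in range(100):                   # bisection; float midpoints in
        mid = (lo + hi) / 2                # practice never hit p = 1/2 exactly,
        if out_rate(mid) > target_rate:    # where (5) is singular
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Evaluating cv_out at the resulting $p = \lambda_+/(\lambda_+ + \lambda_-)$ then yields the solid curves of Figure 2.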

3 Probabilistic Synapses

From neurophysiology it is known that two connected neurons typically share 5–10 vesicle release sites, distributed over 1–5 synapses (Markram et al. [1997], Feldmeyer and Sakmann [2000]). For each site and incoming signal, at most one vesicle is released. It is also known that this happens only



Fig. 2 We consider a model neuron with total excitatory input rate $\lambda_+$ and total inhibitory input rate $\lambda_-$, where $\lambda_-$ is set such that the resulting output rate (given by $(\lambda_+ + \lambda_-)/d_0$ with $d_0$ as in (5)) is 40 Hz. Dashed lines, right axis: required choice of $\lambda_-$; solid lines, left axis: resulting CV value of the output spike train. We see that for excess inhibition the output CV is close to 1 if the reset potential is approximately halfway between resting potential and threshold.

with very small probability, say on the order of 10–20% (Branco et al. [2008]). It is plausible to assume that the mechanisms at different sites are independent.

In this section we consider the following scenario. The input to our model neuron consists only of excitatory signals with independent interspike times distributed like a random variable $X_{\mathrm{in}}$ (which we do not require to be exponentially distributed). The input goes through a synapse with $s$ sites that each forward the input to the target neuron with probability $1/s$. As before, $X_{\mathrm{out}}$ is the random variable that describes the interspike intervals of the output signal. In the following we show
$$\mathrm{CV}(X_{\mathrm{out}}) \approx \frac{1}{\sqrt{k}} \cdot \sqrt{1 + (\mathrm{CV}(X_{\mathrm{in}}))^2 - \tfrac{1}{s}}. \tag{8}$$
We first consider the random variable $S$ that counts the number of synaptic sites that are needed to elicit one output spike. We know that $\mathbb{E}[S] = ks$, but we would like to know the distribution of $S$. Recall that a negative binomial distribution $\mathrm{NBin}(k, p)$ with parameters $k$ and $p$ is the distribution of the number of successes in a sequence of independent and identically distributed Bernoulli trials with success probability $p$ until $k$ failures occur. The probability generating function of a negative binomially distributed random variable $N$ is
$$G_N(z) = \left(\frac{1-p}{1-pz}\right)^{k}.$$
Then $S$ is given by
$$S = k + N, \quad\text{where } N \sim \mathrm{NBin}(k, 1 - 1/s).$$


Its probability generating function is thus
$$G_S(z) = \left(\frac{1}{s - (s-1)z}\right)^{k} \cdot z^{k}.$$
The number of true input spikes needed to generate $S$ arrivals at synaptic sites is $\lceil S/s \rceil$. We ignore the ceiling function and let $S' := S/s$. The probability generating function of $S'$ is then
$$G_{S'}(z) = \mathbb{E}[z^{S'}] = \mathbb{E}[(z^{1/s})^{S}] = G_S(z^{1/s}).$$
Note that ignoring the ceiling function corresponds to allowing the remaining sites to contribute to the following spike. This is the reason why (8) is only an approximation, not a strict equality.

Recall that the probability generating function of $\sum_{j=1}^{S/s} X_j$ is given by $G_{S'}(G_X(z)) = G_S((G_X(z))^{1/s})$ and that this must equal (up to rounding, as explained above) $G_{X^{(k,s)}}(z)$, the probability generating function of the output interspike times. Hence we have
$$G_{X^{(k,s)}}(z) = G_{S'}(G_X(z)) = G_S\bigl((G_X(z))^{1/s}\bigr), \tag{9}$$
and we can compute the expectation and variance of $X^{(k,s)}$ in a routine manner: $\mathbb{E}[X^{(k,s)}] = G'_{X^{(k,s)}}(1)$ and $\operatorname{Var}[X^{(k,s)}] = G''_{X^{(k,s)}}(1) + G'_{X^{(k,s)}}(1) - (G'_{X^{(k,s)}}(1))^2$. Using (9) we obtain $\mathbb{E}[X^{(k,s)}] = k \cdot \mathbb{E}[X]$ (as desired), and by elaborate computations (or a computer algebra tool)
$$\operatorname{Var}[X^{(k,s)}] = k \cdot \left(\frac{s-1}{s}(\mathbb{E}[X])^2 + \operatorname{Var}[X]\right) = k(\mathbb{E}[X])^2 \cdot \left(1 + \mathrm{CV}(X)^2 - \tfrac{1}{s}\right),$$
from which (8) follows immediately.
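Approximation (8) is easy to check by simulation. The following sketch (ours; the function name and parameter choices are assumptions, not from the paper) feeds a Poisson train through $s$ Bernoulli($1/s$) sites into a counter with threshold $k$:

```python
import random

def cv_with_sites(k, s, n_out=5000, rng=random.Random(1)):
    """Monte-Carlo check of (8) for Poisson input (CV(X_in) = 1): every
    input spike is forwarded by each of the s sites independently with
    probability 1/s; the neuron fires once the counter reaches k."""
    isis, t_last, t, counter = [], 0.0, 0.0, 0
    while len(isis) < n_out:
        t += rng.expovariate(1.0)                     # next input spike
        counter += sum(rng.random() < 1 / s for _ in range(s))
        if counter >= k:
            isis.append(t - t_last)
            t_last, counter = t, 0
    m = sum(isis) / len(isis)
    var = sum((x - m) ** 2 for x in isis) / len(isis)
    return var ** 0.5 / m

# For k = 7, s = 5: compare with (2 - 1/s)**0.5 / 7**0.5  (eq. (8), CV_in = 1).
```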

Chains of neurons. To illustrate the effect of stochastic synapses, we study the extreme example of a chain of neurons, where the whole input of the $(i+1)$-st neuron is given by the output of the $i$-th neuron:

[schematic: input → level 1 ($s$ sites) → level 2 ($s$ sites) → level 3 ($s$ sites) → output]


Figure 3 shows the effect of the number of sites for $k = 7$ (and, say, $\hat{k} = 0$; but note that, as we have no inhibition here, the value of $\hat{k}$ is irrelevant). It is well known (e.g., Softky and Koch [1993]) that for $s = 1$ the CV value drops by a factor of $1/\sqrt{k}$ at each level. In particular, for a Poisson spike train we have a CV value of $1/\sqrt{k} = 0.447..$ for the output spike train of the first neuron, and a value of $1/k = 0.2$ for the output spike train of the second neuron. Figure 3 shows that the behavior changes dramatically when the number of sites is larger than one. Regardless of whether the input consists of a Poisson spike train with a CV value of one or a uniform distribution with a CV value of zero, the CV of the output spike trains converges very quickly to a limiting value of $\sqrt{(1 - 1/s)/(k-1)}$, which is roughly 0.47 for $s \ge 7$. Thus, even in this extreme example, the CV values do not fall below a moderately high level, respectively rise up to this level if the starting spike train is regular. This is consistent with the experimental observations in Kara et al. [2000]. For the simulation in Figure 3 we have chosen $k = 7$ to reflect the increase in amplitude of excitatory input to spiny cells in layers 4 and 6 as described in Bannister et al. [2002].

Fig. 3 Change of the CV value in a chain of neurons: for a Poisson input spike train (which has a CV value of 1) the CV value decreases, while for a regular input spike train (which has a CV value of 0) the CV value increases. The limiting value is in both cases given by $\sqrt{(1 - 1/s)/(k-1)}$, which is the stable point of equation (8). Values are computed as the mean of 100 spike trains of length 500; error bars show the standard error of the mean.
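The convergence shown in Figure 3 can be reproduced with a few lines building on the same mechanism as cv_with_sites above (a sketch under our assumptions; note that the spike count, and hence the precision, shrinks by a factor of about $k$ per level):

```python
def chain_cvs(k, s, levels=4, n_in=500000, rng=random.Random(2)):
    """CV value after each level of a chain of counting neurons without
    inhibition: the output train of level i is the input of level i+1,
    each spike passing through s sites with forwarding probability 1/s."""
    isis = [rng.expovariate(1.0) for _ in range(n_in)]    # Poisson input
    cvs = []
    for _ in range(levels):
        out, acc, counter = [], 0.0, 0
        for x in isis:
            acc += x
            counter += sum(rng.random() < 1 / s for _ in range(s))
            if counter >= k:
                out.append(acc)
                acc, counter = 0.0, 0
        isis = out
        m = sum(isis) / len(isis)
        var = sum((y - m) ** 2 for y in isis) / len(isis)
        cvs.append(var ** 0.5 / m)
    return cvs   # should approach ((1 - 1/s) / (k - 1)) ** 0.5
```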


References

N. J. Bannister, J. C. Nelson, and J. J. B. Jack. Excitatory inputs to spiny cells in layers 4 and 6 of cat striate cortex. Philosophical Transactions: Biological Sciences, 357(1428):1793–1808, Dec. 2002.

T. Branco, K. Staras, K. J. Darcy, and Y. Goda. Local dendritic activity sets release probability at hippocampal synapses. Neuron, 59(3):475–485, Aug. 2008.

C. Christodoulou and G. Bugmann. Near Poisson-type firing produced by concurrent excitation and inhibition. Biosystems, 58(1-3):41–48, Sept. 2000.

D. Feldmeyer and B. Sakmann. Synaptic efficacy and reliability of excitatory connections between the principal neurones of the input (layer 4) and output layer (layer 5) of the neocortex. The Journal of Physiology, 525(1):31–39, May 2000.

W. Feller. An Introduction to Probability Theory and Its Applications, Vol. 1. Wiley, 3rd edition, 1968.

J. J. Feng and D. D. Brown. Impact of temporal variation and the balance between excitation and inhibition on the output of the perfect integrate-and-fire model. Biological Cybernetics, 78(5):369–376, Apr. 1998.

G. Gerstein and B. Mandelbrot. Random walk models for the spike activity of a single neuron. Biophysical Journal, 4:41–68, 1964.

A. V. Holden. Models of the Stochastic Activity of Neurones. Springer, 1976.

P. Kara, P. Reinagel, and R. Reid. Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 27:635–646, 2000.

H. Markram, J. Lübke, M. Frotscher, A. Roth, and B. Sakmann. Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. The Journal of Physiology, 500(Pt 2):409–440, Apr. 1997.

R. Moreno-Bote. Poisson-like spiking in circuits with probabilistic synapses. PLOS Computational Biology, 10:1–13, 2014.

M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. The Journal of Neuroscience, 18(10):3870–3896, May 1998.

W. R. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. The Journal of Neuroscience, 13(1):334–350, Jan. 1993.

T. Troyer and K. Miller. Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Computation, 9:971–983, 1997.