Biophysical information representation in temporally correlated spike trains
William H. Nesse, Leonard Maler, and André Longtin
Department of Cellular and Molecular Medicine and Department of Physics, University of Ottawa, Ottawa, ON, Canada K1H 8M5
neural correlations ∣ spike train information
Negative-feedback processes are ubiquitous in biological systems. In neurons, spike-triggered adaptation processes inhibit subsequent spiking and can produce temporal correlations in the spike train, in which a longer than average interspike interval (ISI) makes a shorter subsequent ISI more likely and vice versa. Such correlations are common in neurons (1–7) and model neurons (8–14). The impact of temporal relationships in spike trains, including ISI correlations, on neural information processing has been under intense investigation (1, 2, 15–22). Temporally correlated spike trains pose a challenge for ISI-based sensory inferences because each ISI depends on both present and past stimulus activity, requiring complex strategies to make accurate inferences (23, 24). Some ISI correlations can be attributed to long-time-scale autocorrelations of the input (23, 25) or correlations related to long-time-scale adaptive responses (23, 26). However, if the ISI correlations arise from inputs or noise with fast autocorrelation time scales relative to both the mean ISI and the adaptation time scales of the cell, the resulting fine-grained time structure of spike trains can shape information processing of the underlying slower input fluctuations (2, 3, 11, 15, 18, 19, 27–29). Specifically, Jacobs et al. (29) have recently shown that fine-grained temporal coding of ISIs that takes into account adaptive responses (e.g., refractoriness) not only conveys significant information, but the information benefit accounts for behaviorally important levels of stimulus discrimination. If fine-grained temporal encoding is utilized for neural spike patterns, how are the temporal codes computed and communicated to postsynaptic neurons? This question is commonly posed as a "decoding" problem, in which statistical models are used to infer likely inputs (15, 17, 22, 29, 30).
Whereas these analyses can evaluate if an encoding of the spike train is viable, in this article we investigate the subjacent question of how internal cellular
dynamics can represent a fine-grained spike train encoding and carry out decoding. Commonly, spike-triggered adaptation currents (ACs) decay exponentially, reflecting state transitions of a large number of independent molecules governing the current's conductance (31). Through analysis of mathematical models, we show that an AC can efficiently represent spike train information. Our main result is that the dynamics of correlated spike emission can create statistical independence in the adaptation process. That is, we have found an independent statistical decomposition of correlated spike trains. Decoding of correlated ISIs involves inferences on the basis of conditional probability distributions (11, 29, 30). Yet, we find the AC activation states are a probabilistically independent and biophysical representation of the ISI code that does not require decoding on the basis of more computationally costly conditional probabilities. This dynamical coding property is distinct from coarse-grained integrative coding by slow-time-scale, activity-dependent processes (32–35). We further show that the entropy of the AC states represents the maximal stimulus information when the AC system is sufficiently activated, and displays weak-signal detectability equivalent to noncorrelated Poisson ISI coding but at a much-reduced firing-rate gain. We conclude by showing how simple synaptic dynamics may communicate adaptation state information to postsynaptic targets, which provides a testable experimental prediction of our theory. The adaptation-independence property is relevant to information transmission in regular-firing sensory afferents (1, 3, 36), in which correlated spike sequences are used to discriminate weak underlying signals from noise (2, 3, 11).

Results

Noisy stimulus current injection x(t) to a Morris–Lecar (ML) model (37), with generic parameters (38), produced input-fluctuation-driven spike patterns (Fig. 1A and SI Text, SI Methods).
We set the constant injected current I_m = 38 nA/cm² to be just below the deterministic threshold for firing, so that fluctuations in the noisy input x(t) determine spike times {t_i}. The strength of the fluctuations was large enough to perturb V(t) past spiking threshold but small enough to not obscure the intrinsic spike kinetics. In addition to the fast-activating potassium current W(t) responsible for spike repolarization, the model possessed a slower-decaying spike-activated AC, H(t) (31, 35) (Fig. 1B). The AC kinetics were modeled as a voltage-gated potassium channel, similar to a KV3.1 channel (2, 31). Activation of the AC hyperpolarizes the membrane and discourages spiking. Spikes that occur in quick succession induce elevated AC activation, making the later ISIs longer on average, whereas low AC values make subsequent ISIs shorter on average;

Author contributions: W.H.N. designed research; W.H.N. performed research; W.H.N. contributed new reagents/analytic tools; W.H.N. analyzed data; L.M. and A.L. provided important and significant input; and W.H.N., L.M., and A.L. wrote the paper. The authors declare no conflict of interest. This article is a PNAS Direct Submission.
To whom correspondence should be addressed. E-mail: [email protected].
This article contains supporting information online at www.pnas.org/lookup/suppl/ doi:10.1073/pnas.1008587107/-/DCSupplemental.
PNAS Early Edition ∣ 1 of 6
NEUROSCIENCE
Spike trains commonly exhibit interspike interval (ISI) correlations caused by spike-activated adaptation currents. Here we investigate how the dynamics of adaptation currents can represent spike pattern information generated from stimulus inputs. By analyzing dynamical models of stimulus-driven single neurons, we show that the activation states of the correlation-inducing adaptation current are themselves statistically independent from spike to spike. This paradoxical finding suggests a biophysically plausible means of information representation. We show that adaptation independence is elicited by input levels that produce regular, non-Poisson spiking. This adaptation-independent regime is advantageous for sensory processing because it does not require sensory inferences on the basis of multivariate conditional probabilities, reducing the computational cost of decoding. Furthermore, if the kinetics of postsynaptic activation are similar to the adaptation, the activation state information can be communicated postsynaptically with no information loss, leading to an experimental prediction that simple synaptic kinetics can decorrelate the correlated ISI sequence. The adaptation-independence regime may underlie efficient weak signal detection by sensory afferents that are known to exhibit intrinsic correlated spiking, thus efficiently encoding stimulus information at the limit of physical resolution.
APPLIED MATHEMATICS
Edited by Terrence J. Sejnowski, Salk Institute for Biological Studies, La Jolla, CA, and approved October 1, 2010 (received for review June 16, 2010)
activation phase H ≈ h_i e^{−Δt_i/τ} (Fig. 2A). This linear relationship has a y intercept Λ and slope −Λ. Because Λ < 1, the map f, [1], can always be inverted to solve for Δt_i:

Δt_i = f^{−1}(h_{i+1}, h_i) = τ ln[h_i(1 − Λ)/(h_{i+1} − Λ)]. [2]
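The invertibility of the map can be checked directly; a minimal Python sketch (parameter values are illustrative, taken from the Fig. 2 fit):

```python
import math

# Illustrative parameters: Lambda from the Fig. 2 fit, tau = 400 ms.
LAM, TAU = 0.246, 0.4

def f(dt, h):
    """Map [1]: next peak AC state after an ISI of length dt (seconds)."""
    return LAM + (1.0 - LAM) * h * math.exp(-dt / TAU)

def f_inv(h_next, h):
    """Map [2]: recover the ISI from two successive peak AC states."""
    return TAU * math.log(h * (1.0 - LAM) / (h_next - LAM))

# Because Lambda < 1, f is invertible in dt: f_inv(f(dt, h), h) == dt.
dt, h = 0.35, 0.30
print(abs(f_inv(f(dt, h), h) - dt) < 1e-12)  # True
```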
Fig. 1. Morris–Lecar model exhibits fast-fluctuating input-driven spike trains (A). A spike at time t_i activates the AC, which can vary in peak amplitude h_i, depending on the previous ISI: Δt_{i−1} = t_i − t_{i−1} (B). Serial ISI correlations ρ_k between Δt_i and Δt_{i+k} exist only between subsequent ISI pairs (k = 1), whereas longer-range pairs (k > 1) are uncorrelated (C). Paradoxically, peak AC amplitudes (h_i) are uncorrelated (D).
both cases result in temporal ISI correlations. Monte Carlo simulations reveal that ISI correlations ρ_k = Corr(Δt_i, Δt_{i+k}) (where Δt_i = t_{i+1} − t_i) exist only between subsequent ISIs (Δt_i and Δt_{i+1}), whereas all later ISIs (k > 1) have effectively zero correlation in the parameter regime we have chosen (Fig. 1C). Note that without random fluctuations in x(t) there would be no ISI correlations, only rhythmic firing or silence, depending on the bias current I_m. Hence, the ISI correlations are induced by the fast-fluctuating input x(t), and so the correlations can be used to infer properties of x(t). In addition to ISI correlations, we also investigated correlations in the AC conductance H(t). We measured the peak activation of H(t), defined as h_i, that occurs immediately after each spike (see Fig. 1B). The h_i value defines the activation level of the current, and so we refer interchangeably to h_i as the AC activation state associated with each spike. Paradoxically, the h_i activation states are uncorrelated (Fig. 1D). How this zero-correlation property emerges through the dynamical coupling between the adaptation H(t) and the spike generation mechanism is investigated in the following.
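These correlation measurements can be reproduced without the full ML equations by simulating the paper's reduced description (the AC map [1] together with the rate model [6], introduced below); a sketch under that assumption, sampling each ISI by thinning against the hazard bound α:

```python
import numpy as np

rng = np.random.default_rng(0)
# Reduced-model parameters (Fig. 2 fit); the independence condition
# beta*Lambda - ln(alpha*tau) >> 1 is satisfied here (~5.2).
ALPHA, BETA, TAU, LAM = 16.7, 29.0, 0.4, 0.246

def next_isi(h):
    """Sample an ISI from p(dt|h): hazard lambda(H) = ALPHA*exp(-BETA*H)
    with H(t) = h*exp(-t/TAU), via thinning against the bound ALPHA."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / ALPHA)   # candidate event from rate-ALPHA process
        if rng.random() < np.exp(-BETA * h * np.exp(-t / TAU)):
            return t                        # accepted with probability lambda/ALPHA

h, isis, hs = LAM, [], []
for _ in range(20000):
    dt = next_isi(h)
    h = LAM + (1.0 - LAM) * h * np.exp(-dt / TAU)   # map [1]
    isis.append(dt)
    hs.append(h)

isis, hs = np.array(isis), np.array(hs)
rho = lambda x, k: np.corrcoef(x[:-k], x[k:])[0, 1]
print(rho(isis, 1) < 0)          # adjacent ISIs negatively correlated (as in Fig. 1C)
print(abs(rho(isis, 2)) < 0.05)  # longer-range ISI pairs uncorrelated
print(abs(rho(hs, 1)) < 0.05)    # AC activation states uncorrelated (as in Fig. 1D)
```

The thinning step is the standard Lewis–Shedler construction for an inhomogeneous Poisson process, which here generates exactly the conditional ISI density p(Δt|h) used in the analysis.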
Hence, the h_i sequence contains the same information as the ISI sequence. Note that the h_i values are contained in the interval (Λ, 1), from which we infer from [2] that Δt_i ≥ 0 implies h_{i+1} ≤ Λ + h_i(1 − Λ). The sequence from h_i to h_{i+1} is determined by the intervening ISI, which is stochastically determined by the fluctuating input x(t). Here we derive the stochastic dynamics of the h_i sequence by approximating the stochastic spike process that generates Δt_i. Following Muller et al. (12), the likely times of the next ISI, conditioned on h_i, are decided by the probability density p(Δt_i|h_i) (defined below). Longer ISIs should be more likely for larger h values and vice versa (Fig. 2C). Assuming for now that p(Δt_i|h_i) is known, we define the Markov transition function Q from h′ ≡ h_i to h ≡ h_{i+1} by substituting Δt = f^{−1}(h, h′):

Q(h|h′) = −p(f^{−1}|h′) (∂f^{−1}/∂h)(h) Θ(f^{−1}), [3]
h_{i+1} = Λ + (1 − Λ) h_i exp(−Δt_i/τ) ≡ f(Δt_i, h_i). [1]
The parameter Λ ∈ (0, 1) is the minimum activation level for H(t), and τ is the decay time scale. Setting Λ = 1 is an extreme case in which the current is always maximally activated for every spike, whereas the other extreme (Λ = 0) is a nonadapting process [H(t) = 0]. The linear map [1] approximates the activation states h_i of H(t), as evidenced in the plot of the activation change Δh ≡ h_{i+1} − h_i e^{−Δt_i/τ} as a function of the AC level just prior to the
Mathematical Analysis. What conditions allow for ISI correlations
to emerge together with uncorrelated AC activation states h_i? To answer this question, we derived a more analytically tractable model of AC-limited spike emission, starting with H(t). To simplify, note that the smooth H dynamics in Fig. 1B exhibit a long exponential decay phase between spikes followed by a fast activation phase during a spike. By approximating these phases of H(t) with piecewise exponentials (see SI Text, Approximation of H(t) and Derivation of Q(h|h′)), we derived a map from h_i to h_{i+1}:
where the negative sign ensures positivity of the operator, ∂f^{−1}/∂h is the Jacobian of f^{−1} [we abbreviate f^{−1} = f^{−1}(h, h′)], and Θ(x) is the Heaviside step function (see SI Text, Approximation of H(t) and Derivation of Q(h|h′)). The Heaviside factor in [3] disallows (h, h′) combinations that lead to impossible negative ISIs. We define the stochastic dynamics of the h sequence in terms of the evolution of an h distribution q_i(h′) at the ith spike, to the next q_{i+1}(h):

q_{i+1}(h) = ∫_Λ^1 Q(h|h′) q_i(h′) dh′ ≡ Q[q_i](h). [4]
Fig. 2. Analytical model [6] captures the spike statistics of the biophysically realistic model with α = 16.7, β = 29, and τ = 400 ms. (A) The H(t) pre- minus postactivation change Δh = h_{i+1} − h_i e^{−Δt_i/τ} exhibits a linear relationship with the preactivation level H(t) ≈ h_i e^{−Δt_i/τ}, with slope −Λ ≈ −0.246. The Morris–Lecar system's empirical limiting distributions (dots) of AC states h (B), and ISI distributions, conditioned on h, with h-dependent variable silent periods (C), and unconditioned (D), are fit by Eqs. 1–6 (solid lines).
Fig. 2 B–D shows analytical approximations to the ML Monte Carlo results for q_∞(h), p(Δt|h), and p_ISI(Δt), respectively, which are derived as follows. We generated p(Δt|h) with a spike rate function λ(H) ≥ 0, in which the probability of a spike in a small time interval δt is λ(H)δt (12, 30, 40). To correctly model the effect of the AC on spiking, λ(H) must elicit lower rates for large H and vice versa. The conditional ISI density is calculated as p(Δt|h) = λ(h e^{−Δt/τ}) exp[−∫_0^{Δt} λ(h e^{−u/τ}) du]. Without loss of generality, we used the specific model (12):

λ(H) = α exp(−βH). [6]
The parameter α > 0 sets the overall spike rate of the cell, and β ≥ 0 sets the strength of the AC. If β is large, then moderate activation levels of H induce a silent period until H(t) decays sufficiently (see Fig. 2C). Conversely, a small β reduces the effect that H has on spike probability. Choosing β = 0 or Λ = 0 yields homogeneous Poisson firing statistics from the model in [6] with rate α. Hence, there is a homotopy between adapting and Poisson trains through β or Λ. The exponential dependence of firing rate on H in [6] arises commonly from spiking neural models based on diffusion processes, where the effect of the noisy input x(t) is represented as a probability of spiking per unit time (12, 14, 30, 40–42). Moreover, Eq. 6 approximates biophysically realistic models such as ML (Fig. 2) effectively over a realistic range of baseline input and fluctuation levels, provided that the autocorrelation time scale of the input current is fast relative to the mean ISI (12, 40), which is consistent with the fast-fluctuating noisy input x(t) used in Fig. 1.

Adaptation Independence. By analyzing the transition function Q in [3], we can show that the sequential h values can be not only uncorrelated (Fig. 1D) but also statistically independent. For a general rate function λ(H), Eq. 3 becomes

Q(h|h′) = λ((h − Λ)/(1 − Λ)) · [τ/(h − Λ)] · exp[−τ ∫_{h−Λ}^{h′(1−Λ)} λ(u/(1 − Λ)) du/u] · Θ(f^{−1}) [7]

(see SI Text, Approximation of H(t) and Derivation of Q(h|h′)). Independence is established by proving that the Markov transition [7] does not depend on h′: Q(h|h′) = Q(h), which also implies q_∞(h) = Q(h). Independence is achieved if the rate function satisfies

τλ(h) ∼ 0 for h ≥ Λ. [8]
The condition [8] requires the peak activation h ≥ Λ to inhibit spiking for a nonzero time period until the AC decays. For the specific λ in [6], independence occurs if β (the strength of the AC) is sufficiently large: βΛ − ln(ατ) ≫ 1. The physiological meaning of condition [8] is that each AC activation must induce a nonzero postspike silent period Δw ≡ τ ln(h′/Λ), as illustrated in Fig. 2C, in addition to the usual absolute and relative refractory periods. The silent period is nonstochastic for a given h′ and ends when H(t) decays below the threshold value Λ so that λ[H(t)] > 0; the larger the h′ value, the longer the silent
Spike Train Statistics. In the independence regime, realistic correlated spike trains can be generated from independent realizations of q_∞-distributed activation states {h_i} by f^{−1}(h_{i+1}, h_i) = Δt_i. Independence also explains why only adjacent ISIs are correlated in Fig. 1C: Δt_i and Δt_{i+1} both depend on h_{i+1} and so are correlated, whereas Δt_i and Δt_{i+k} for k ≥ 2 are independent because they are determined by distinct and independent activation states. Conversely, in the nonindependent regime, nonadjacent ISIs exhibit correlations (see SI Text, Fitting the ML Model, and Fig. 3 below). The variability in the h_i sequence is also the source of the negative ISI correlation structure in the adaptation-independence regime (Fig. 1C). By inserting f^{−1}(h_{i+1}, h_i) = Δt_i into the ISI covariance formula Cov(Δt_i, Δt_{i+1}) = ⟨Δt_i Δt_{i+1}⟩ − ⟨Δt_i⟩² and using Chebyshev's algebraic inequality (see SI Text, ISI Correlations), we deduce from independence that

0 ≥ Cov(Δt_i, Δt_{i+1}) = −τ²⟨ln(h_{i+1} − Λ) ln[h_{i+1}(1 − Λ)]⟩ + τ²⟨ln(h_{i+2} − Λ)⟩⟨ln[h_i(1 − Λ)]⟩. [9]

Thus, sequential ISIs have nonpositive correlations in the adaptation-independent regime, consistent with Fig. 1C. Furthermore, ISI correlations are zero only if h does not vary. That is, Cov(Δt_i, Δt_{i+1}) → 0 only if q_∞(h) → δ(h − ⟨h⟩), where δ(·) is the Dirac functional, which occurs only in the limit of zero stochastic input [i.e., x(t) → 0]. Negative serial ISI correlations regularize the spike train over long time scales, as can be understood by analyzing the variability of the sum of n sequential ISIs: T_n = ∑_{i=1}^n Δt_i (10). Recall that Δt_i/τ = ln[h_i(1 − Λ)] − ln(h_{i+1} − Λ), so T_n is a telescoping sum of n + 1 summands:

T_n = τ ln[h_1(1 − Λ)] − τ ln(h_{n+1} − Λ) + ∑_{i=2}^n Z(h_i), where Z(h_i) ≡ τ ln{[h_i(1 − Λ)]/(h_i − Λ)}. [10]
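The telescoping identity behind [10] is purely algebraic and can be verified directly; a sketch with an arbitrary (hypothetical) h sequence, indexing from zero:

```python
import math
import random

random.seed(1)
LAM, TAU = 0.2, 0.4   # illustrative parameters with Lambda < 1

def Z(h):
    """Z(h) = tau * ln[h(1 - Lambda)/(h - Lambda)] from [10]."""
    return TAU * math.log(h * (1 - LAM) / (h - LAM))

# Any h sequence inside (Lambda, 2*Lambda - Lambda^2) works: the identity
# does not depend on the stochastic model that generated the states.
n = 10
hs = [random.uniform(LAM + 0.01, 2 * LAM - LAM ** 2) for _ in range(n + 1)]

# ISIs from [2]: dt_i/tau = ln[h_i(1 - Lambda)] - ln(h_{i+1} - Lambda)
dts = [TAU * (math.log(hs[i] * (1 - LAM)) - math.log(hs[i + 1] - LAM))
       for i in range(n)]

T_direct = sum(dts)                                    # T_n as a plain sum of ISIs
T_tele = (TAU * math.log(hs[0] * (1 - LAM))            # telescoped form [10]
          - TAU * math.log(hs[n] - LAM)
          + sum(Z(h) for h in hs[1:n]))
print(abs(T_direct - T_tele) < 1e-10)  # True
```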
In the adaptation-independent regime, the Z(h_i) terms are independent time intervals making up T_n, with mean ⟨Z⟩ = ⟨Δt⟩. If there is independence, the variance of T_n is

Var(T_n) = n Var(Δt) + 2(n − 1) Cov(Δt_1, Δt_2) [11]

= Var(Δt) + (n − 1) Var(Z). [12]
period. The selection of h is determined by the remaining stochastic portion of the ISI, Δt − Δw, which is a renewal process and thus is independent of h′. Note also that the independence condition [8] is generic for all AC time scales τ because λτ in the integral of [7] is nondimensional. Of course, an AC-mediated silent period is a common phenomenon, so the result is broadly applicable, including to the model in Figs. 1 and 2 and SI Text, Fitting the ML Model. To prove that [8] implies independence, assume [8] holds. Then the upper integration limit of [7] can be replaced with the lower bound h′ = Λ ≥ Λ(1 − Λ), or any value above it, with no consequence because the integrand is effectively zero above the bound. Furthermore, [8] gives an upper bound for h, because Q(h|h′) ∼ 0 for h ≥ Λ(1 − Λ) + Λ ≥ Λ, so h ∈ [Λ, min(2Λ − Λ², 1)]. Note that h ∈ [Λ, min(2Λ − Λ², 1)] implies f^{−1} > 0, and so the Heaviside factor in [7] can be omitted. Finally, the Jacobian (∂f^{−1}/∂h)(h) in [3] depends only on h and not h′. Taken together, the above deductions imply Q(h|h′) = Q(h) = q_∞(h), which is independence (see SI Text, Fitting the ML Model). In the following, we analyze how changes in mean firing rate affect adaptation independence and spike statistics, including correlations.
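The collapse of Q(h|h′) onto a single function Q(h) can also be probed numerically. The closed form coded below follows from substituting the change of variables u = (1 − Λ)h′e^{−s/τ} into [3] with λ(H) = α e^{−βH}; parameters are illustrative and satisfy βΛ − ln(ατ) ≫ 1:

```python
import numpy as np

ALPHA, BETA, TAU, LAM = 16.7, 29.0, 0.4, 0.246  # beta*LAM - ln(alpha*tau) ~ 5.2

def lam_rate(H):
    """Spike hazard [6]: lambda(H) = alpha * exp(-beta * H)."""
    return ALPHA * np.exp(-BETA * H)

def trapz(y, x):
    """Trapezoidal quadrature (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def Q(h_grid, h_prev):
    """Transition density [7] on a grid of h values, for a given h'."""
    upper = h_prev * (1.0 - LAM)
    out = np.zeros_like(h_grid)
    for i, h in enumerate(h_grid):
        lo = h - LAM
        if lo <= 0.0 or lo >= upper:   # Theta(f^{-1}): the ISI must be positive
            continue
        u = np.linspace(lo, upper, 2000)
        expo = TAU * trapz(lam_rate(u / (1.0 - LAM)) / u, u)
        out[i] = lam_rate(lo / (1.0 - LAM)) * TAU / lo * np.exp(-expo)
    return out

h_grid = np.linspace(LAM + 1e-3, LAM + (1.0 - LAM) * 0.45, 1500)
q1, q2 = Q(h_grid, 0.26), Q(h_grid, 0.43)
print(abs(trapz(q1, h_grid) - 1.0) < 0.05)          # each Q(.|h') is a density
print(np.max(np.abs(q1 - q2)) < 0.05 * np.max(q1))  # Q barely depends on h'
```

With the silent-period condition satisfied, the two conditional densities coincide to within a fraction of a percent of their peak, so Q(h|h′) ≈ Q(h) = q_∞(h).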
Note that ∫_0^∞ p(t|h) dt = 1 and ∫_Λ^1 q_i dh = 1 imply ∫_Λ^1 q_{i+1} dh = 1. Thus, [4] maps densities to densities. Under mild conditions of irreducibility and ergodicity, Frobenius–Perron theory (39) predicts that there exists a unique limiting distribution q_∞(h) such that lim_{i→∞} Q^i[q_0](h) = q_∞(h), in which Q^i is the ith operator composition, for any starting distribution q_0. Given that q_∞ exists, the ISI density is calculated as

p_ISI(Δt) = ∫_Λ^1 p(Δt|h) q_∞(h) dh. [5]
Note that Var(Z) is the dominant term in [12] for large n and thus is a good measure of long-time-scale spike train variability. Note, however, that with nonindependence [11] and [12] would contain higher-order covariance terms; in general, Var(Z) can be computed from moments of Z given q_∞ (as in Fig. 3). However, in the adaptation-independence regime we deduce from [11] that

Var(Z) = Var(Δt) + 2 Cov(Δt_1, Δt_2). [13]
The first term of [13] accounts for intrinsic variability of a single ISI, and the negative-valued second term (see [9]) lowers the resulting variance. Thus, spike pattern regularity is a direct result of ISI covariance. In the following, we study how the variability in the spike train and adaptation independence are modulated as a function of mean firing rate. Suppose we introduce a nonfluctuating baseline input conductance s to the model: λ(H − s). Increasing (decreasing) the input s has the same effect as increasing (decreasing) the injected current I_m in the ML model (see SI Text, Fitting the ML Model). The baseline s sets the overall firing rate of the cell by effectively scaling α. The input s modulates the ISI correlations, ISI variability, and adaptation independence, as we now show. The baseline s defines a set of operators Q_s, [4], and operator spectra. For each operator Q_s there is a single unit eigenvalue η_1 = 1 and corresponding eigenfunction q^s_∞ (Fig. 3A). The secondary spectrum (η_j for j > 1) is effectively zero for low input
Fig. 3. Input discrimination from AC states is efficient relative to similar uncorrelated spike trains. (A) q^s_∞ over a range of inputs s from s = −0.06 to 1 for the model (Eq. 6, τ = 400 ms, Λ = 0.2, α = 14.63 s^{−1}, β = 30; see SI Text, Fitting the ML Model, for an analysis of parameter variation). (B) CV versus input s shows Poisson statistics (CV = 1) for low input levels (s < 0). Increased input leads to more regular spiking that overlaps the region of adaptation independence (s approximately less than 0.4), where the secondary eigenvalue of the h operator is essentially zero (η_2 ∼ 0). Further increases in input cause a loss of independence (η_2 > 0), corresponding to increased spike train irregularity measured by CV_Z. (C) Mutual information I_AC exhibits a plateau region of elevated information for sufficiently activated AC states but is uniformly lower than the mutual information of the renewal ISI process [p_ISI(Δt), Eq. 5; lower points omitted but are very near Poisson] and the bounding equivalent-rate Poisson mutual information. (D) The information gain D_KL divided by the proportional change in firing rate Δf_A/f_A for Δs = 0.02 shows the AC system has greater discriminability per firing-rate change compared to the equivalent renewal ISI process and the equivalent-rate Poisson process.
values (s ≲ 0.4). Increasing s leads to an increase in the secondary spectrum from zero at s ∼ 0.4, corresponding to a loss of adaptation independence (η_2, Fig. 3B). The secondary spectrum η_2 measures the degree of independence because it is the fractional contraction due to Q [4] of the linear subspace orthogonal to q_∞ (see SI Text, Fitting the ML Model). The loss of adaptation independence for increasing baseline levels s occurs concomitantly with changes in ISI regularity, which is measured by the coefficient of variation of the ISI [CV = √Var(Δt)/⟨Δt⟩]. There is a nonmonotonicity in the s-input-CV graph (Fig. 3B): the CV first declines from unity, which is associated with a transition from subthreshold, very-low-firing-rate Poisson statistics to more regular firing at a minimum CV value (s ∼ 0.1). This drop is associated with baseline input levels near the deterministic spike threshold (see SI Text, Fitting the ML Model). The CV then increases for higher baseline levels, which is a unique feature of adapting models (14, 40) and contrasts with the monotonic s-input-CV graph of nonadapting models (42). Very high baselines (s ≳ 0.8) result in exponential firing-rate gains that are considered nonphysiological and will not be considered further (see SI Text, Fitting the ML Model). Long-time-scale spike train regularity, measured by CV_Z [CV_Z = √Var(Z)/⟨Z⟩], exhibits a local minimum similar to that of the CV, but it occurs at a higher baseline level that is near the upper boundary of adaptation independence (s ∼ 0.4). This minimum point occurs for input levels above the deterministic threshold for spiking (see SI Text, Fitting the ML Model). The minimum point of CV_Z depends on the variability of ISIs but is dominated by the minimum point of the first serial correlation coefficient ρ_1 (see Eq. 13).
Previous studies have detailed how ISI correlations can increase spike train regularity and enhance coarse-grained firing-rate-coding precision (1, 2). Here we observe a broad range of baseline inputs exhibiting high spike train regularity associated with a sufficiently activated AC. However, our analysis also reveals that there is a subset of this range that exhibits adaptation independence, which we will show provides additional advantages for fine-grained sensory coding. In the next two sections, we investigate information-theoretic measures of the AC (Fig. 3C), and we show how independent adaptation states can be utilized to detect small changes in the baseline Δs (Fig. 3D).

Information-Theoretic Analysis of Activation States. The ISI sequence encodes information about the fluctuating stimulus x(t) (17) that is accessible from the AC activation states h_i. We find that the AC states better encode the effect of stimulus fluctuations x(t) when the AC is activated sufficiently to produce discernible variations in the AC from spike to spike. To measure this property, we derived the mutual information (MI) per spike, I_AC, between the stimulus x(t) and the AC states and discovered that it was the entropy of the h process
I_AC = H(h_2|h_1) − ln(δh), [14]
where 0 < δh ≪ 1 sets the resolution of the MI (43). A rigorous derivation of [14] and the representation of x(t) in the rate model [6] is given in SI Text, Mutual Information. Note that adaptation independence implies H(h_2) = H(h_2|h_1), which is significant because the MI of any N-spike sequence (i.e., a "word" of length N; see ref. 20) scales linearly as N·H(h). Fig. 3C shows a broad "plateau" region of elevated MI (red) for baseline levels −0.06 ≲ s ≲ 1, associated with higher-variance q^s_∞(h) distributions (Fig. 3A). The plateau decreases to zero MI for decreasing baseline levels because q^s_∞(h) → δ(h − Λ): this is a regime of very long ISIs (minimal h values) that can be distinguished only by small differences in AC states. Thus, AC-based encoding carries more information if the AC is sufficiently activated by the input s. Fig. 3C also shows the MI of the renewal ISI process p_ISI(Δt) (see Eq. 5). The renewal ISI information is close to the MI of a
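The identity H(h_2|h_1) = H(h_2) under independence can be illustrated with discretized synthetic data (i.i.d. beta-distributed samples standing in for the activation states; the distribution choice is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

def entropy(p):
    """Shannon entropy (nats) of a normalized histogram."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# i.i.d. stand-ins for activation states, discretized at resolution delta_h = 0.05
h = rng.beta(2.0, 5.0, size=200_000)
bins = np.linspace(0.0, 1.0, 21)
joint, _, _ = np.histogram2d(h[:-1], h[1:], bins=(bins, bins))
joint = joint / joint.sum()

H_joint = entropy(joint.ravel())
H_cond = H_joint - entropy(joint.sum(axis=1))   # H(h2|h1) = H(h1,h2) - H(h1)
H_marg = entropy(joint.sum(axis=0))             # H(h2)
print(abs(H_cond - H_marg) < 0.02)  # independence: H(h2|h1) ~ H(h2)
```

Under independence the joint histogram factors, so the conditional and marginal entropies agree up to small finite-sample bias; this is why the MI of an N-spike word scales linearly in N.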
Ω(s, Δs; {h_i}_{i=1}^N) ≡ ∑_{i=1}^{N−1} ln[ Q^{s+Δs}(h_{i+1}|h_i) / Q^s(h_{i+1}|h_i) ]. [15]
If the h data originate from a perturbed-input distribution (Δs ≠ 0), the LLR will grow positive on average with increasing N; a threshold can then be set and utilized for hypothesis testing (43). The average rate of growth of the LLR, termed the information gain, is equal to the Kullback–Leibler divergence between distinct h distributions: D_KL(Q^{s+Δs}∥Q^s) = ∫ q_∞^{s+Δs}(h′) ∫ Q^{s+Δs}(h|h′) ln[Q^{s+Δs}(h|h′)/Q^s(h|h′)] dh dh′, so that ⟨Ω⟩ = (N − 1) D_KL. For the exponential rate function λ(H − s) in [6], the information gain is

D_KL(s, Δs) = βΔs − (1 − e^{−βΔs}) ≈ (βΔs)²/2, [16]
as detailed in SI Text, Information Gain. Thus, the information gain using the h data depends only on Δs and β but is independent of s, α, Λ, and τ. The uniformity over s is a unique property of the rate function in [6]. Because [16] is independent of all parameters but β and Δs, the result holds for a continuum of spike train behaviors, from those with strong ACs (large Λ) that have ISI correlations and adaptation independence to the limiting case of a pure Poisson process with a nonexistent AC (Λ → 0): λ(−s) = α e^{βs}. This Poisson process with rate λ(−s), which has no ISI correlations and maximal entropy, provides a useful comparison to AC systems and highlights an important aspect of how AC states represent information: Knowledge of the h activation states yields the equivalent information gain to that obtained from Poisson ISIs undergoing the same baseline change Δs; however, to achieve this gain, the Poisson model λ(−s) undergoes a much larger (exponential) change in firing rate, λ[−(s + Δs)], because there is no additional activation of the AC to counteract the baseline change Δs. Alternatively, the above Poisson-equivalence property ([16]) can be expressed by computing the proportional firing-rate change Δf_A/f_A given Δs for the AC model and examining the difference in the information gain of a Poisson spike train with the equivalent firing-rate change. Fig. 3D plots D_KL [16] divided by Δf_A/f_A (red) versus s, for Δs = 0.02. The ratio D_KL f_A/Δf_A reaches a maximum for baseline levels concomitant with the point of maximal regularity of the T_n interval (CV_Z) and the transition point to the non-adaptation-independence regime (Fig. 3B). This concomitance indicates that the maximal inhibitory effect of the AC on firing-rate change coincides with the baseline level where the AC can no longer produce a significant silent period.
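The small-Δs form of [16] follows from expanding the exponential, since βΔs − (1 − e^{−βΔs}) = (βΔs)²/2 − (βΔs)³/6 + …; a quick numeric check:

```python
import math

BETA = 30.0   # AC strength, as in the Fig. 3 parameter set

def d_kl(ds):
    """Information gain [16]; independent of s, alpha, Lambda, and tau."""
    return BETA * ds - (1.0 - math.exp(-BETA * ds))

for ds in (0.005, 0.01, 0.02):
    x = BETA * ds
    # exact minus (beta*ds)^2/2 is bounded by the next alternating-series term
    print(abs(d_kl(ds) - x ** 2 / 2.0) <= x ** 3 / 6.0)  # True each time
```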
Conversely, for a Poisson model with the equivalent rate change, D_KL f_A/Δf_A is inversely related, to first order, to that of the AC system; as one goes up, the other goes down (Fig. 3D; see SI Text,
Synaptic Decoding of AC States. How then are the h data, a sequence of variables hidden from direct experimental observation, accessed by postsynaptic decoders? Spike-triggered exponential processes such as H(t) are ubiquitous in the nervous system. Namely, standard models of postsynaptic receptor binding dynamics are characterized as exponentially activating and decaying processes (31). Similar to h_i, we let r_i be the peak activation of postsynaptic receptors, the first stage of possibly many stages of postsynaptic processing; we also define φ to be the minimum activation level, a parameter similar to Λ, and let τ_r be the decay time scale, which defines a map similar to that in [1]:

r_i = φ + (1 − φ) r_{i−1} e^{−Δt_{i−1}/τ_r} = φ ∑_{n=0}^∞ (1 − φ)^n e^{−T_n/τ_r} [17]

= h_i iff τ = τ_r and φ = Λ. [18]
The r sequence is a function of the n-interval sequence {T_n}_{n=0}^∞ [17], so the statistical properties of T_n are transmitted postsynaptically. Moreover, if both φ and τ_r are near Λ and τ, respectively, then r_i will approximate h_i, and with identical parameters there is equality: r_i = h_i [18]. This finding suggests an original experimental prediction: If synaptic kinetics are matched to presynaptic adaptation kinetics, then postsynaptic responses can exhibit the independence property and effectively represent the h-code information postsynaptically.

Discussion

We have discovered a stochastic regime of input-driven spiking models associated with correlated spiking, in which the activation states of the AC are probabilistically independent from spike to spike. Adaptation independence is met by the mild physiological condition that the minimum activation state is strong enough to stop spiking for a brief period. Independence occurs in a regime associated with perithreshold regular firing, and so we speculate
Weak Signal Detection. Changes in the AC states can be used to discriminate small changes in the baseline, Δs. We consider Δs as an adiabatic change in the baseline, effectively constant over the time period of many spikes. Such signal detection is commonly performed by correlated fast-spiking sensory afferents that detect small changes in mean input level in the presence of noisy fluctuations (1, 3, 27). We wish to determine if a given h sequence {h_i}_{i=1}^N more likely originated from baseline level s or baseline level s + Δs. The most statistically efficient discriminator for this task is the log likelihood ratio (LLR) (43)
(see SI Text, Poisson Information Gain, for the calculation). Therefore, signal detection with the fine-grained AC coding, per change in firing rate, is significantly enhanced for AC systems relative to ISI coding of Poisson and similar renewal trains. We also plotted D_KL/Δf_A for uncorrelated (renewal) ISI sequences drawn from the ISI distribution p_ISI(Δt) (Eq. 5; Fig. 3D, green). For low s values (s ≲ −0.04), the AC model has no ISI correlations (see Fig. 3B) and thus approximates a Poisson process, so all three models in Fig. 3D are approximately equivalent. As the input increases, the D_KL/Δf_A of the adapting and renewal ISI models increase and stay approximately equal because there are not yet significant ISI correlations in the AC model. At s ∼ 0.04 the renewal ISI model peaks (Fig. 3D) at the minimum-CV point, whereas the adapting model diverges further as significant negative ISI correlations allow for greater information gain and less firing-rate gain relative to the renewal models. As stated in the previous section, AC states resolve the underlying ISIs better when the baseline level s is high enough to produce broad h distributions (Fig. 3A), thus eliciting the plateau MI region (Fig. 3C).

In addition to resolvability, decoding is computationally less costly for independent AC states. In the nonindependent regime, the LLR [15] is computed by using the conditional distribution Q(h|h′), which requires a decoder to represent and compute multivariate data (h′ and h) in independent memory buffers. In contrast, in the independence regime, only univariate data must be represented for decoding because the AC dynamics "self-decorrelate" the ISI information. Therefore, sensory information is more economically decoded for baseline excitability levels below the transition to nonindependence yet high enough for sufficient AC activation (−0.06 ≲ s ≲ 0.4).
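The storage contrast between the two decoding regimes can be made concrete with a back-of-the-envelope sketch (B, the number of discretization bins per variable, is an illustrative choice, not a value from the text):

```python
# Decoder-side storage for the two regimes discussed above,
# with B discretization bins per variable (illustrative).
B = 64

# Nonindependent regime: the LLR needs the conditional density Q(h | h'),
# a bivariate B-by-B table, plus a memory buffer holding the previous h'.
conditional_entries = B * B

# Independence regime: the AC dynamics "self-decorrelate" the ISIs, so a
# univariate table over h suffices and no buffer for h' is required.
independent_entries = B

print(conditional_entries, independent_entries)  # 4096 64
```

The cost of the conditional scheme grows quadratically in the discretization resolution, while the independent scheme grows only linearly, which is the economy referred to above.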
A maximal-entropy Poisson spike train bounds all processes with the equivalent mean firing rate: I_Poisson = ln(⟨Δt⟩) + 1 − ln(δt) (see SI Text, Mutual Information) for λ = ⟨Δt⟩^{−1}. Thus, there is a significant loss of MI when using AC states for input coding relative to similar renewal processes, a loss that is worse for low baselines where the AC is minimally activated. However, this information loss does not hinder detection of weak changes in the baseline input Δs, as we show in the next section.
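The bound I_Poisson = ln(⟨Δt⟩) + 1 − ln(δt) can be verified directly: it is the entropy of the exponential ISI distribution of a Poisson train with rate λ = ⟨Δt⟩^{−1}, discretized at resolution δt. The check below uses illustrative values for ⟨Δt⟩ and δt:

```python
import math

mean_isi = 0.02  # <Δt> = 20 ms (illustrative)
dt = 1e-5        # time-bin resolution δt (illustrative)
lam = 1.0 / mean_isi

# Closed-form bound quoted above: I_Poisson = ln<Δt> + 1 - ln(δt)
i_poisson = math.log(mean_isi) + 1.0 - math.log(dt)

# Direct check: entropy of the discretized exponential ISI distribution
# p_k = λ exp(-λ k δt) δt, summed 50 mean ISIs into the tail.
entropy = 0.0
for k in range(int(50.0 * mean_isi / dt)):
    p = lam * math.exp(-lam * k * dt) * dt
    entropy -= p * math.log(p)

print(abs(entropy - i_poisson) < 0.01)  # True: agreement up to discretization error
```

The small residual comes from the left-Riemann discretization of the density and vanishes as δt → 0.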
that it is a common property of neurons (see SI Text, Fitting the ML Model). Sensory afferent cells are challenged with representing information at the limits of physical resolution, where noise fluctuations must be quickly disambiguated from baseline signals over a wide range of intensities. The adaptation-independent regime is important in this context because it enhances signal detection with minimal firing-rate change, using a fine-grained code that utilizes both the ISIs and the ISI correlations (11, 29, 36). This fine-grained coding contrasts with the previously reported regularization effect of ISI correlations on coarse-grained rate coding (2, 3) and with the Poisson coding commonly reported in central neurons (44). We have shown that AC activation states represent the same information gain that can be achieved with Poisson spike statistics, but at a reduced firing-rate change.

It has been proposed that decoding of stimulus information is achieved through inferences on the basis of conditional probabilities from sequential data (e.g., Δt_{i+1} conditioned on Δt_i, and so on) (1, 11, 20, 29, 30). This scheme requires a postsynaptic decoder to represent multivariate data in independent memory buffers and to compute conditional probability distributions. It has been proposed, yet is unproven (11), that synaptic processing could perform such a computation or be used for Bayesian inference (45). However, we have shown that correlated spike trains can be represented independently by the adaptation dynamics themselves, providing a simple biophysical means of information representation that does not require costlier multivariate decoding. We have also shown that simple postsynaptic dynamics can represent the AC states, which suggests a testable experimental prediction: synaptic activation kinetics can decorrelate the correlated ISI sequence, and synaptic and dendritic nonlinearities (46) could decode the AC activation.

Methods
All numerical computations were performed on a Macintosh computer using MATLAB software. See SI Text, Notes to Numerical Computations for the specific methods used in Figs. 1–3.

ACKNOWLEDGMENTS. The authors acknowledge funding from Canadian Institutes of Health Research Grant 6027 and the Natural Sciences and Engineering Research Council.

1. Ratnam R, Nelson ME (2000) Nonrenewal statistics of electrosensory afferent spike trains: Implications for the detection of weak sensory signals. J Neurosci 20:6672–6683.
2. Chacron MJ, Longtin A, Maler L (2001) Negative interspike interval correlations increase the neuronal capacity for encoding time-dependent stimuli. J Neurosci 21:5328–5343.
3. Sadeghi SG, Chacron MJ, Taylor MC, Cullen KE (2007) Neural variability, detection thresholds, and information transmission in the vestibular system. J Neurosci 27:771–781.
4. Goldberg JM, Adrian HO, Smith FD (1964) Response of neurons of the superior olivary complex of the cat to acoustic stimuli of long duration. J Neurophysiol 27:706–749.
5. Yamamoto M, Nakahama H (1983) Stochastic properties of spontaneous unit discharges in somatosensory cortex and mesencephalic reticular formation during sleep-waking states. J Neurophysiol 49:1182–1198.
6. Neiman AB, Russell DF (2004) Two distinct types of noisy oscillators in electroreceptors of paddlefish. J Neurophysiol 92:492–509.
7. Farkhooi F, Strube-Bloss MF, Nawrot MP (2009) Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability. Phys Rev E 79:021905.
8. Chacron MJ, Longtin A, St-Hilaire M, Maler L (2000) Suprathreshold stochastic firing dynamics with memory in P-type electroreceptors. Phys Rev Lett 85:1576–1579.
9. Liu YH, Wang XJ (2001) Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. J Comput Neurosci 10:25–45.
10. Brandman R, Nelson ME (2002) A simple model of long-term spike train regularization. Neural Comput 14:1575–1597.
11. Lüdtke N, Nelson ME (2006) Short-term synaptic plasticity can enhance weak signal detectability in nonrenewal spike trains. Neural Comput 18:2879–2916.
12. Muller E, Buesing L, Schemmel J, Meier K (2007) Spike-frequency adapting neural ensembles: Beyond mean adaptation and renewal theories. Neural Comput 19:2958–3010.
13. Prescott SA, Sejnowski TJ (2008) Spike-rate coding and spike-time coding are affected oppositely by different adaptation mechanisms. J Neurosci 28:13649–13661.
14. Schwalger T, Lindner B (2008) Higher-order statistics of a bistable system driven by dichotomous colored noise. Phys Rev E 78:021121.
15. Bialek W, Rieke F, de Ruyter van Steveninck RR, Warland D (1991) Reading a neural code. Science 252:1854–1857.
16. Koch K, et al. (2004) Efficiency of information transmission by retinal ganglion cells. Curr Biol 14:1523–1530.
17. Lundstrom BN, Fairhall A (2006) Decoding stimulus variance from a distributional neural code of interspike intervals. J Neurosci 26:9030–9037.
18. Reinagel P, Reid RC (2000) Temporal coding of visual information in the thalamus. J Neurosci 20:5392–5400.
19. Panzeri S, Petersen RS, Schultz S, Lebedev M, Diamond ME (2001) The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron 29:769–777.
20. Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W (1998) Entropy and information in neural spike trains. Phys Rev Lett 80:197–200.
21. Borst A, Theunissen FE (1999) Information theory and neural coding. Nat Neurosci 2:947–957.
22. Thomson EE, Kristan WB (2005) Quantifying stimulus discriminability: A comparison of information theory and ideal observer analysis. Neural Comput 17:741–778.
23. Fairhall A, Lewen GD, Bialek W, de Ruyter van Steveninck R (2001) Efficiency and ambiguity in an adaptive neural code. Nature 412:787–792.
24. Fellous JM, Tiesinga PHE, Thomas PJ, Sejnowski TJ (2004) Discovering spike patterns in neuronal responses. J Neurosci 24:2989–3001.
25. Middleton JW, Chacron MJ, Lindner B, Longtin A (2003) Firing statistics of a neuron model driven by long-range correlated noise. Phys Rev E 68:021920.
26. Brenner N, Bialek W, de Ruyter van Steveninck R (2000) Adaptive rescaling maximizes information transmission. Neuron 26:695–702.
27. Theunissen F, Miller JP (1995) Temporal encoding in nervous systems: A rigorous definition. J Comput Neurosci 2:149–162.
28. Butts DA, et al. (2007) Temporal precision in the neural code and the timescales of natural vision. Nature 449:92–95.
29. Jacobs AL, et al. (2009) Ruling out and ruling in neural codes. Proc Natl Acad Sci USA 106:5936–5941.
30. Paninski L, Pillow J, Lewi J (2007) Statistical models for neural encoding, decoding, and optimal stimulus design. Prog Brain Res 165:493–507.
31. Hille B (1984) Ionic Channels of Excitable Membranes (Sinauer, Sunderland, MA), 2nd Ed.
32. Storm JF (1988) Temporal integration by a slowly inactivating K+ current in hippocampal neurons. Nature 336:379–381.
33. Sobel EC, Tank DW (1994) In vivo Ca2+ dynamics in a cricket auditory neuron: An example of chemical computation. Science 263:823–825.
34. Wang XJ (1998) Calcium coding and adaptive temporal computation in cortical pyramidal neurons. J Neurophysiol 79:1549–1566.
35. Wang XJ, Liu Y, Sanchez-Vives MV, McCormick DA (2003) Adaptation and temporal decorrelation by single neurons in the primary visual cortex. J Neurophysiol 89:3279–3293.
36. Kara P, Reinagel P, Reid RC (2000) Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron 27:635–646.
37. Morris C, Lecar H (1981) Voltage oscillations in the barnacle giant muscle fiber. Biophys J 35:193–213.
38. Rinzel J, Ermentrout B (1998) Methods in Neuronal Modeling, eds Koch C, Segev I (MIT Press, Cambridge, MA), 2nd Ed.
39. Lasota A, Mackey MC (1994) Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics (Springer-Verlag, New York), 2nd Ed.
40. Nesse WH, Del Negro CA, Bressloff PC (2008) Oscillation regularity in noise-driven excitable systems with multi-time-scale adaptation. Phys Rev Lett 101:088101.
41. Plesser HE, Gerstner W (2000) Noise in integrate-and-fire neurons: From stochastic input to escape rates. Neural Comput 12:367–384.
42. Lindner B, Schimansky-Geier L, Longtin A (2002) Maximizing spike train coherence or incoherence in the leaky integrate-and-fire model. Phys Rev E 66:031916.
43. Cover T, Thomas J (1991) Elements of Information Theory (Wiley-Interscience, New York).
44. Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432–1438.
45. Pfister JP, Dayan P, Lengyel M (2009) Know thy neighbour: A normative theory of synaptic depression. Adv Neural Inf Process Syst 22:1464–1472.
46. Polsky A, Mel B, Schiller J (2009) Encoding and decoding bursts by NMDA spikes in basal dendrites of layer 5 pyramidal neurons. J Neurosci 29:11891–11903.

6 of 6 ∣ www.pnas.org/cgi/doi/10.1073/pnas.1008587107
Nesse et al.