The Analysis of Synaptically Generated Traveling Waves

Bard Ermentrout
Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260

October 2, 1997

Abstract: Mathematical and computational models for the propagation of activity in excitatorily coupled neurons are simulated and analyzed. The basic measurable quantity, velocity, is found for a wide class of models. Numerical bifurcation techniques, asymptotic analysis, and numerical simulations are used to show that there are distinct scaling laws for the velocity as a function of a variety of parameters. In particular, the obvious linear relationships between speed and spatial spread or synaptic decay rate are shown. More surprisingly, it is shown that the velocity scales as a power law with synaptic coupling strength and that the exponent depends only on the rising phase of the synapse.

Supported in part by NSF DMS-96-26728.


1 Introduction

There has been a great deal of recent interest in the propagation of traveling waves in neural tissue (Chagnac-Amitai and Connors, 1989; Kim et al., 1995; Kleinfeld et al., 1994; Traub et al., 1993). The advent of voltage-sensitive dyes, intrinsic imaging techniques, and multi-electrode recordings has allowed physiologists to monitor the spatio-temporal activity of a variety of neural systems. Chervin et al. (1988) first measured propagation of activity in cortical slices that were treated with BMI to block synaptic inhibition. They found traveling bursts of activity. More recently, Golomb and Amitai (1997) have modeled this system using a one-dimensional network of single-compartment neurons. They quantified many of the previous results both computationally and physiologically.

The existence of wave phenomena in cortical tissue has been known for some time (Burns, 1950; Petsche et al., 1974). More recently, a number of investigators have studied propagation in slices of cortex, hippocampus, and thalamus using multiple electrodes and voltage-sensitive dyes (Connors and collaborators, 1988, 1989; Traub et al., 1993; Kim et al., 1995; Wu, personal communication). The propagation of these waves is believed to be synaptic in origin and thus is quite different from the propagation of action potentials, which is mediated by diffusion. There have been a number of recent computational models of these systems in which the slice is treated as a spatially distributed one-dimensional system (Traub et al., 1993; Golomb et al., 1996; Destexhe et al., 1996; Golomb and Amitai, 1997). In all of these models, the membrane properties of individual neurons are explicitly modeled, and the resulting models thus appear to be mathematically intractable. In particular, which aspects of the model determine the velocity of the waves is often difficult to extract from the computational models. Mathematical analysis of simplified membrane models can often lead to general principles that hold for the more detailed computational models.

In this paper, we consider a very general class of model neurons that are coupled together in an excitatory fashion. We are mainly interested in the initial wave of excitation and how the velocity of this wave depends on parameters such as reversal potential, threshold, and synaptic parameters. We start with a simple model with discrete nearest-neighbor coupling and instantaneous synapses. We then study increasingly complicated models, ending with a spatially continuous biophysical model of the form simulated by Golomb and Amitai. We discuss the possibility of re-excitation due to the persistence of the synapses, and we derive a scaling law for the velocity as a function of the synaptic coupling strength.

2 General considerations and some simulations

We start by showing a few simulations and some examples of what we can expect to happen. Our first example is a line of locally coupled neurons, with each cell modeled as a single compartment obeying simple spiking dynamics. (In this case, we use the fast sodium and delayed rectifier kinetics of Traub's model for cortical cells; the equations are in the appendix.) The equations are

    C dV_j/dt = -I_ion(V_j, ...) - I_j^syn(t)                                (2.1)

where I_ion are the local ionic conductances and

    I_j^syn(t) = g_syn Σ_k w(|j - k|) s_k(t) (V_j - V_syn).                  (2.2)
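The discrete model (2.1)-(2.2) is straightforward to set up numerically. The sketch below is a minimal illustration, not the Traub-kinetics simulation of the paper; the grid spacing, cell count, and parameter values are hypothetical:

```python
import numpy as np

def coupling_matrix(n, sigma=1.0, dx=0.05):
    """Distance-dependent Gaussian weights w(|j - k|); each row is
    normalized so that the sum of all connections from a cell is 1."""
    x = dx * np.arange(n)
    W = np.exp(-((x[:, None] - x[None, :]) / sigma) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def syn_current(V, s, W, g_syn=0.1, V_syn=0.0):
    """I_j^syn = g_syn * sum_k w(|j - k|) s_k * (V_j - V_syn), as in eq. (2.2)."""
    return g_syn * (W @ s) * (V - V_syn)
```

With an excitatory reversal potential (V_syn = 0 mV) and cells near rest, the current V_j - V_syn is negative, so the term -I_j^syn in (2.1) depolarizes the membrane.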

The coupling between cells is modeled as a distance-dependent weight such as a Gaussian, exponential, or square-wave function, normalized so that the sum of all connections is 1. In the limit as the number of cells becomes infinite, this approximates a continuous convolution, which we analyze below. The parameter g_syn is the maximal synaptic conductance that would be generated if all synaptic channels were open. The functions s_k(t) are the synaptic gating variables. There are several ways in which one can model these. The simplest is to give them a specified form such as

    s_j(t) = α((t - t_j)_+)                                                  (2.3)

where (x)_+ is zero if x < 0 and x otherwise; t_j is the time of firing of cell j; and α(t) is some type of "alpha" function such as e^{-βt} or t e^{-βt}. Another method, used by Golomb et al. (1996, 1997), is to have the synapses obey a differential equation like any other conductance:

    ds_j/dt = K(V_j(t))(1 - s_j) - β s_j                                     (2.4)

where, e.g., K(V) = K_0/(1 + e^{-(V - V_T)/V_shp}).
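The two ways of modeling the gating variables, the prescribed form (2.3) and the voltage-driven ODE (2.4), can be sketched side by side. The parameter values here are illustrative, not those used in the figures:

```python
import numpy as np

def alpha_fn(t, beta=4.0):
    """Prescribed gate, eq. (2.3): an 'alpha' function beta^2*t*exp(-beta*t),
    normalized to unit area and zero for t < 0."""
    return np.where(t < 0.0, 0.0, beta**2 * t * np.exp(-beta * t))

def gate_ode(V, dt=0.01, K0=2.0, beta=0.5, VT=-30.0, Vshp=5.0):
    """Forward-Euler integration of eq. (2.4): ds/dt = K(V)(1 - s) - beta*s,
    with K(V) = K0/(1 + exp(-(V - VT)/Vshp)), driven by a voltage trace V."""
    s = np.zeros_like(V)
    for i in range(1, len(V)):
        K = K0 / (1.0 + np.exp(-(V[i - 1] - VT) / Vshp))
        s[i] = s[i - 1] + dt * (K * (1.0 - s[i - 1]) - beta * s[i - 1])
    return s
```

The prescribed form (2.3) needs the firing time t_j as an input; the ODE form (2.4) needs only the presynaptic voltage trace, which is why it composes naturally with conductance-based membrane models.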

In this case, when V_j crosses V_T, K(V_j) becomes large and s_j rises to a value close to 1 at a rate proportional to K_0. After the voltage spike terminates, s_j decays to 0 with a time constant τ = β^{-1}. This synaptic model was first used by Wang and Rinzel (1992) and theoretically justified by Destexhe et al. (1994).

We assume that the dynamics of a single cell is "excitable"; that is, there is a rest state, and sufficient excitation will cause the cell to fire an action potential before returning to rest. Figure 1a shows a simulation of a line of 200 cells representing a length of tissue 10 spatial units long (each cell is connected to 20 neighbors on either side). An initial depolarization of

the cells near the left edge is given. The result is a single traveling pulse of activity and a return to rest. This is in many ways similar to the action potential generated by axonal models, except that the coupling is via synaptic transmission and not via diffusion of the potential.

The important macroscopic quantity that can be readily measured is the velocity of propagation. Since the space constant of the connectivity is 1 unit, the velocity is roughly 400 units/sec. This is the same order of magnitude (but about 2-3 times as fast) as Golomb et al. report in their simulations (1997). Typical wave speeds measured experimentally range from 1 to 10 cm/sec. If we assume the length scale of connections in our simulations corresponds to 1 mm, then a velocity of 10 cm/sec is 100 units/sec.

The velocity depends on many other aspects of the model and on the dynamics and strength of the synapses. We will see this below. In Figure 1a, the synaptic time constant is very small (on the order of 0.25 msec). If the time constant is larger, say 1 msec, then the result of the same initial depolarization is shown in Figure 1b. There are several differences: (i) the velocity is faster; and (ii) there is a "wake" of oscillation that follows the initial excitation. The velocity is expected to be larger since the total synaptic volley is larger, due to the longer persistence of the synaptic conductance. The oscillations that are left behind are a consequence of the synaptic persistence and the very fast recovery time for this model. The synaptic conductance remains high enough to take the cells past threshold even though they have recently fired. The mechanism for the switch between a single event and periodic wave trains is complicated and will be the subject of another paper. Figure 1c shows a simulation with a time constant of 0.5 msec. This is near the transition between a solitary pulse and periodic wave trains. There is a long delay before the second pulse is able to propagate; successive pulses are delayed less and less, with a regular wavetrain ultimately occurring. The wavelength of the plane wave and the frequency of the oscillations remain an unsolved problem. We will comment on this in the discussion.

The solitary pulse occurs only when the synaptic time is (unrealistically) small. The speed of the synapses required for a single pulse is a function of the model dynamics; the Traub model has a very short refractory period. If we use different dynamics, or include additional outward currents (such as a slow potassium M current), then it is possible to increase the refractory period and thus allow for solitary waves with more reasonable synaptic time courses. In Figure 2a, I show the result of a similar calculation applied to the Traub model with a relatively fast (100 msec time constant) M current. In this case, the synaptic dynamics are much slower, with a decay of 10 msec and K_0 = 2/msec. This is similar to the synaptic parameters that Golomb and Amitai use in their simulations. The velocity of this wave is about 250 spatial units per second, in the range of the Golomb and Amitai calculations. Figure 2b shows a calculation using the Hodgkin-Huxley equations instead of the Traub model. These equations lead to pulses with much longer refractory periods, and thus the re-excitation requires a much longer time constant than in the Traub model. Finally, Figure 2c shows the Traub model with a slower M current. This slowly rising current requires several pulses to build up enough to suppress the re-excitation. Thus, we obtain a realistic-looking propagating burst of activity across the tissue. Note that the velocity of this burst is almost the same as that of the solitary wave in Figure 2a; the small difference is caused by the additional pulse excitation. An open theoretical question is how the number of pulses depends on the adaptation time constant. The fact that the additional pulses do not have a big effect on the velocity


of the excitation is important, since if we assume a single pulse, then the analysis of these waves is drastically simplified. Our goal in the remainder of this paper is to understand how the velocity of these waves depends on a variety of parameters such as the coupling strength, the shape of the synaptic conductance, the spiking threshold of the neurons, etc. We consider general spatially continuous models. We start with an integrate-and-fire system and use this as motivation for our general results on biophysically based conduction models. Here we derive the scaling result for velocity as a function of the conductance strength.

3 Continuum Models

There has been a great deal of recent interest in treating neural tissue as a continuum in order to study wave propagation in cortical slices. Golomb et al. (1996, 1997) have been at the forefront of this line of work and have written a number of papers simulating one-dimensional slices. In this section, we consider models of the following form:

    C ∂V(x,t)/∂t = -I_ion - I_syn(x,t)                                       (3.1)

where

    I_syn(x,t) = g_syn (V(x,t) - V_syn) ∫_{x_0}^{x_1} w(x - y) s(y,t) dy.    (3.2)

In addition, there are all of the recovery and associated gating variables for the membrane model. The one-dimensional slice is defined on the interval [x_0, x_1], which we will eventually take to be the real line in order to avoid boundary effects and to look for traveling wave solutions. The function w(x - y) is the connection strength between neurons. It depends only on the distance between cells and is generally symmetric: w(x) = w(-x). Typical examples

are

    w(x) = (1/(2σ)) e^{-|x|/σ}                          (Exponential)        (3.3)

    w(x) = (1/(σ√π)) e^{-(x/σ)^2}                       (Gaussian)           (3.4)

    w(x) = 1/(2σ) for |x| < σ,  0 for |x| ≥ σ           (Square)             (3.5)
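The normalizations in (3.3)-(3.5) can be verified by direct quadrature; a quick numerical check with σ = 1:

```python
import numpy as np

sigma = 1.0
x = np.linspace(-12.0, 12.0, 480001)   # wide enough that the tails are negligible
dx = x[1] - x[0]

w_exp    = np.exp(-np.abs(x) / sigma) / (2.0 * sigma)             # eq. (3.3)
w_gauss  = np.exp(-(x / sigma) ** 2) / (sigma * np.sqrt(np.pi))   # eq. (3.4)
w_square = np.where(np.abs(x) < sigma, 1.0 / (2.0 * sigma), 0.0)  # eq. (3.5)

for w in (w_exp, w_gauss, w_square):
    assert abs(w.sum() * dx - 1.0) < 1e-3   # each kernel integrates to 1
```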

The parameter σ is called the footprint or space constant of the network and characterizes the spatial extent of the coupling. Since the spatial domain can be rescaled by the space constant, the velocity (which is Δx divided by Δt) scales linearly with the space constant or footprint: doubling σ doubles the velocity of propagation. Each of the connection-strength functions has been normalized so that its integral over the real line is 1.

Golomb and Amitai (1997) use essentially the same formulation but include effects such as synaptic fatigue in their model. For the solitary pulses we analyze here, the issue of fatigue is unimportant. For slower (more realistic) synaptic time courses, as in Figure 1b, we do not see solitary waves but instead re-excited activity. Synaptic fatigue and slow outward currents such as adaptation then play an important role in terminating the discharge (cf. Figure 2c). We will comment on these issues in the discussion. For now, we are interested only in the propagation of the initial excitation front.

Traveling wave solutions to (3.1)-(3.2) will be of the form V(x,t) = V(νt - x), where ξ ≡ νt - x is the traveling coordinate and ν is the velocity. Similar forms hold for all the gating variables involved in the ionic currents. Suppose, as above, that the synaptic gating variables are alpha functions. If a cell at position x produces exactly one spike at a time t*(x), then s(x,t) = α(t - t*(x)) for t > t*(x) and is 0 otherwise. (If cells spike multiple times, as they will for synapses that persist, then s(x,t) is more complicated. In simulations

below, we show that multiple spikes do not have that much of an impact on the velocity of the initial spike.) If there is a constant-speed traveling wave, then the time of spiking of a cell at x, t*(x), is just t*(x) = x/ν. Thus,

    s(x,t) = α(t - x/ν) = α(ξ/ν).

Since the traveling wave can be translated along the coordinate ξ, we will assume that the cell crosses the synaptic threshold at ξ = 0. For ξ > 0 cells have already fired, and for ξ < 0 no cells have fired. Thus, for ξ < 0 the synaptic gating is zero. The total conductance felt by the cell at x is

    g_syn s_tot(x,t) = g_syn s_tot(ξ) = g_syn ∫_0^∞ w(ξ - ξ') α(ξ'/ν) dξ'    (3.6)

where we have used the fact that s = 0 before the cell fires. Figure 3A shows the voltage profile and the synaptic gating variable in the traveling frame. The wave travels to the left with velocity ν. Figure 3B shows the total conductance felt by a cell; the reason that this extends to the left is that there are spatial connections between cells that have not fired and cells that have fired. Indeed, this is what causes cells at rest to be excited into firing. The voltage equation satisfies

    ν C dV/dξ = -I_ion - g_syn s_tot(ξ)(V(ξ) - V_syn)                        (3.7)

and the auxiliary gating variables (m, n, h, etc.) have the form

    ν dq/dξ = a_q(V)(1 - q) - b_q(V) q,                                      (3.8)

where q is any of the gating variables. A solitary pulse must tend to the rest state at ξ = ±∞. The voltage must also cross threshold exactly at ξ = 0. Hence we are interested in finding solutions to these equations that satisfy the following conditions:

(i) V(ξ) → V_rest and q(ξ) → q_rest as ξ → ∞;
(ii) V(0) = V_T;
(iii) V(ξ) → V_rest and q(ξ) → q_rest as ξ → -∞.

The first condition is easy to obtain since ν is a positive constant and the rest state for a single cell is stable. Thus, satisfying it is automatic; it does not constrain the solution, and so it plays no role in determining the velocity of the first spike. However, it is possible that if the synapses persist for too long, the cell will fire repeatedly before settling down to rest. This renders our initial ansatz incorrect, since there will be multiple firings. For now, we ignore this issue. The difficult condition is (iii), which we cannot expect to hold except for special values of ν. This is because in the absence of g_syn the traveling wave equation has n unstable directions, where n is the total number of variables. Thus, if we get close to rest, the dynamics (backward in ξ) will take us away exponentially fast. However, the presence of the synaptic conductance saves the day. It is clear that, since the synaptic weights decay far away, s_tot(ξ) tends to zero for large negative ξ. Thus, there is hope that by choosing ν carefully we may be able to "push" the voltage toward the rest state. This will "push" the other variables to their rest states.

To give some feel for this last point, consider the following simple equation:

    ν v' = -v + g e^{δξ}

for ξ < 0, with v(0) = v_T. Think of this like the voltage equation. The exponential term represents the synaptic conductance felt by the cell. We want to solve this problem such that v(-∞) = 0. The solution is easy to write down:

    v(ξ) = [g/(1 + νδ)] e^{δξ} + [v_T - g/(1 + νδ)] e^{-ξ/ν}.

Clearly, this will decay to 0 as ξ → -∞ only if the coefficient of the second exponential vanishes. This determines ν in terms of the known parameters:

    ν = (g - v_T)/(δ v_T).

Thus, intuitively, we see why there is a possibility of solving the boundary value problem and how various parameters determine the speed. (Below, we will apply these ideas to the solution of an integrate-and-fire model and write a closed-form solution for the velocity.)

Mathematically, we must prove that a solution to the boundary value problem exists. Since we do not expect there to be a solution for all possible parameters, this presents a difficult mathematical question in general. The main advantage of this approach is that we have reduced the problem to an ordinary differential equation instead of an infinite-dimensional integral equation. (See, for example, Idiart and Abbott, 1993; Ermentrout and McLeod, 1993; or Chen et al., 1997a,b.) This is a challenging problem numerically. However, if we add one more restriction, we reduce it to a boundary value problem on the unit interval, which becomes very easy to solve using numerical methods.

So far we have made no assumptions about w(x) other than that it is integrable, nonnegative, and symmetric. Suppose that its extent is finite. From a biological point of view, this is completely reasonable: it says that there are no connections between cells that are farther apart than some prescribed distance. Let σ be the extent in either direction of w(x). Then we see that for ξ < -σ, s_tot(ξ) = 0. (Note that it is not true that s_tot(ξ) = 0 for ξ > σ, since ξ > 0 represents the effects of cells after firing, and their activity can persist for arbitrarily long times.) Thus, for ξ < -σ there is no more synaptic current, so the cell had better be at rest at this point, for otherwise the exponentially unstable dynamics will pull the cell away from

rest as ξ → -∞. The third condition is replaced by

    (V(-σ), q(-σ)) = (V_rest, q_rest).                                       (3.9)
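The scalar example above can be checked numerically. With the illustrative values g = 2, δ = 1, v_T = 1 (hypothetical units), the predicted speed is ν = (g - v_T)/(δ v_T) = 1, and the bounded branch of the solution indeed satisfies the ODE, the threshold condition, and decay to rest:

```python
import numpy as np

g, delta, vT = 2.0, 1.0, 1.0
nu = (g - vT) / (delta * vT)     # predicted velocity from the vanishing coefficient

def v(xi):
    """Bounded solution: the coefficient of the growing mode exp(-xi/nu) is zero."""
    return g / (1.0 + nu * delta) * np.exp(delta * xi)

xi = np.linspace(-10.0, 0.0, 2001)
lhs = nu * np.gradient(v(xi), xi)                    # nu * v'
rhs = -v(xi) + g * np.exp(delta * xi)
assert np.max(np.abs((lhs - rhs)[1:-1])) < 1e-3      # the ODE holds (interior points)
assert abs(v(0.0) - vT) < 1e-12                      # threshold condition at xi = 0
assert v(-10.0) < 1e-4                               # decays to rest as xi -> -infinity
```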

We can rescale space by σ, so we may assume without loss of generality that σ = 1. Thus, we must solve the n differential equations on the interval [-1, 0] with the n + 1 conditions (3.9) and

    V(0) = V_T                                                               (3.10)

where V_T is the potential at which the synapse turns on. There is one free parameter, ν, so that even though there are n equations and n + 1 conditions, we have hope for a solution. The amazing thing is that we need only solve this n-dimensional system on the interval [-1, 0] to find an expression for the velocity of the traveling waves. Thus, even though the interactions are time-dependent and the spatial coupling is nonlocal, the numerical solution of the traveling wave equation is easier than the analogous problem for propagation on a cable.
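As a concrete sketch of this reduced boundary value problem, consider the integrate-and-fire caricature with a square footprint (σ = 1) and an instantly rising exponential synapse, for which the leading-edge conductance on [-1, 0] works out to s_tot(ξ) = (ν/2)(1 - e^{-β(ξ+1)/ν}). Shooting for ν by bisection (parameter values illustrative, assumed for this sketch):

```python
import numpy as np

beta, g, VT = 2.0, 1.0, 0.15   # g stands for g_syn*Delta; values are illustrative

def stot(xi, nu):
    """Leading-edge conductance on [-1, 0]: square footprint (sigma = 1)
    with the instantly rising synapse alpha(t) = beta*exp(-beta*t)."""
    return 0.5 * nu * (1.0 - np.exp(-beta * (xi + 1.0) / nu))

def shoot(nu, n=2000):
    """Integrate nu*V' = -V + g*stot(xi) from V(-1) = 0 with RK4; return V(0)."""
    f = lambda x, v: (-v + g * stot(x, nu)) / nu
    xi, d, V = -1.0, 1.0 / n, 0.0
    for _ in range(n):
        k1 = f(xi, V)
        k2 = f(xi + d / 2, V + d * k1 / 2)
        k3 = f(xi + d / 2, V + d * k2 / 2)
        k4 = f(xi + d, V + d * k3)
        V += d * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        xi += d
    return V

lo, hi = 1.0, 5.0              # bracket the fast branch, where V(0) decreases in nu
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > VT:
        lo = mid
    else:
        hi = mid
nu_fast = 0.5 * (lo + hi)      # wave speed satisfying the threshold condition V(0) = VT
```

The recovered nu_fast agrees with the closed-form dispersion relation worked out for this same caricature in Example 1 below.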

3.1 The integrate-and-fire model

As an example which is exactly solvable, we consider the integrate-and-fire model:

    ∂V(x,t)/∂t = -V(x,t) + I_syn(x,t)                                        (3.11)

where

    I_syn(x,t) = g_syn Δ ∫_{-∞}^{∞} w(x - x') α(t - t*(x')) dx'.             (3.12)

As above, t*(x) is the time at which V crosses the threshold V_T, and Δ is the voltage difference between V_syn and the rest state (which is 0 in this model). We look for traveling wave solutions V(νt - x) ≡ V(ξ), where ν is the velocity of the waves and ξ is the traveling coordinate. This leads to the following equation:

    ν dV/dξ = -V(ξ) + g_syn Δ ∫_0^∞ w(ξ - ξ') α(ξ'/ν) dξ' ≡ -V(ξ) + I_syn(ξ; ν),   (3.13)

with the following conditions:

(i) V(ξ) → 0 as ξ → ∞;
(ii) V(0) = V_T;
(iii) V(ξ) → 0 as ξ → -∞.

Equation (3.13) is just a linear differential equation whose solution is

    V(ξ) = e^{-ξ/ν} [ V_T - (1/ν) ∫_ξ^0 e^{ξ'/ν} I_syn(ξ'; ν) dξ' ].

In order for this to be bounded as ξ → -∞, the term inside the brackets must vanish as ξ → -∞. This gives an implicit relationship between the wave velocity ν and the threshold:

    V_T = (g_syn Δ / ν) ∫_{-∞}^0 dξ e^{ξ/ν} ∫_0^∞ dξ' w(ξ - ξ') α(ξ'/ν).    (3.14)

This rather baroque expression can be evaluated for a number of simple cases, yielding a relationship between the parameters and the velocity. Before turning to some examples, note that the combination of parameters V_T/(g_syn Δ) is one determinant of the velocity: increases in the threshold accompanied by proportional increases in the synaptic strength will make no difference. This is an "artifact" of the integrate-and-fire dynamics; the threshold, maximal conductance, and voltage difference will not generally be related this simply. Another observation is that scaling the spatial footprint in the weight function w just scales the velocity by the same amount: the velocity is linearly related to the footprint. (As we observed earlier, the linear dependence of velocity on the footprint occurs in any spatially continuous model.) Rescaling the temporal characteristics of the synapses cannot simply be scaled into the velocity.

EXAMPLE 1. Suppose that w(x) is the square footprint with σ = 1 defined by (3.5). Suppose α(t) = β e^{-βt}, the instantly rising exponential alpha function. Then

    Q ≡ V_thr/(g_syn Δ) = (ν/2) [ 1 + e^{-β/ν}/(β - 1) - β e^{-1/ν}/(β - 1) ].

Since we are mainly interested in how the velocity depends on the synaptic conductance, which is proportional to 1/Q, we plot 1/Q versus ν in Figure 4a. For each value of the synaptic conductance above a minimal value there are two possible values of the velocity, one fast and one slow. Based on numerical simulations, we think that the faster waves are stable and the slower ones are unstable. A rigorous stability analysis has not yet been done. Once past the minimum of the curve, the plots look essentially linear for large ν. In fact, a simple calculation yields

    ν ≈ β g_syn Δ / (4 V_T).

Thus, for an instantaneous, exponentially decaying synapse, the velocity is asymptotically linear with respect to synaptic strength. Figure 4a shows that persistent synapses produce waves that are slower than those from short-acting synapses. This is due to our normalization of the total area under the conductance. Long-lasting synapses with our choice of normalization are also small in amplitude, since the total area under the alpha function is always 1. If no normalization is made, then persistent synapses lead to slightly higher velocities; but this effect is small, since the onset and not the decay of the synapse determines wave speed.
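The two-branch structure and the large-ν asymptote can be read off numerically from the closed form (β = 2 here purely for illustration):

```python
import numpy as np

beta = 2.0

def Q(nu):
    """Q = V_T/(g_syn*Delta) for the square footprint + exponential synapse (Example 1)."""
    return 0.5 * nu * (1.0 + np.exp(-beta / nu) / (beta - 1.0)
                       - beta * np.exp(-1.0 / nu) / (beta - 1.0))

nu = np.linspace(0.05, 50.0, 20000)
q = Q(nu)
i_max = int(np.argmax(q))     # Q has a single interior maximum, so for a fixed
                              # conductance (fixed Q) there are two speeds
crossings = int(np.sum(np.diff(np.sign(q - 0.15)) != 0))   # fast and slow roots
```

The asymptote Q ≈ β/(4ν), i.e. ν ≈ β g_syn Δ/(4 V_T), is already accurate to about one percent by ν = 200 for these parameters.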


EXAMPLE 2. Instead of the exponential synaptic function, we consider the double exponential:

    α(t) = (e^{-βt} - e^{-γt}) / (1/β - 1/γ).

In the limit as γ → β this goes to the usual alpha function, β² t e^{-βt}, and as γ → ∞ it approaches the simple exponential. γ is related to the rise time of the synapse and β is the decay rate. As in the previous case, a closed-form expression for Q can be obtained; we do not display it here. Instead, Figure 4b shows the reciprocal of Q as a function of the velocity. The curves are qualitatively the same as those in Figure 4a, with one major exception: the velocity and the synaptic conductance seem to be related in a quadratic fashion, not linearly. That is, ν ∝ √g_syn. We will see later that the growth of the velocity with the synaptic conductance depends on the way the synaptic conductance rises from 0. In the exponential case it jumps instantly to its maximum, whereas here the rise is linear near 0.

EXAMPLE 3. In this last integrate-and-fire example, we look at the double-exponential synapse along with the exponential connection function (3.3) with a footprint of σ = 1. We obtain the rather compact expression

    Q ≡ V_T/(g_syn Δ) = ν β γ / [ 2 (1 + ν)(β + ν)(γ + ν) ].

The square-root dependence of the velocity on the synaptic conductance is evident from the formula; for large ν we obtain

    ν ≈ sqrt( g_syn Δ β γ / (2 V_T) ).

Thus, the square-root dependence does not depend on the details of the footprint. If one looks at Figure 5 in Golomb and Amitai (1997), the relationship between conductance and velocity seems to lie between linear and square root.
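The square-root law can be checked directly from the Example 3 expression. Here the rates β = 2, γ = 10 are illustrative and the conductance is folded into the single number G = g_syn Δ/V_T (an assumption of this sketch, not notation from the text):

```python
import numpy as np

beta, gamma = 2.0, 10.0   # decay and rise rates of the double exponential

def Q(nu):
    """Exponential footprint + double-exponential synapse (Example 3)."""
    return nu * beta * gamma / (2.0 * (1.0 + nu) * (beta + nu) * (gamma + nu))

def speed(G):
    """Solve Q(nu) = 1/G on the decreasing (fast) branch by bisection in log nu."""
    lo, hi = 20.0, 1e7
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if Q(mid) * G > 1.0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# quadrupling the conductance should roughly double the fast velocity
v1, v2 = speed(1e4), speed(4e4)
```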


3.2 General biophysical models

We end this section with a numerical calculation for a biophysical model. The numerics suggest an asymptotic form for the velocity as a function of the synaptic conductance: a power-law relationship that depends only on the rising phase of the synaptic conductance. We use the model of Figure 1a, based on the fast sodium and delayed rectifier kinetics of Traub's cortical cell models, and a square weight function with footprint 1. 201 cells representing a length of 10 footprints are simulated using Runge-Kutta with a time step of 0.025 msec. The reversal potential for the excitatory synapses is V_syn = 0 mV. When the ith cell crosses V_T = -30 mV from below, the synaptic variable s_i is set to 1 and then decays exponentially at a rate β. Figure 1a shows that the firing time t*(x) is linear except near the boundaries. Thus, we can compute the slope of the firing time away from the boundaries. The inverse of this gives the velocity:

    ν ≈ 2.5/(t*(5) - t*(2.5)).

We can repeat this calculation for a range of parameters, in the present case g_syn. To compare, we solve the boundary value problem described above. We do this by shooting by hand for a fixed set of parameters until we get a good value for ν. Then we use the continuation program AUTO, as implemented in the author's dynamics package XPPAUT, to compute the velocity as a function of the desired parameter. Since AUTO does not solve nonautonomous systems, we have extended the dimension by 1 to trick AUTO. Once a good solution is manually found for the boundary value problem, the continuation allows us to compute an entire branch of solutions very quickly. Furthermore, the stability of the solutions is irrelevant to the calculation, so unlike the solution via simulation, we can also compute the velocity of unstable waves. (The details of the numerical method are given in the appendix.) Figure 5 shows the solution to the boundary value problem, as well as the velocities computed by simulating the full model over a range of values of g_syn with β = 4. The difference between the two calculations is negligible except for large values of the conductance (where the difference is likely due to numerical problems with the full simulation). The picture is qualitatively similar to the integrate-and-fire case, with the fast velocity nearly linear for large g_syn and the slow velocity almost independent of the parameters.
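The velocity estimate from the firing times can be packaged as a small helper. The function name and the synthetic check below are ours, not part of the simulation code described above:

```python
import numpy as np

def velocity_from_firing_times(x, t_star, x_lo=2.5, x_hi=5.0):
    """Estimate the wave speed as the inverse slope of the firing-time curve
    t*(x), using only the interior of the slice to avoid boundary effects."""
    m = (x >= x_lo) & (x <= x_hi)
    slope = np.polyfit(x[m], t_star[m], 1)[0]   # dt*/dx
    return 1.0 / slope

# synthetic check: a wave moving at 400 units/sec gives t*(x) = x/400
x = np.linspace(0.0, 10.0, 201)
t_star = x / 400.0
```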

3.2.1 Perturbation calculation

For large g_syn we can do a perturbation calculation. We make no a priori assumptions about the dependence of the velocity on the synaptic conductance other than that it increases with increasing conductance. We will see that the asymptotic behavior depends critically on the behavior of the total synaptic input. This is given by

    s_tot(ξ; ν) = ∫_0^∞ w(ξ - ξ') α(ξ'/ν) dξ'.                               (3.15)

(For the numerical case in Figure 5, s_tot(ξ; ν) = (ν/2)(1 - e^{-β(ξ+1)/ν}).) Suppose that as ν → ∞,

    s_tot(ξ; ν) ≈ S(ξ) ν^{-p}.

For example, if α(t) is the exponential function, then p = 0, since as ν → ∞, α(ξ'/ν) → α(0) = β. If α is the double exponential or the linear exponential function, then p = 1. That is, for the instantly rising exponential function, s_tot is O(1) in the velocity, while for the classic alpha function, t e^{-βt}, or the double exponential, (e^{-βt} - e^{-γt})/(γ - β), s_tot gets smaller with larger ν. Since the term w(ξ - ξ') in the integral is bounded and independent of ν, the

determining factor for the asymptotics is only the synaptic time course α(t). For ν large,

    α(t/ν) = α(0) + α'(0) t/ν + ...

so the behavior of the synapse at initiation determines the asymptotics. For the exponential function, α(0) = β, so s_tot is O(1) in ν, while for the double exponential and the linear exponential, α(0) = 0 but α'(0) ≠ 0, so the asymptotics is O(1/ν). If we took as an example α(t) = t² e^{-βt}, then, since the rising phase is quadratic in t, p = 2 and s_tot ∼ ν^{-2}.

We now assert that

    ν^{p+1} ∝ g_syn

is the asymptotic behavior of the velocity for large g_syn. Let s_tot(ξ; ν) = S(ξ) ν^{-p} as above, where p is the order of the first nonzero term of the Taylor series of α(t) around 0. Then for large g_syn and ν the differential equations are

    V' = -I_ion/ν - g_syn ν^{-(p+1)} S(ξ)(V - V_syn)                         (3.16)
    q' = (a_q(V)(1 - q) - b_q(V) q)/ν.                                       (3.17)

For ν large, the q equation is essentially q' = 0, so we take q = q_rest to satisfy the boundary conditions. The ionic component of the voltage equation is also small, since again ν is large. Since g_syn is large, we will get "cancellation" with the small ν^{-(p+1)} if we choose κ ≡ g_syn/ν^{p+1} to be a constant. This constant is unknown, but it is clear now that we have the desired scaling ν^{p+1} ∝ g_syn. With this substitution, we have the simple linear system as ν → ∞:

    V' = -κ S(ξ)(V - V_syn)

with V(0) = V_T and V(-∞) = V_rest. (Note that for weight functions that extend only one footprint away, the condition is at ξ = -1 instead of ξ = -∞.)
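The supposition s_tot(ξ; ν) ≈ S(ξ) ν^{-p} can be checked by direct quadrature of (3.15). With a square footprint and ξ = -0.5 (rates illustrative), doubling ν should leave s_tot essentially unchanged for the exponential synapse (p = 0) and halve it for the alpha-function synapse (p = 1):

```python
import numpy as np

def s_tot(xi, nu, alpha, sigma=1.0, L=2.0, n=40001):
    """s_tot(xi; nu) = integral_0^inf w(xi - x') alpha(x'/nu) dx' for a square
    footprint; the integrand vanishes for x' > xi + sigma, so a short grid suffices."""
    xp = np.linspace(0.0, L, n)
    dx = xp[1] - xp[0]
    w = np.where(np.abs(xi - xp) < sigma, 0.5 / sigma, 0.0)
    return float(np.sum(w * alpha(xp / nu)) * dx)

beta = 2.0
exp_syn   = lambda t: beta * np.exp(-beta * t)          # alpha(0) != 0  =>  p = 0
alpha_syn = lambda t: beta**2 * t * np.exp(-beta * t)   # alpha(0) = 0   =>  p = 1

# ratios s_tot(nu=200)/s_tot(nu=400): expect ~2**p
r0 = s_tot(-0.5, 200.0, exp_syn)   / s_tot(-0.5, 400.0, exp_syn)
r1 = s_tot(-0.5, 200.0, alpha_syn) / s_tot(-0.5, 400.0, alpha_syn)
```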

The solution to this linear differential equation is

    ln[(V_syn - V(ξ))/(V_syn - V_T)] = κ ∫_ξ^0 S(t) dt.

The condition at ξ = -1 or ξ = -∞ determines κ:

    κ = ln[(V_syn - V_rest)/(V_syn - V_thr)] / ∫_{-σ*}^0 S(t) dt

where σ* = 1 or σ* = ∞. Figure 6 shows the numerically computed velocity versus g_syn for the Traub dynamics with exponential and alpha-function synapses, plotted on a log-log scale. The two lines on the plots have slopes of 1 and 1/2, respectively. The asymptotics give a good fit even for reasonably small values of the maximal synaptic conductance and at fairly low velocities. Note also that the velocity for the exponential synapses is much larger than that for the alpha function. This is intuitively appealing, since the alpha-function synapse must integrate up from s = 0 rather than starting at some large positive value.

We have shown that for any membrane model, the velocity for strong synaptic conductance is essentially independent of the ionic details and depends only on the effective rise time of the synapse, the "synaptic footprint," and the resting, synaptic reversal, and synaptic threshold potentials. This makes intuitive sense since, as we have seen, the velocity depends mainly on the integration time from rest to the synaptic threshold in the presence of the total synaptic driving force. The driving force is independent of the intrinsic ionic properties. For reasonably strong synaptic conductances, this driving force dominates most of the other ionic processes, so that the system behaves almost as if it were a linear integrator, i.e., the integrate-and-fire model. Thus, although the integrate-and-fire model is only a caricature of a true membrane, it captures much of what happens during an excitation wave. The "effective" synaptic rise time is the rise time felt by the neuron at the spike-generating zone. This will depend not only on the synaptic kinetics but also on dendritic properties, attenuation of EPSPs, delays, etc. In the discussion, we contrast the behavior of this model with the analysis of smooth traveling pulses in an inhibitorily coupled system. In such a model, the slow recovery dynamics play a key role in the computation of the velocity of the waves, since the cell can only fire when it rebounds from the inhibition.
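The predicted exponent 1/(p + 1) can be recovered by a log-log fit, here using the integrate-and-fire dispersion relations from Examples 1 and 3 as stand-ins for the full membrane model (rates illustrative; the fast branch is selected by bisection):

```python
import numpy as np

beta, gamma = 2.0, 10.0

def fast_speed(Qfun, G):
    """Solve Q(nu) = 1/G, with G = g_syn*Delta/V_T, on the decreasing (fast)
    branch by bisection in log nu."""
    lo, hi = 10.0, 1e8
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if Qfun(mid) * G > 1.0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

def fitted_exponent(Qfun):
    """Log-log slope of the fast speed versus conductance over two decades."""
    G = np.logspace(5, 7, 5)
    v = np.array([fast_speed(Qfun, g) for g in G])
    return float(np.polyfit(np.log(G), np.log(v), 1)[0])

Q_exp = lambda nu: 0.5 * nu * (1.0 + np.exp(-beta / nu) / (beta - 1.0)
                               - beta * np.exp(-1.0 / nu) / (beta - 1.0))   # p = 0
Q_dbl = lambda nu: nu * beta * gamma / (
    2.0 * (1.0 + nu) * (beta + nu) * (gamma + nu))                          # p = 1

slope_exp = fitted_exponent(Q_exp)   # expected near 1   (nu ~ g_syn)
slope_dbl = fitted_exponent(Q_dbl)   # expected near 1/2 (nu ~ sqrt(g_syn))
```

This mirrors the slopes of 1 and 1/2 drawn in Figure 6.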

Remarks.

1. As this is a perturbation expansion, we have chosen the velocity faster than even the sodium current, so that all gates are held essentially at their resting values. This is not a "problem" for the activation of the waves, since the asymptotically large synaptic conductance overwhelms all other conductances and is the dominant source of depolarization of the membrane toward threshold. The actual action potential plays no role.

2. The asymptotics here assume that the velocity tends to infinity. In reality, the axonal conduction velocity will serve as an upper limit for the velocity, so that in a sense the calculations above are in a "nonbiological" regime. However, from a practical point of view, the real velocity (as computed by solving either the boundary value problem or the full spatially discretized model) for reasonable values of the synaptic strength and velocity agrees well with the asymptotics (see Figure 6). In the present computations and in the asymptotics, we have ignored what happens to the membrane after the wave has passed. We have also assumed that each cell fires only a single spike. This latter assumption is clearly false if the synapses persist longer than some fraction of the refractory period of the cell. In this case, the cell will fire again, perhaps many times. Thus, the simple form for stot, which is defined with the implicit assumption that there is a single firing of each cell, is not correct when there are multiple spikes per cell, and stot is then likely larger than with a single spike. However, if gsyn is large enough, then only the rising phase of stot matters, and multiple spikes would not be expected to have a major effect on the velocity. To this end, we have computed the velocity as a function of the synaptic conductance in a model where we have (i) constrained the cell to fire once or (ii) allowed the cell to fire repeatedly. We take the synaptic decay rate to be .5, corresponding to a 2 msec time constant. This is long enough for there to be re-excitation. Plots of the velocity as a function of gsyn for both the constrained and unconstrained cases are indistinguishable. There is essentially no difference over the entire range of synaptic strengths and velocities.
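The single-spike constraint is easy to impose in a discrete chain of leaky integrate-and-fire cells with exponentially decaying synapses and an exponential footprint. The sketch below (all parameters illustrative, not the values used in the simulations of the text) measures a front velocity from the firing times and shows that it grows with the coupling strength:

```python
import numpy as np

# Chain of leaky integrate-and-fire cells, each constrained to fire once.
# Exponentially decaying synapses; exponential spatial footprint w(x).
def wave_velocity(gsyn, N=200, dx=0.1, sigma=1.0, VT=1.0, dt=0.01, T=80.0):
    x = np.arange(N) * dx
    W = np.exp(-np.abs(x[:, None] - x[None, :]) / sigma) / (2 * sigma) * dx
    V = np.zeros(N)
    s = np.zeros(N)              # per-cell synaptic gate, ds/dt = -s
    fired = np.zeros(N, bool)
    tspike = np.full(N, np.nan)
    # Kick off the wave: cells in the leftmost 2 length units fire at t = 0.
    seed = x < 2.0
    fired[seed] = True; s[seed] = 1.0; tspike[seed] = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        V += dt * (-V + gsyn * (W @ s))
        s += dt * (-s)
        new = (~fired) & (V >= VT)
        fired[new] = True; s[new] = 1.0; tspike[new] = t
        V[new] = 0.0             # reset; fired cells never cross again
    if not fired.all():
        return None              # wave failed to propagate
    i0, i1 = 100, 180            # measure speed well away from the seeded end
    return (x[i1] - x[i0]) / (tspike[i1] - tspike[i0])

v8, v16 = wave_velocity(8.0), wave_velocity(16.0)
print(v8, v16)   # velocity increases with synaptic strength
```

Allowing repeated firing would only add extra terms to the synaptic gate behind the front, which, as argued above, barely changes the velocity when gsyn is large.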

3.3 Some comments on re-excitation.

Re-excitation occurs if, after the wave of excitation passes, there is sufficient persistence of the synaptic current to overcome the after-hyperpolarization and/or refractory period of the neuron. In general, this is hard to study. One approach is to study a pair of mutually coupled excitatory cells and ask how a periodic solution emerges as a function of the time constant of the excitation or the synaptic strength. Preliminary results indicate that the transition from a pair of spikes (one cell fires, then the other, and then they both stop) to repetitive firing is complex and occurs via successive addition of spikes. Another approach, easier but less related to "realistic" biophysical models, is to consider a modified integrate-and-fire neuron. After the neuron fires, it is reset to 0 (or even some lower value, to "imitate" a refractory period).

Let −Vmin denote the potential to which the cell is reset after firing. Then after firing,

V(ξ) = −Vmin e^{−ξ/ν} + ∫_0^ξ e^{−(ξ−ξ′)/ν} Isyn(ξ′; ν) dξ′.   (3.18)

Thus, the requirement that no re-excitation takes place is equivalent to the requirement that V(ξ) < VT for all ξ. This allows us to set bounds on the parameters which guarantee the existence of a solution with a single spike. More generally, we write (3.18) as

V(ξ) = R(ξ/ν) + E(ξ; ν),

where R(ξ/ν) is the recovery to rest after a spike and E(ξ; ν) is the integral term in (3.18). In our present example, R(ξ/ν) = −Vmin e^{−ξ/ν}. Thus, the model begins to look like the spike-response model of Gerstner et al (1996). The function R typically begins large and negative and tends to 0 as ξ → ∞. The excitation function E starts at VT, rises to some maximum, and then decays to 0. The sharper the alpha function and the shorter the spatial interactions, the faster it reaches the decay phase. Thus, for fast local synapses, E will tend to 0 rapidly. Re-excitation can occur only if

VT − R(ξ/ν) = E(ξ; ν)

for some value of ξ. If the recovery period is long, that is, if R is either very negative or very slow, then VT − R will be much larger than E, and no re-excitation can occur. For the example integrate-and-fire models above, these curves can be calculated explicitly, although the resulting expressions are unwieldy and yield no simple closed-form conditions.
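Even without closed forms, the condition is easy to check numerically: evaluate R and E on a grid and look for a sign change of the margin VT − R − E. A sketch with an illustrative, exponentially persisting synaptic current (all parameter values hypothetical):

```python
import numpy as np

# Re-excitation test: the cell fires again iff VT - R(xi/nu) = E(xi; nu)
# for some xi > 0, i.e. iff the margin VT - R - E dips to zero or below.
VT, nu = 1.0, 1.0
xi = np.linspace(0.0, 10.0, 4001)
dxi = xi[1] - xi[0]

# Illustrative persistent synaptic current left behind by the wave:
Isyn = 3.0 * np.exp(-xi / 2.0)

# E solves dE/dxi = -E/nu + Isyn (the integral term of (3.18)); Euler step.
E = np.zeros_like(xi)
for i in range(1, len(xi)):
    E[i] = E[i-1] + dxi * (-E[i-1] / nu + Isyn[i-1])

def min_margin(Vmin):
    R = -Vmin * np.exp(-xi / nu)     # recovery from the reset -Vmin
    return np.min(VT - R - E)

shallow, deep = min_margin(0.5), min_margin(5.0)
print(shallow, deep)   # negative => re-excitation; positive => single spike
```

For this current, a shallow reset is re-excited while a deep reset (a caricature of a strong after-hyperpolarization) is not, exactly as the argument above predicts.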


3.4 Gated synapses.

The methods derived in this section, both the asymptotics and the boundary value methods, assume that the synapses can be described by prescribed functions such as exponentials. This allows us to solve a nonautonomous boundary value problem for the traveling wave. However, this approach will not work for systems in which the synaptic gates are themselves differential equations that depend on the potential of the presynaptic cell, such as the first-order model (2.4). Instead, we must solve a functional equation on the real line:

Cν dV/dξ = −Iion − gsyn (∫_{−∞}^{∞} w(ξ − ξ′) s(ξ′) dξ′)(V − Vsyn)   (3.19)

ν ds/dξ = f(s, V)   (3.20)

as well as the gating variables. We want to choose ν such that at ξ = ±∞ all the variables tend to rest and V(0) = VT. This is analogous to the type of equation which must be solved to find traveling pulses on the cable (see Murray, 1989). However, for the cable the interaction is via diffusion, and in the traveling coordinate system the resulting system is just an ordinary differential equation. In the present case, the voltage equation (3.19) requires global information about the synaptic function s(ξ), so standard numerical methods cannot be readily employed. There is an approximate remedy to this problem. Suppose that the parameters for the synapse equation (2.4) are fixed. Then a simulation of a single cell that has been excited to produce a spike will reveal the time course of s(t). This can then be fitted to a known functional form, α(t), which can then be used to solve the nonautonomous system we have previously described. For example, we have tried this approach with the synaptic equation

ds/dt = [20/(1 + exp(−(V − vT)/2))](1 − s) − βs.

The solution to this, s(t), is nicely fitted by the double exponential function

α(t) = A(e^{−at} − e^{−bt}).

With vT = −30 and β = 1 in the synaptic differential equation, we find that A = 1.4, a = 1, b = 10 gives a close fit in shape. The waves and velocity that result from simulating the gated-synapse model and the fitted-function model are indistinguishable.

4 Discussion

We have developed some simple models, methods, and asymptotics for the computation of wave speeds in excitatorily coupled systems. The systems we have considered are based on ionic conductance models with either alpha function or gated synapses. There have been a number of related studies providing expressions for the velocity of traveling waves in neural networks. The main class of models assumes piecewise constant or linear interactions between networks of firing rate models. For example, Ermentrout and McLeod (1993) and Idiart and Abbott (1993) consider systems of the form

τ du(x, t)/dt = −u(x, t) + ∫_{−∞}^{∞} w(x − y) F(u(y, t)) dy   (4.1)

where F(u) is a "squashing" function and −u + F(u) has three roots, two of which are stable. These authors derive formulae for wavefronts which join the two stable rest states. For example, if F(u) = gH(u − uT), where H is the Heaviside step function and w is the exponential function, the velocity of the wave fronts is

ν = λ(g − 2uT)/(2uT)

where we have defined λ = 1/τ. Thus, as in our model with purely exponential coupling, the velocity and the coupling strength g are related in a linear fashion. (If we replace the first-order temporal integration by a second-order one, the velocity and the coupling are related in a square-root fashion.) The simple neural net models and the biophysically based neuron models have much the same behavior as the coupling strength is varied. Note that there is only one velocity for any set of parameters, in contrast to the firing models, where there is a fast and a slow velocity.

4.1 Adaptation and fatigue

Chen et al (1997a,b) derive models of the form

du(x, t)/dt = F(∫_{−∞}^{∞} w(x − y) u(y, t) dy)(1 − u) − u   (4.2)

from a biophysical model for GABA-coupled thalamic neurons. As above, the velocity can be explicitly computed when F is proportional to a Heaviside step function, and results similar to ours are found. By using the same averaging methods, one can derive a similar set of reduced equations for synaptic excitation (see Ermentrout, 1994). The problem with the averaging methods is that they require the synapses to be slow, which is not generally the case for normal glutamatergic synapses. However, a system in high magnesium with AMPA blocked could be a candidate for this type of reduction. The reduction has the added advantage that it is easy to incorporate adaptation or synaptic fatigue into the models. The analogue of (4.2) with adaptation is

du(x, t)/dt = F(∫_{−∞}^{∞} w(x − y) u(y, t) dy − bz)(1 − u) − u

τ dz(x, t)/dt = F(∫_{−∞}^{∞} w(x − y) u(y, t) dy − bz)(1 − z) − z.

The additional variable z is the gating variable for the adaptation, and b is a positive parameter proportional to the strength of adaptation. Simulations of the reduced model produce a smoothed version of Figure 2C (that is, instead of spikes, the envelope of spikes is seen). Synaptic fatigue can also be incorporated into these spike-averaged models as long as it is regarded as a slow process. Amari (1977) considers a model like (4.1) but adds a negative feedback term representing a local inhibitory population of neurons:

du(x, t)/dt = −u(x, t) + ∫_{−∞}^{∞} w(x − y) F(u(y, t)) dy − av(x, t)

dv(x, t)/dt = R(−u(x, t) + cv(x, t)).

He constructs a traveling pulse solution to this when F is the Heaviside function. David Pinto (Ph.D. thesis) has also constructed traveling pulses for this model when F is general but R is very small. The key point in both the Amari and Pinto calculations is that the velocity of the pulse is largely independent of the inhibition and is well approximated by that of the initial excitation front. These two models represent "neural network" or population models. Thus, a front of activity is equivalent to the picture shown in Figure 1b, while the pulse is analogous to that shown in Figure 2c.
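The qualitative difference between a front and a pulse is easy to see numerically. The sketch below adds a slow linear negative-feedback variable to the Heaviside network of (4.1); the feedback law v_t = R(u − v) is a simplified stand-in for the inhibitory population, and all parameter values are illustrative. Behind the advancing front the feedback shuts activity off, leaving a traveling pulse:

```python
import numpy as np

# Heaviside network with slow negative feedback:
#   u_t = -u + w * H(u - uT) - a*v,   v_t = R*(u - v)
uT, a, R = 0.2, 5.0, 0.05
dx, dt = 0.1, 0.02
x = np.arange(0.0, 120.0, dx)
xk = np.arange(-10.0, 10.0 + dx, dx)
w = 0.5 * np.exp(-np.abs(xk))

u = np.where(x < 5.0, 1.0, 0.0)
v = np.zeros_like(u)

for step in range(int(50.0 / dt)):
    H = (u > uT).astype(float)
    conv = np.convolve(H, w, mode="same") * dx
    u += dt * (-u + conv - a * v)
    v += dt * R * (u - v)

active = x[u > uT]
print(active.min(), active.max())  # active band detached from the left end
```

Because the feedback is slow (R small), the leading edge still travels at essentially the front speed of (4.1), which is the Amari/Pinto observation quoted above.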


4.2 Other modes of propagation

In some recent analysis, Rinzel et al (in preparation) consider a biophysical model of the form

dV/dt = −gL(V − VL) − gCa m∞(V) h (V − VCa) − gsyn ∫_{−∞}^{∞} w(x − y) s(y, t) dy (V − Vsyn)

dh/dt = ε(ah(V)(1 − h) − bh(V)h)

ds/dt = K(V)(1 − s) − s.

Like our models, this has a single stable rest state. However, the synapse is hyperpolarizing: when the synapse is on, the neurons are more hyperpolarized than they are at rest. Once the synaptic inhibition disappears, the cells fire a rebound burst. Since ε is small, the voltage remains high for a long period of time and thus can hyperpolarize neighboring cells enough for them to rebound. The result of simulating this model is a "lurching" wave which does not progress smoothly forward. The velocity of these lurching waves depends on the rate at which the inhibition wears off, not the rate at which it turns on. Thus, these waves are several orders of magnitude slower than waves of excitation. Golomb et al (1996) describe the behavior of lurching waves in a model set of equations, and they conjecture that the scaling of the velocity with synaptic conductance is intimately related to the form of the footprint. Rinzel et al have noted that if the coupling function w(x) has a gap near x = 0, it is possible to obtain smooth traveling pulses. However, the relationship of the velocity to the other parameters is much more complicated than in our case, since the voltage pulse is produced by the wearing off of the synapse rather than its onset, as it is here.


4.3 Re-excitation and multiple pulses

The present paper ignores the consequences of re-excitation. We have explored this numerically and have found that the velocity is almost independent of whether or not the medium is re-excited. However, there remains the difficult question of what determines whether or not the system is re-excited. Furthermore, we still have the open question of what determines the spatial and temporal properties of the waves left in the wake of the re-excitation. Golomb and Amitai (1997) have observed that the burst of propagating spikes is translation invariant. Our simulations (notably Figures 1b and 2c) confirm this result. Thus, it is likely that we will be able to answer this question mathematically in the near future. Golomb and Amitai also notice a multi-stability in the number of spikes that are produced during a traveling burst of activity such as in Figure 2c. This is an example of the mathematical problems faced when looking for waves with more than one spike. Consider the following "experiment": we apply a constant current at one end of the tissue in the parameter regime where a single spike is emitted. If the applied current lasts long enough, then the end will be re-excited (due to the stimulus rather than the recurrent excitation), resulting in another propagating pulse. Terminating the applied current leaves a pair of propagating pulses. One could obviously insert multiple pulses into the medium. Thus, it is hard to separate this effect from spontaneously re-excited tissue. It is well known from the work on diffusive waves that an "excitable" medium has a one-parameter family of traveling periodic wave trains as solutions (Murray, 1989). Thus, even in the absence of re-excitation, there is the possibility of getting multiple pulses in the tissue. When re-excitation does occur, the medium is essentially bistable and admits both a stable fixed point and an oscillatory solution. (The system is bistable at least on the time scale of the velocity; slower processes such as adaptation and synaptic fatigue will ultimately terminate the activity.) In a recent paper (Ermentrout et al, 1997), we studied wave fronts joining a stable fixed point to an oscillatory wave in a reaction-diffusion model. There we were able to show that a unique traveling wave is selected and that it depends on the details of the coupling and the model kinetics. We expect that a similar result will hold in these models.

Acknowledgments. Many of the early simulations of the discrete models were done by Julie Shafer, an undergraduate summer research student, as part of her summer project. I also thank David Golomb for many useful discussions about waves and models.

5 Appendix

5.1 The model equations.

We have used two different models for membrane dynamics. Both have a leak, a delayed rectifier, and a sodium channel:

C dV/dt = −gL(V − VL) − gK n^4 (V − VK) − gNa m^3 h (V − VNa) + I − Isyn.

The variables m, n, h obey dynamics of the form

m′ = am(V)(1 − m) − bm(V)m.

For the "Traub" model, we take VK = −100 mV, VNa = 50 mV, VL = −67 mV, C = 1 μF/cm², gNa = 100, gK = 80, gL = .2. (All conductances are in mS/cm².) The functions are:

am(v) = .32(54 + v)/(1 − exp(−(v + 54)/4))

bm(v) = .28(v + 27)/(exp((v + 27)/5) − 1)
ah(v) = .128 exp(−(50 + v)/18)
bh(v) = 4/(1 + exp(−(v + 27)/5))
an(v) = .032(v + 52)/(1 − exp(−(v + 52)/5))
bn(v) = .5 exp(−(57 + v)/40).
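As a quick sanity check, these rate functions can be dropped into a single-compartment Euler simulation; with a small depolarizing current the model spikes. A minimal sketch (the injected current value is illustrative):

```python
import numpy as np

# "Traub" membrane: C dV/dt = -gL(V-VL) - gK n^4 (V-VK) - gNa m^3 h (V-VNa) + I
C, gL, gK, gNa = 1.0, 0.2, 80.0, 100.0
VL, VK, VNa = -67.0, -100.0, 50.0

am = lambda v: 0.32*(54+v)/(1-np.exp(-(v+54)/4))
bm = lambda v: 0.28*(v+27)/(np.exp((v+27)/5)-1)
ah = lambda v: 0.128*np.exp(-(50+v)/18)
bh = lambda v: 4/(1+np.exp(-(v+27)/5))
an = lambda v: 0.032*(v+52)/(1-np.exp(-(v+52)/5))
bn = lambda v: 0.5*np.exp(-(57+v)/40)

I = 2.0                        # illustrative depolarizing current
dt, T = 0.01, 100.0
V = -67.0                      # start near rest, gates at steady state
m = am(V)/(am(V)+bm(V)); h = ah(V)/(ah(V)+bh(V)); n = an(V)/(an(V)+bn(V))
Vmax = V
for _ in range(int(T/dt)):
    dV = (-gL*(V-VL) - gK*n**4*(V-VK) - gNa*m**3*h*(V-VNa) + I)/C
    m += dt*(am(V)*(1-m) - bm(V)*m)
    h += dt*(ah(V)*(1-h) - bh(V)*h)
    n += dt*(an(V)*(1-n) - bn(V)*n)
    V += dt*dV
    Vmax = max(Vmax, V)
print(Vmax)   # a spike should overshoot 0 mV
```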

The "Traub" model with an M current has an additional current,

gm q(V − VK),

where

q′ = (q∞(v) − q)/τq(v)

q∞(v) = 1/(1 + e^{−(v+35)/10})

τq(v) = τ̄q/(3.3 e^{(v+35)/20} + e^{−(v+35)/20}).

In the simulations with "fast" adaptation (Fig 2a), gm = 3 and τ̄q = 100, while for "slow" adaptation (Fig 2c), τ̄q = 400. For the Hodgkin-Huxley model, we use a version in which the rest state has been shifted to 0 mV. For this model VK = −12, VNa = 115, VL = 10.5989, gNa = 120, gK = 36, gL = .3. The functions are:

am(v) = .1(25 − v)/(exp(.1(25 − v)) − 1)
bm(v) = 4.0 exp(−v/18)
ah(v) = .07 exp(−v/20)
bh(v) = 1.0/(exp(.1(30 − v)) + 1.0)
bn(v) = .125 exp(−v/80)
an(v) = .01(10 − v)/(exp(.1(10 − v)) − 1.0).

5.2 Numerical boundary value solutions.

We have used the package XPPAUT and the numerical package AUTO to solve the traveling wave equations by treating them as a boundary value problem. This approach works only if the synaptic footprint is finite in extent. We assume, with no loss of generality, that the length of the footprint is 1. Recall from the discussion preceding (3.9) that we must solve

Cν dV/dξ = −Iion − gsyn stot(ξ; ν)(V(ξ) − Vsyn)

ν dq/dξ = aq(V)(1 − q) − bq(V)q

with the conditions

V(0) = VT,  V(−∞) = Vrest,  q(−∞) = qrest,

where

stot(ξ; ν) = ∫_0^∞ w(ξ − ξ′) α(ξ′/ν) dξ′ = ∫_{−1}^{ξ} w(ξ′) α((ξ − ξ′)/ν) dξ′.

(The form of the second integral is a consequence of the assumption that the function w(x) vanishes outside the interval [−1, 1].) For given functions α(t) and w(x), this integral is easily evaluated; we only need its value on the interval [−1, 0]. Since AUTO does not handle nonautonomous differential equations (which this is, due to the forcing function stot(ξ)), we add additional variables to the system of differential equations. Thus, we solve

dV/dz = (−Iion − gsyn stot(ξ; ν)(V(ξ) − Vsyn))/(Cν)

dq/dz = (aq(V)(1 − q) − bq(V)q)/ν

dξ/dz = −1

dν/dz = 0

on the interval z ∈ (0, 1), subject to

V(0) = VT,  V(1) = Vrest,  q(1) = qrest,  ξ(0) = 0.

Note that if the original system has n variables (a voltage and n − 1 recovery variables), then the new system has n + 2 equations and n + 2 conditions. We have eliminated the velocity as a parameter and added it as another "variable" of the differential equations. Since we have rescaled the problem to lie on the interval (0, 1), we can use AUTO to continue the solution as some parameter varies, once we have found a solution for one value of the parameter. The initial solution is found by shooting and is refined with the shooting algorithm in XPPAUT.
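The shooting step is easy to reproduce for a caricature of the problem. Below, the membrane is reduced to a leaky integrator and the footprint to a square pulse on [−1, 1] with an exponential α, so stot has a closed form; ν is then found as a root of V(0; ν) − VT. All parameter values are illustrative, not those of the full ionic models:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Leaky-integrator caricature of the traveling wave problem:
#   C*nu*dV/dxi = -gL*(V - Vrest) - gsyn*stot(xi; nu)*(V - Vsyn)
# with w(x) = 1/2 on [-1, 1] and alpha(t) = exp(-t), so that
#   stot(xi; nu) = (nu/2)*(1 - exp(-(xi + 1)/nu))  for xi in [-1, 0],
# and stot vanishes for xi <= -1 (finite footprint), so V(-1) = Vrest.
Cm, gL, gsyn = 1.0, 0.1, 2.0
Vrest, VT, Vsyn = -70.0, -55.0, 0.0

def V_at_zero(nu):
    def rhs(xi, y):
        stot = 0.5 * nu * (1.0 - np.exp(-(xi + 1.0) / nu))
        return [(-gL * (y[0] - Vrest) - gsyn * stot * (y[0] - Vsyn)) / (Cm * nu)]
    sol = solve_ivp(rhs, (-1.0, 0.0), [Vrest], rtol=1e-8, atol=1e-8)
    return sol.y[0, -1]

# Shoot on nu: the wave speed is a root of V(0; nu) = VT.
nu_star = brentq(lambda nu: V_at_zero(nu) - VT, 0.1, 5.0)
print(nu_star)
```

In the full problem the same root, once found by shooting, seeds the AUTO continuation of the (n + 2)-dimensional boundary value problem described above.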

References

[1] Amari, S.-I. (1977) Dynamics of pattern formation in lateral-inhibition type neural fields, Biological Cybernetics, 27:77-87.

[2] Burns, B.D. (1950) Some properties of the cat's isolated cerebral cortex, J. Physiol. Lond., 111:50-68.

[3] Chagnac-Amitai, Y. and Connors, B.W. (1989) Horizontal spread of synchronized activity in neocortex and its control by GABA-mediated inhibition, J. Neurophysiol., 61:747-758.

[4] Chen, Z., Ermentrout, B. and Wang, X.-J. (1997a) Wave propagation mediated by GABA-A synapse and rebound excitation in an inhibitory network: a reduced model approach, J. Comput. Neurosci., (in press).

[5] Chen, Z., Ermentrout, B. and McLeod, J.B. (1997b) Traveling fronts for a class of non-local convolution differential equations, Applicable Analysis, (in press).

[6] Chervin, R.D., Pierce, P.A., and Connors, B.W. (1988) Periodicity and directionality in the propagation of epileptiform discharges across neocortex, J. Neurophysiol., 60:1695-1713.

[7] Destexhe, A., Mainen, Z.F., and Sejnowski, T.J. (1994) Synthesis of models for excitable membranes, synaptic transmission, and neuromodulation using a common kinetic formalism, J. Comput. Neurosci., 1:195-230.

[8] Destexhe, A., Bal, T., McCormick, D.A. and Sejnowski, T.J. (1996) Ionic mechanisms underlying synchronized oscillations and propagating waves in a model of ferret thalamic slices, J. Neurophysiol., 76:2049-2070.

[9] Doedel, E. (1986) AUTO: Software for continuation and bifurcation problems in ordinary differential equations, CalTech Technical Document.

[10] Ermentrout, G.B. (1994) Reduction of conductance-based models with slow synapses to neural nets, Neural Computation, 6:679-695.

[11] Ermentrout, G.B. and McLeod, J.B. (1993) Existence and uniqueness of traveling waves for a neural network, Proc. Roy. Soc. Edinburgh, 123A:461-478.

[12] Ermentrout, G.B., Chen, X., and Chen, Z. (1997) Transition fronts and localized structures in bistable reaction-diffusion equations, Physica D, (in press).

[13] Gerstner, W., van Hemmen, J.L. and Cowan, J.D. (1996) What matters in neuronal locking? Neural Computation, 8:1653-1676.

[14] Golomb, D., Wang, X.J. and Rinzel, J. (1996) Propagation of spindle waves in a thalamic slice model, J. Neurophysiol., 75:750-769.

[15] Golomb, D. and Amitai, Y. (1997) Propagating neuronal discharges in neocortical slices: computational and experimental study, J. Neurophysiol., (in press).

[16] Idiart, M.A.P. and Abbott, L.F. (1993) Propagation of excitation in neural network models, Network, 4:285-294.

[17] Kim, U., Bal, T., and McCormick, D.A. (1995) Spindle waves are propagating synchronized oscillations in the ferret LGN in vitro, J. Neurophysiol., 74:1301-1323.

[18] Kleinfeld, D., Delaney, K.R., Fee, M.S., Flores, J.A., Tank, D.W. and Gelperin, A.K. (1994) Dynamics of propagating waves in the olfactory network of a terrestrial mollusk: an electrical and optical study, J. Neurophysiol., 72:1402-1419.

[19] Koch, C. and Segev, I. (1989) Methods in Neuronal Modeling, chapt. 5, MIT Press.

[20] Murray, J.D. (1989) Mathematical Biology, Springer, New York.

[21] Petsche, H., Prohaska, O., Rappelsberger, P., Vollmer, R. and Kaiser, A. (1974) Cortical seizure patterns in the multidimensional view: the information content of equipotential maps, Epilepsia, 15:439-463.

[22] Traub, R.D., Jefferys, J.G.R. and Miles, R. (1993) Analysis of propagation of disinhibition-induced after-discharges along the guinea-pig hippocampal slice in vitro, J. Physiol. Lond., 472:267-287.

[23] Wang, X.J. and Rinzel, J. (1992) Alternating and synchronous rhythms in reciprocally inhibitory model neurons, Neural Computation, 4:84-97.

[24] Wilson, H.R. and Cowan, J.D. (1973) A mathematical theory for the functional dynamics of cortical and thalamic tissue, Kybernetik, 13:55-80.
