Biol Cybern (2006) 95:185–192 DOI 10.1007/s00422-006-0076-6
ORIGINAL PAPER
An algorithmic method for reducing conductance-based neuron models Michael E. Sorensen · Stephen P. DeWeerth
Received: 21 November 2005 / Accepted: 25 April 2006 / Published online: 21 June 2006 © Springer-Verlag 2006
Abstract Although conductance-based neural models provide a realistic depiction of neuronal activity, their complexity often limits effective implementation and analysis. Neuronal model reduction methods provide a means to reduce model complexity while retaining the original model’s realism and relevance. Such methods, however, typically include ad hoc components that require that the modeler already be intimately familiar with the dynamics of the original model. We present an automated, algorithmic method for reducing conductance-based neuron models using the method of equivalent potentials (Kelper et al., Biol Cybern 66(5): 381–387, 1992) Our results demonstrate that this algorithm is able to reduce the complexity of the original model with minimal performance loss, and requires minimal prior knowledge of the model’s dynamics. Furthermore, by utilizing a cost function based on the contribution of each state variable to the total conductance of the model, the performance of the algorithm can be significantly improved.
M. E. Sorensen · S. P. DeWeerth (B)
Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

1 Introduction

Selecting the level of complexity required for a neuronal model is a difficult problem. A simplified model may facilitate implementation and analysis, but may not fully reproduce the behavior that is necessary for understanding neuronal function. A rich, physiologically realistic model may provide a great deal of biological realism and relevance, but may be so complicated that model implementation and analysis become intractable. The selection of an appropriate level of complexity is further complicated by the context in which the model is to be used; complex phenomena that are important in single-neuron modeling studies may prove to be irrelevant when that model is incorporated into a larger system. In order to determine the appropriate level of model complexity, it is often useful to employ a scheme of model reduction. Such reduction can decrease the complexity of the model while still retaining much of its mechanistic realism and scientific utility.

Several different reduction schemes have been proposed for reducing the complexity of conductance-based neuron models. Most of these schemes rely on separating the model into multiple subsystems, and then reducing the complexity of the subsystems or their interaction. Such methods include quantifying the interaction between different subsystems (Butera et al. 1996), or replacing the dynamics of one subsystem with a simplified mechanism (Breen et al. 2003). One method for combining multiple variables into a single equivalent variable is the method of equivalent potentials (Kepler et al. 1992). In this method, a nonlinear transformation is applied to the state variables of the model in order to reveal underlying similarities between the state variables. These transformed state variables, called equivalent potentials, are then formed into groups based on their similarity, and each group is replaced with a new variable that approximates the function of the original state variables. This method has been successfully employed to reduce the complexity of several neuronal models, including an augmented Hodgkin–Huxley model (Kepler et al. 1992; Connor
et al. 1977; Hodgkin and Huxley 1952) and a crustacean LP cell model (Buchholtz et al. 1992; Golomb et al. 1993). One drawback to the method of equivalent potentials is that the recognition of appropriate groupings of equivalent potentials often relies on the modeler's familiarity with the dynamics of the model. In practice, this results in an ad hoc assessment of the similarities between the variables (Golomb et al. 1993). This reliance on prior experience with the model fundamentally limits the potential for automation of the reduction process, decreases the accessibility of the technique, and reduces the justifiability of the reduction process.

We describe an algorithmic method for assigning appropriate groupings of state variables based on their equivalent potentials. This algorithm relies on calculating a linear fit between all possible pairs of equivalent potentials, and using the error of this fit to create a prioritized list of substitutions. We demonstrate how a weighting function, based on the contribution of each state variable to the membrane potential's evolution, can be applied to the calculation of the linear-fit error to improve the efficacy of the algorithm. We apply these methods to a model of a bursting neuron from the leech heartbeat neuronal network (Hill et al. 2001), and we demonstrate that the method produces a reduced model that generates rhythmic bursting activity that is qualitatively similar to the original model.
2 Description of algorithm

Before embarking on any scheme of model reduction, the modeler must first determine what features of the basis model are to be retained by the reduced model. An ideal reduced model would exactly duplicate all activities and behaviors of the basis model with a reduced number of variables and parameters. Of course, such a reduction is rarely, if ever, possible, so some criteria must be formulated in order to determine what constitutes an acceptable reduction. Our two reduction algorithms require that we first isolate Vm(t), the membrane potential trajectory of the basis model's behavior that the reduced model needs to reproduce, along with the trajectories of the basis model's state variables. Although Vm(t) may represent some intrinsic activity of the basis model, it could also represent some other behavior of the basis model, such as the response of the basis model to parameter perturbation or externally applied inputs. Hereafter, we refer to Vm(t) as the canonical activity of the basis model. The quantifiable characteristics of Vm(t) (such as spike
heights, equilibrium potentials, etc.), along with acceptable bounds, constitute the reduction target. The goal of our reduction algorithm is to create a reduced model that meets the defined reduction target. Because we desire an automated reduction, we do not allow any parameter tuning in order to achieve that reduction goal. Although parameter tuning may produce a better result, selecting appropriate parameters for tuning often requires intuition about the model's dynamics that hinders automation.

The reduction algorithm involves calculating the error of a linear fit between all possible pairs of equivalent potentials. In the basic implementation of the algorithm, equivalent potentials of the state variables are grouped by evaluating the total error of a linear fit between two equivalent potentials. In the weighted implementation of the algorithm, the error of the linear fit is weighted by a function based on the contribution of the associated state variable to the total conductance of the model. The error of the fits is then used to create an ordered list of equivalent-potential substitutions. We proceed through this list, evaluating the new model produced by each substitution to ensure that the model produces acceptable activity.

2.1 The equivalent potential transformation

The first step of the reduction algorithm is to isolate the canonical activity of the original model, along with the associated trajectories of the model's state variables. For each state variable trajectory, $m_i(t)$, we calculate the trajectory of the associated equivalent potential, $U_i(t)$. The steady-state value of the state variable, $m_{i\infty}$, is given by

$$m_{i\infty} = f_{i\infty}(V_m) \quad (1)$$

and the equivalent potential trajectory, $U_i(t)$, is given by

$$U_i(t) = f_{i\infty}^{-1}(m_i(t)) \quad (2)$$
The equivalent potential is therefore the value of the membrane potential at which the state variable's steady-state value would be equal to its current value. The great utility of the equivalent potential transform is that its nonlinearity reveals underlying similarities between the state variables. If the trajectories of two equivalent potentials are similar in scale and shape, then one equivalent potential can be substituted for the other; therefore, one state variable becomes an instantaneous function of another, and the complexity of the model is reduced. For example, if sufficient similarity is found between the equivalent potentials $U_i$ and $U_j$, then
$U_i$ could be substituted for $U_j$, and the associated state variable $m_j$ would be written

$$m_j = f_{j\infty}(U_i) \quad (3)$$
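For concreteness, a minimal sketch of the transform and substitution, assuming a sigmoidal steady-state function of the form $m_\infty(V) = 1/(1 + e^{a(V+b)})$ (the form used by most gating variables in the model of Sect. 3); the function names and parameters are illustrative only, not part of the original model:

```python
import numpy as np

def m_inf(v, a, b):
    """Sigmoidal steady-state function, m_inf(V) = 1 / (1 + exp(a*(V + b)))."""
    return 1.0 / (1.0 + np.exp(a * (v + b)))

def equivalent_potential(m, a, b):
    """Inverse of m_inf: the membrane potential at which the steady-state value
    would equal the current value of the state variable (Eq. 2)."""
    return np.log(1.0 / m - 1.0) / a - b

def substituted_state(U_i, A, B, a_j, b_j):
    """If U_j(t) is well approximated by A*U_i(t) + B, the state variable m_j
    becomes an instantaneous function of U_i (Eqs. 3 and 5)."""
    return m_inf(A * U_i + B, a_j, b_j)
```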
It is notable that the equivalent potentials can only be calculated for steady-state functions that are monotonic functions of the membrane potential. State variables whose steady-state functions are non-monotonic or non-invertible cannot be used with this reduction method.

2.2 The reduction algorithm

After the canonical activity has been isolated and the equivalent potential transform has been applied to the state variable trajectories, the next step is to perform a linear fit between every possible pair of equivalent potentials. For a model with N equivalent potentials, there are N(N − 1) possible pairs. For each pair of equivalent potential trajectories, $U_i(t)$ and $U_j(t)$, we calculate the values of $A_{i,j}$ and $B_{i,j}$ that minimize the error $\xi$, where

$$\xi^2 = \int_{t_1}^{t_2} \left[(A_{i,j} \times U_i(t) + B_{i,j}) - U_j(t)\right]^2 \mathrm{d}t \quad (4)$$

where $t_1$ and $t_2$ are the beginning and end times of the canonical activity. Essentially, we are finding the best-fit line (in a least-squares sense) to the plot of $U_j(t)$ versus $U_i(t)$ in the $U_j$–$U_i$ phase plane. This fit provides the optimized coefficients $A_{i,j}$ and $B_{i,j}$ that allow us to best describe the state variable $m_j(t)$ using $U_i(t)$. A similar method has been used previously to approximate the activation of a calcium current using the activation of a delayed-rectifier potassium current (Golomb et al. 1993). The next step is to create a list of substitutions rank-ordered by the total error of each linear fit. Beginning with the pair with the least amount of total error, we implement the substitution

$$m_j = f_{j\infty}(A_{i,j} \times U_i + B_{i,j}) \quad (5)$$
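A compact sketch of this pairwise-fit and ranking step, assuming the equivalent-potential trajectories are sampled on a common time grid (so the integral in Eq. 4 becomes a sum); the actual implementation used Simulink, and the function name and data layout here are illustrative:

```python
import numpy as np

def rank_substitutions(U):
    """Fit U_j ~ A*U_i + B for every ordered pair of equivalent potentials and
    return the pairs sorted by total squared error (a discrete version of Eq. 4).

    U maps variable names to 1-D trajectories of equal length."""
    ranked = []
    for i in U:
        for j in U:
            if i == j:
                continue
            A, B = np.polyfit(U[i], U[j], 1)          # least-squares line fit
            err = np.sum((A * U[i] + B - U[j]) ** 2)  # total fit error
            ranked.append((err, i, j, A, B))
    ranked.sort(key=lambda r: r[0])                   # smallest error first
    return ranked
```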
This substitution produces a new model with one fewer state variable. We then test this new model to determine whether or not it meets the reduction target. If it does, the substitution is retained; if it does not, the substitution is discarded. We then proceed to the substitution with the next-smallest total error in the associated linear fit, and repeat the process. Although the number of pairings scales quadratically with the number of equivalent potentials, two rules are used to reduce the number of potential pairings during the reduction process. First, if a variable has already been substituted by a previous
pairing, the greater-error pairing is not made. For example, if $U_j$ has been substituted by $U_i$, then we ignore the greater-error pairing in which $U_j$ would be substituted by $U_k$. Second, if the basis of a substitution has itself been previously substituted, then the basis of the previous substitution is used as the basis for the latter substitution. For example, if we are substituting $U_j$ for $U_k$, but $U_j$ has already been substituted by $U_i$, then we use the associated substitution of $U_i$ for $U_k$. The substitution process continues until all possible substitutions have been exhausted. Alternatively, the reduction process could terminate after the number of consecutive failed substitutions exceeds some threshold. This termination assumes that the rank-ordering of substitutions by the error of the linear fit reflects the "goodness" of the substitutions. If several substitutions fail consecutively, it is reasonable to assume that greater-error substitutions will also fail.

2.3 Weighted error function

The performance of the algorithm described above can be improved by modifying the equation used to calculate the error of the linear fit (4). We developed an alternative weighting function based on the following reasoning: a substitution of one state variable for another will necessarily lead to an error in the calculation of the original state variable. The effect of this error will be related to the relative contribution of each state variable to the total conductance of the model. In order to evaluate the contribution of each state variable to the model's total conductance, we first pick a constant error $\varepsilon$. For each state variable, we sum $\varepsilon$ with the state variable's trajectory, bounding the sum to the allowable limits of the state variable (zero and one). For an individual ionic conductance $g_i$ given by

$$g_i = \bar{g}_i\, m_i^{p}\, h_i^{q} \quad (6)$$

where $\bar{g}_i$ is the maximal conductance, $m_i$ and $h_i$ are activation and inactivation state variables, respectively, and $p$ and $q$ are power terms (note that $q$ is equal to zero for a non-inactivating conductance), we calculate the error weighting function $W_{m_i}(t)$ for an activation state variable trajectory $m_i(t)$ as

$$W_{m_i}(t) = \left[\frac{\bar{g}_i\,(m_i(t)+\varepsilon)^{p}\,h_i(t)^{q} - \bar{g}_i\,m_i(t)^{p}\,h_i(t)^{q}}{\varepsilon\,G(t)}\right]^2 \quad (7)$$
where $G(t)$ is the trajectory of the total conductance of the neuron. For an inactivation state variable trajectory, $h_i(t)$, the weighting function is calculated as
$$W_{h_i}(t) = \left[\frac{\bar{g}_i\,m_i(t)^{p}\,(h_i(t)+\varepsilon)^{q} - \bar{g}_i\,m_i(t)^{p}\,h_i(t)^{q}}{\varepsilon\,G(t)}\right]^2 \quad (8)$$
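A small computational sketch of these weighting functions; it assumes that the perturbed term retains the complementary gating factor (the printed layout of Eqs. 7 and 8 leaves this slightly ambiguous) and that the perturbed gate is clipped to [0, 1] as described above:

```python
import numpy as np

def weight_activation(g_bar, m, h, p, q, G, eps=0.01):
    """W_m(t), Eq. (7): squared change in the ionic conductance caused by
    perturbing the activation gate by eps, relative to eps * G(t)."""
    m_pert = np.clip(m + eps, 0.0, 1.0)        # bound the perturbed gate to [0, 1]
    dg = g_bar * m_pert**p * h**q - g_bar * m**p * h**q
    return (dg / (eps * G)) ** 2

def weight_inactivation(g_bar, m, h, p, q, G, eps=0.01):
    """W_h(t), Eq. (8): the analogous weighting for an inactivation gate."""
    h_pert = np.clip(h + eps, 0.0, 1.0)
    dg = g_bar * m**p * h_pert**q - g_bar * m**p * h**q
    return (dg / (eps * G)) ** 2
```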
The weighting function therefore describes the relative influence of an individual state variable on the total conductance of the neuron through the trajectory of the canonical activity. This influence depends on the maximal conductance associated with the state variable, as well as on the trajectory of the associated activation or inactivation state variable. Now, the parameters $A_{i,j}$ and $B_{i,j}$ are chosen to minimize the error expression:

$$\xi^2 = \int_{t_1}^{t_2} W_i(t)\left[(A_{i,j} \times U_i(t) + B_{i,j}) - U_j(t)\right]^2 \mathrm{d}t \quad (9)$$
The total error of each pairing is then used to create a rank-ordered list of substitutions, and the algorithm then proceeds as before.
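Putting the pieces together, a sketch of the full greedy substitution loop under one reading of the two rules above; the model-building and target-testing steps are left as placeholders because they are specific to the model and to the chosen reduction target:

```python
def reduce_model(ranked, build_model, meets_target, max_failures=5):
    """Greedy reduction over a rank-ordered substitution list such as the one
    produced by rank_substitutions() (one reading of the rules in Sect. 2.2).

    build_model(basis_of, fits) -> candidate reduced model             (placeholder)
    meets_target(model)         -> True if the reduction target is met (placeholder)
    """
    fits = {(i, j): (A, B) for _, i, j, A, B in ranked}  # direct fit for every pair
    basis_of = {}      # j -> i : variable j is an instantaneous function of U_i
    failures = 0
    for error, i, j, _, _ in ranked:
        if failures >= max_failures:
            break                          # assume larger-error fits will also fail
        if j in basis_of:
            continue                       # rule 1: j already substituted by a better fit
        while i in basis_of:
            i = basis_of[i]                # rule 2: use the basis of i's own substitution
        if i == j:
            continue                       # degenerate pairing after redirection
        candidate = dict(basis_of, **{j: i})
        if meets_target(build_model(candidate, fits)):
            basis_of = candidate           # keep the substitution
            failures = 0
        else:
            failures += 1                  # discard it and move to the next pairing
    return basis_of
```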
3 Application of the algorithm

As an example, we apply our algorithm to a mathematical model of a bursting neuron. Our sole criterion for the reduction is that the reduced model exhibits intrinsic rhythmic bursting activity without any change in the basis model's parameters. The algorithm is able to produce a reduced model that meets this reduction criterion. By utilizing the weighted error function, however, it is able to produce a simpler model that better matches the characteristics of the original model.

3.1 The basis model and modeling methods

We apply our reduction algorithm to a mathematical model of the leech heart interneuron (Hill et al. 2001). This model consists of a single isopotential compartment with nine voltage-gated ionic currents and a passive leak current. The ionic currents present in the model are as follows:

1. INa, a fast-activating, fast-inactivating sodium current responsible for action potential generation.
2. IP, a persistent sodium current.
3. ICaF, a fast, low-threshold calcium current.
4. ICaS, a slow, low-threshold calcium current.
5. Ih, a slow, hyperpolarization-activated inward current.
6. IK1, a delayed rectifier-like potassium current.
7. IK2, a persistent potassium current.
8. IKA, a fast, transient potassium current.
9. Ileak, a passive leak current.
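For reference, currents of this kind enter a standard single-compartment membrane equation; the generic form below is only a reminder of that structure and is not reproduced from Hill et al. (2001), whose kinetic equations and parameters are more detailed:

$$C_m \frac{\mathrm{d}V_m}{\mathrm{d}t} = -\sum_i \bar{g}_i\, m_i^{p_i}\, h_i^{q_i}\,(V_m - E_i), \qquad G(t) = \sum_i \bar{g}_i\, m_i(t)^{p_i}\, h_i(t)^{q_i}$$

where $E_i$ is the reversal potential of current $i$ and $G(t)$ is the total conductance used in the weighting functions of Sect. 2.3.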
The model used for our reductions differs from the original published model in the absence of the FMRF-activated potassium current IKF, and the absence of an externally injected current. The parameters used for the basis model are the same as those used in the original published model, with two exceptions: the reversal potential of the leak current is set to −62 mV, and the conductance of the leak current is set to 11 nS. These changes were made so that the model would produce endogenous rhythmic bursting activity (Cymbalyuk et al. 2002). This set of parameters is referred to hereafter as the canonical parameters, and the bursting activity of the basis model with these parameters constitutes the canonical activity of the model.

The basis model and the reduced models that follow were implemented in the Simulink modeling package (Mathworks). The equations were integrated using a forward Euler integrator with a fixed step size of 0.0001 s. A set of initial conditions was created by setting all state variables to 0.5 and the membrane potential to −50 mV, and allowing the simulation to progress for 400 s; the values of the state variables and membrane potential at the end of this simulation were taken as our initial conditions.

3.2 Application of the algorithm

Because we were interested in reproducing the bursting activity of the basis model, we isolated a single period of the canonical burst activity of the basis model. The first step of the algorithm is to compute the equivalent potential for each state variable. For all gating variables in the basis model (except $m_h$), the steady-state function takes the form

$$m_\infty(V_m) = \frac{1}{1 + e^{a(V_m + b)}} \quad (10)$$

where $a$ and $b$ are parameters and $V_m$ is the membrane potential. The equivalent potential $U_m$ is therefore given by

$$U_m(m) = \frac{1}{a}\ln\left(\frac{1}{m} - 1\right) - b \quad (11)$$

As originally published, the steady-state activation function of Ih is non-invertible (Hill et al. 2001); therefore, no equivalent potential can be constructed for $m_h$, and it is not considered with our reduction methods. The next step of the algorithm is to find the linear fit that minimizes the total error for each pair of equivalent potentials, and to rank-order the pairings based on the error of each fit. The phase plane plots of all pairs of equivalent potentials are shown in Fig. 1. After rank-ordering the pairings, we begin implementing the
[Fig. 1 Correlations between equivalent potentials of the HN model over one burst period: phase-plane plots of each pair of equivalent potentials (Vm, mNa, hNa, mP, mCaF, hCaF, mCaS, hCaS, mK1, hK1, mK2, mKA, hKA). Broken gray boxes indicate reductions determined by the non-weighted reduction algorithm; solid black boxes indicate reductions made by the weighted reduction algorithm.]
substitutions. After each substitution was made, the resulting new model was run for 100 s in order to verify that it produced acceptable bursting activity. We proceeded with the algorithm until we encountered five successive substitution failures. Using this algorithm, the following reductions were made to the original model:

1. $m_{Na} = m_{\infty Na}(V_m)$
2. $m_{KA} = m_{\infty KA}(U_{mP})$
3. $h_{K1} = h_{\infty K1}(2.79 \times U_{hCaS} + 0.09)$
4. $h_{Na} = h_{\infty Na}(1.01 \times U_{mP})$
These reductions produce a ten-state-variable model that generates rhythmic bursting activity at the canonical parameters (Fig. 2, Table 1). In order to test the weighting function, we constructed the error weighting functions as described by (7) and (8). We calculated the weighting functions for several different values of ε between −0.01 and 0.01; we noticed that
[Fig. 2 Result of the non-weighted reduction method. Intrinsic bursting at canonical parameters for the reduced model (gray trace) is compared to the full model's canonical activity (black trace); membrane potential (mV) versus time (s). Traces are aligned by the first spike of the burst.]
Table 1 Canonical burst data for the original model and the two reduced models

Model                    No. of state variables  Period (s)  Duty cycle (%)  Spikes per burst  Spike f (Hz)  Slow-wave trough (mV)  Slow-wave peak (mV)
Basis model              14                      6.7         37.8            12                4.7           −52.9                  −44.2
Non-weighted reduction   10                      5.4         27.0            6                 3.6           −52.3                  −46
Weighted reduction       7                       6.2         33.3            8                 4.1           −52.4                  −44.6
our choice of ε had very little effect on the resulting weighting functions. We ultimately used the weighting functions produced by a value of ε = 0.01. As an example, the weighting functions for mNa and mP are shown in Fig. 3. During a spike, an error in estimating mNa will create a relatively large error in the total conductance of the neuron; immediately after a spike, when INa is inactivated, an error in estimating mNa will have little or no effect on the total conductance of the neuron. Conversely, an error in estimating mP is more tolerable during the burst phase, and is most severe during the quiescent phase of the burst. Using the weighting functions for each state variable, we calculated the minimal-error linear fit for each pair of state variables. We then proceeded with the algorithm as in the non-weighted case, allowing 100 s of simulation time at each reduction step, and proceeding with reductions until five successive failures were encountered. By employing our weighting function, the algorithm yielded the following valid reductions:
[Fig. 3 Example of weighting functions used in the weighted reduction algorithm: error weight versus time (s) over one burst period for mNa and mP.]
1. $m_{Na} = m_{\infty Na}(V_m)$
2. $h_{Na} = h_{\infty Na}(1.05 \times U_{mP})$
3. $m_{CaF} = m_{\infty CaF}(0.99 \times U_{mP})$
4. $m_{CaS} = m_{\infty CaS}(0.98 \times U_{hCaF})$
5. $h_{K1} = h_{\infty K1}(1.07 \times U_{hCaF})$
6. $m_{KA} = m_{\infty KA}(U_{mP})$
7. $h_{KA} = h_{\infty KA}(0.95 \times U_{mP})$
Although the application of the weighting function produced many of the same substitutions produced without the weighting function, it also produced several additional reductions that the non-weighted algorithm did not suggest. The model produced by utilizing the weighting function contains seven state variables, and produces rhythmic bursting activity at canonical parameters (Fig. 4, Table 1).

3.3 Comparison of reduced models to basis model

Our reduced models both offer improved performance over the basis model. The basis model required 36.8 ± 0.1 s (mean ± standard deviation, n = 10 simulations) to compute 100 s of simulation time. The non-weighted reduction required 34.3 ± 0.1 s, an improvement of 7%
[Fig. 4 Result of the weighted reduction method. Intrinsic bursting at canonical parameters for the weighted reduced model (gray trace) is compared to the full model's canonical activity (black trace); membrane potential (mV) versus time (s). Traces are aligned by the first spike of the burst.]
from the basis model. Finally, the weighted reduction required only 30.2 ± 0.1 s, an 18% improvement in simulation time over the basis model.
[Fig. 5 Response of burst period of reduced models to variation of maximal conductances. Period of basis model (black line), weighted reduced model (solid gray line), and non-weighted reduced model (dashed gray line) are shown as maximal conductances are varied about the canonical parameter point. All maximal conductances are normalized to their canonical value.]
At canonical parameters, both reduced models produce rhythmic bursting activity. In the subthreshold region, the activity of each reduced model is almost identical to that of the basis model; during the burst phase, however, the activity of the reduced models is noticeably different from the basis model. The measurements of the canonical activity for each model are given in Table 1. In the non-weighted reduction, the burst period is 5.4 s, 80.6% of the canonical period of the basis model. The duty cycle, number of spikes per burst, and mean spike frequency are also lower than in the basis model. The weighted reduction produces bursts with a period of 6.2 s, which is 92.5% of the canonical period of the basis model. The values of the duty cycle, spikes per burst, and mean spike frequency are also smaller in the weighted reduction than in the basis model, although the differences are not as large as in the non-weighted reduction.

In addition to the reproduction of the reduction target, it is also desirable that our reduced models reproduce the behaviors of the basis model, even if these behaviors were not specified in the reduction target. In order to test how well the reduced models reproduce the effects of parameter variation, we varied each maximal conductance from 0 to 2.5 times its canonical value, in steps equal to 5% of the canonical value; the results are shown in Fig. 5. Around the canonical parameter point, the period of the reduced models shows the same dependence on the maximal conductances as the basis model. Two notable differences are that the ranges over which the models will produce rhythmic bursting differ, and that low values of $\bar{g}_{K1}$ in the reduced models produce bursting activity with a sharply reduced period, whereas the basis model does not produce such activity. The effects of the maximal conductances on duty cycle and mean spike frequency were very similar between the three models (data not shown).
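A brief sketch of this parameter sweep; the simulation and period-measurement routines are placeholders for the model-specific steps (in our case, simulating the Simulink model and measuring the interval between burst onsets):

```python
import numpy as np

def conductance_sweep(simulate, measure_period, conductance_names,
                      scales=np.arange(0.0, 2.55, 0.05)):
    """Vary each maximal conductance from 0 to 2.5x its canonical value in 5%
    steps and record the resulting burst period (cf. Fig. 5).

    simulate({name: scale}) -> membrane potential trace Vm(t)            (placeholder)
    measure_period(trace)   -> burst period in s, or None if no bursting (placeholder)
    """
    results = {}
    for name in conductance_names:
        results[name] = [(s, measure_period(simulate({name: s}))) for s in scales]
    return results
```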
4 Summary and discussion

We have created a neuronal model reduction algorithm based on the equivalent potential transformation that requires minimal prior knowledge of the original model's dynamics. This algorithm groups equivalent potentials together based on the error of a linear fit. In one implementation of this algorithm, a weighting function, based on the contribution of each state variable to the total conductance of the model, was applied to the
calculation of the error. Both the weighted and non-weighted implementations of the algorithm were able to reduce the complexity of a bursting neuron model, although the weighted implementation of the algorithm performed much better than the non-weighted implementation. Although our reduction criteria specified only that the reduced models produce rhythmic bursting activity, nearly all of the parameter sensitivities in the original model were preserved by the reduced models.

In both computational neuroscience and neural engineering, there are many instances in which the investigator does not care about the underlying dynamics of a neural model, only that the model produce some specific function. In order to facilitate implementation and analysis, it is desirable that the neural model be as simple as possible. An automated means of model reduction is a valuable tool because it allows investigators to adapt a previously existing model for their purposes, without extensive prior knowledge of the original model's dynamics.

Acknowledgements This work was supported by NIH R21 NS43098-01 to Ron Calabrese and Steve DeWeerth, and NINDS/NIMH/NIBIB NS046851 to Robert H. Lee.
References

Breen B, Gerken W, Butera R (2003) Hybrid integrate-and-fire model of a bursting neuron (in press)
Buchholtz F, Golowasch J, Epstein I, Marder E (1992) Mathematical model of an identified stomatogastric ganglion neuron. J Neurophysiol 67:332–340
Butera R Jr, Clark J Jr, Byrne J (1996) Dissection and reduction of a modeled bursting neuron. J Comput Neurosci 3(3):199–223
Connor J, Walter D, McKown R (1977) Neural repetitive firing: modifications of the Hodgkin–Huxley axon suggested by experimental results from crustacean axons. Biophys J 18:81–102
Cymbalyuk GS, Gaudry Q, Masino MA, Calabrese RL (2002) Bursting in leech heart interneurons: cell-autonomous and network-based mechanisms. J Neurosci 22(24):10580–10592
Golomb D, Guckenheimer J, Gueron S (1993) Reduction of a channel-based model for a stomatogastric ganglion LP neuron. Biol Cybern 69:129–137
Hill A, Lu J, Masino M, Olsen O, Calabrese R (2001) A model of a segmental oscillator in the leech heartbeat neuronal network. J Comput Neurosci 10(3):281–302
Hodgkin A, Huxley A (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544
Kepler T, Abbott L, Marder E (1992) Reduction of conductance-based neuron models. Biol Cybern 66(5):381–387