2012 5th International Conference on BioMedical Engineering and Informatics (BMEI 2012)

Competitive Behaviors of a Spiking Neural Network with Spike Timing Dependent Plasticity

Chengmei Ruan, Qingxiang Wu*, Lijuan Fan, Zhiqiang Zhuo, Xiaowei Wang
College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, China

Abstract—The spike timing dependent plasticity (STDP) learning rule is a hot topic in neurobiology, since it is widely believed that synaptic plasticity is the main contributor to learning and memory in the brain. STDP has been observed in a wide variety of brain areas, including the hippocampus and cortex. Competition among synapses is an important behavior of this learning rule. In the present study, we propose a single-layer spiking neural network model that uses the STDP learning rule on inhibitory synapses to investigate this competitive behavior. The experiments show that the synapses between neurons are all strengthened over the whole training process; the neurons thus inhibit one another's activity, and eventually the neuron with the highest input spike rate wins the competition. We found that the behavior is effective when the differences between the firing rates of input neurons without STDP are greater than 5 Hz; otherwise the winning neuron is random. To apply the principle to artificial intelligent systems, we use a dynamic learning rate mechanism so that the neuron with the highest input is always selected as the winner by the competitive behavior. A robust competitive spiking neural network is thereby obtained.

Keywords—spike timing dependent plasticity; spiking neural network; competitive learning; inhibitory synapse; dynamic learning rate

I. INTRODUCTION

It is widely believed that learning and memory storage in the brain depend mainly on the modification mechanisms of the synapses between neurons. In recent years it has been shown that synaptic modification depends on the temporal order of the pre- and postsynaptic spike times [1][2]. This mechanism, called spike timing dependent plasticity [3], has been observed in a wide variety of brain areas, including the hippocampus and cortex [5-13]. The most common form of STDP, found at excitatory synapses, is that a presynaptic neuron firing earlier than the postsynaptic neuron leads to long-term potentiation (LTP) of the synapse, while the reverse order induces long-term depression (LTD) [1]. This form of STDP captures the significance of causality in determining the direction of synaptic modification, which induces an important feature referred to as competitive behavior: different synapses compete with one another, so that some synapses onto the postsynaptic neuron are strengthened, and

others are weakened [3]. Reference [3] showed that the synaptic conductance distribution allows STDP to selectively strengthen synapses with strongly correlated inputs while weakening the others. With this result it is easy to separate strongly correlated inputs from uncorrelated inputs; however, it remains difficult to distinguish among the strongly correlated inputs themselves. The authors of [14] used a similar mechanism for a spatial filter, which, using the STDP of excitatory synapses, strengthened the synapse between a presynaptic neuron with a low firing rate and a postsynaptic neuron with a high firing rate, and weakened the reversely connected synapse. It is still difficult to distinguish neurons with similar firing rates. Recently, STDP has also been found at inhibitory synapses [10-13], and new experimental work has demonstrated that inhibitory synaptic plasticity plays a key role in brain development and function [15-17]. Here, we put forward a single-layer spiking neural network (SNN) model using an STDP learning rule on inhibitory synapses to realize a competition in which the neuron with the highest input current (firing rate) wins. The behavior is effective when the differences between the firing rates of the input neurons are greater than 5 Hz. To apply the principle to artificial intelligent systems, we use a mechanism called dynamic learning rate so that the neuron with the highest input is always selected as the winner by the competitive behavior; a robust competitive spiking neural network is thereby obtained. The rest of this paper is organized as follows: Section II presents the neuron model, the inhibitory STDP learning rule, and the proposed spiking neural network; the simulation results are shown and discussed in Section III; the final section gives the conclusion.

II. NEURON MODEL AND METHOD

A. Neuron model

In this paper the conductance-based integrate-and-fire (I&F) neuron model is used. It has been widely adopted in spiking neural networks for its computational simplicity, and its behavior is very close to that of the well-known Hodgkin-Huxley model. In the I&F model, the membrane potential of neuron i is determined by [18]

*Corresponding author
**This work is funded by the Natural Science Foundation of China (No. 6117901) and the Natural Science Foundation of Fujian Province (Grant No. 2011J01340)

978-1-4673-1184-7/12/$31.00 ©2012 IEEE


$$c_m \frac{dv_i(t)}{dt} = g_l\,(E_l - v_i(t)) + \sum_j \frac{w_{ji}\, g^s_{ji}(t)}{A_s}\,(E_s - v_i(t)) \qquad (1)$$

The parameters are explained as follows: $c_m$: specific membrane capacitance; $g_l$: specific membrane leak conductance; $E_l$: membrane reversal potential; $w_{ji}$: weight of the synapse from neuron j to i; $g^s_{ji}$: conductance of the synapse from neuron j to i, where the superscript s includes e and i, indicating an excitatory or inhibitory synapse respectively; $A_s$: membrane surface area connected to a synapse; $E_s$: reversal potential of the synapse. If the membrane potential $v_i$ exceeds the threshold voltage $v_{th}$, it is reset to $v_{reset}$ for a refractory period $\tau_{ref}$ and a spike is fired. The synaptic conductance $g^s_{ji}$ is a variable that changes as follows. When an action potential reaches the synapse:

$$g^s_{ji}(t + t^d_{ji} + dt) = g^s_{ji}(t + t^d_{ji}) + q_s \qquad (2)$$

where $t^d_{ji}$ is the delay for the action potential to reach the synapse from neuron j to i, and $q_s$ is the conductance increment following an action potential, based on the STDP learning rule. Otherwise, $g^s_{ji}$ decays according to the differential equation:

$$\frac{dg^s_{ji}(t)}{dt} = -\frac{1}{\tau_s}\, g^s_{ji}(t) \qquad (3)$$

where $\tau_s$ is the decay time constant of the synaptic conductance.

B. Spiking neural network

The spiking neural network (SNN) topology constructed for the simulation is shown in Fig. 1. The SNN consists of a single layer containing multiple neurons that are fully connected to one another by inhibitory synapses. This connection scheme is called a lateral inhibitory synapse connection. Each neuron is simultaneously a presynaptic and a postsynaptic neuron, so the neurons can affect one another. In Fig. 1, $w_{ji}$ denotes the weight of the synapse connected from neuron j to neuron i, $j \in \{1, 2, \ldots, n\}$, $j \neq i$, where n is the number of neurons in the input layer. For simplicity, "synapse ji" is used to indicate the synapse connected from neuron j to i. The weights of the inhibitory synapses are adjusted by the inhibitory STDP learning rule described below. Apart from the constant direct input current $I_c$, the input to the first-layer neurons also contains the inhibitory currents from other neurons in the same layer. So the constant direct input

Fig 1. Network topology consisting of inhibitory synapses

Fig 2. STDP learning rule of inhibitory synapse

currents are excitatory inputs that trigger the input neurons to fire, while the inhibitory currents refrain them from firing. When the neurons start to fire, they compete with one another by trying to suppress the others through inhibitory currents. A neuron stops firing when the inhibitory currents coming from the other neurons grow large enough. Finally, only one neuron remains active: that neuron wins the competition. The layer can therefore also be referred to as a competitive layer.
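The competition described above can be illustrated with a minimal sketch of the competitive layer with fixed inhibitory weights. All parameter values here are illustrative assumptions chosen for a stable discrete-time simulation, not the paper's exact units; each neuron integrates a constant excitatory current plus lateral inhibitory conductances toward the inhibitory reversal potential.

```python
# Minimal sketch: conductance-based I&F neurons with FIXED lateral inhibition.
# Parameter values are illustrative assumptions, not the paper's exact units.
dt, T = 0.1, 2000.0                                  # time step and duration (ms)
El, vth, vreset, Ei = -70.0, -60.0, -70.0, -75.0     # potentials (mV)
gl, cm, tau_s, tau_ref = 1.0, 10.0, 10.0, 3.0        # leak, capacitance, decay, refractory
n = 3
Ic = [15.0, 18.0, 25.0]                              # constant excitatory currents; neuron 2 highest
w = [[0.0 if i == j else 0.5 for i in range(n)] for j in range(n)]  # fixed inhibitory weights
g = [[0.0] * n for _ in range(n)]                    # inhibitory conductances g[j][i]
v = [El] * n
refr = [0.0] * n
spikes = [0] * n

for _ in range(int(T / dt)):
    fired = [False] * n
    for i in range(n):
        if refr[i] > 0:                              # refractory: clamp at reset potential
            refr[i] -= dt
            v[i] = vreset
            continue
        inhib = sum(w[j][i] * g[j][i] * (Ei - v[i]) for j in range(n) if j != i)
        v[i] += (gl * (El - v[i]) + inhib + Ic[i]) * dt / cm
        if v[i] >= vth:                              # threshold crossing -> spike
            fired[i] = True
            v[i] = vreset
            refr[i] = tau_ref
            spikes[i] += 1
    for j in range(n):                               # conductance decay, jump on presynaptic spike
        for i in range(n):
            if i != j:
                g[j][i] -= g[j][i] / tau_s * dt
                if fired[j]:
                    g[j][i] += w[j][i]               # q_s proportional to the weight (scale 1)
```

Running this sketch, the neuron with the lowest current is suppressed first, but with fixed equal weights the competition need not end with a single winner, matching the fixed-weight experiment discussed later.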

C. Spike timing dependent plasticity learning rules

The STDP of inhibitory synapses (IN-STDP) is shown in Fig. 2. It was first observed in hippocampal cultures and acute hippocampal slices [10], where the authors found that a relative time interval Δt within ±20 ms leads to a persistent change in synaptic strength. The synaptic modification function is approximately expressed as [19]:

$$F(\Delta t) = 1.5\exp(-0.004(\Delta t)^2) - 0.5\exp(-0.007(\Delta t)^2) \qquad (4)$$

where $\Delta t = t_{post} - t_{pre}$ is the relative time interval between the postsynaptic neuron's spike time $t_{post}$ and the presynaptic neuron's spike time $t_{pre}$. The coefficients differ partly from those in [19]; they have been adjusted to match the curve from the biological findings shown in Fig. 2. The weight of synapse ji varies as follows:


$$w_{ji\_new} = w_{ji\_old} + \eta\, w_{ji\_old}\, F(\Delta t) \qquad (5)$$

where η is the learning rate, $w_{ji\_old}$ is the weight of synapse ji before adjustment by STDP, and $w_{ji\_new}$ is the weight after adjustment. In this neuron model, $q_s$ of synapse ji is proportional to the weight $w_{ji}$; in the present work the scaling factor is set to 1, so $q_s$ equals $w_{ji}$.

The parameter values used in the present work are: $c_m = 10\,\mathrm{nF/mm^2}$, $g_l = 1.0\,\mathrm{\mu S/mm^2}$, $E_l = -70\,\mathrm{mV}$, $t^d_{ji} = 0$, $v_{reset} = -70\,\mathrm{mV}$, $A_i = 0.014103\,\mathrm{mm^2}$, $E_i = -75\,\mathrm{mV}$, $v_{th} = -60\,\mathrm{mV}$, $\tau_i = 10\,\mathrm{ms}$, $\tau_{ref} = 3\,\mathrm{ms}$.

The simulation is initialized and advanced as follows:

a) Set the parameters $c_m$, $g_l$, $E_l$, $A_s$;
b) Initialize $g_{ji}$ and $v_i(t)$;
c) Calculate the conductance $g^s_{ji}$ of neuron i:

$$g^s_{ji}(t+dt) = g^s_{ji}(t) + \begin{cases} -\dfrac{1}{\tau_s} g^s_{ji}(t)\,dt + q_s & \text{for an action potential} \\[4pt] -\dfrac{1}{\tau_s} g^s_{ji}(t)\,dt & \text{otherwise} \end{cases} \qquad (8)$$
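The IN-STDP window of Eq. (4) and the weight update of Eq. (5) can be sketched directly; this is a minimal illustration using the coefficients given in the text and the trial-and-error learning rate of 0.2 mentioned in Section III.

```python
import math

def F(dt_ms):
    """Symmetric IN-STDP window of Eq. (4); dt_ms = t_post - t_pre in ms."""
    return 1.5 * math.exp(-0.004 * dt_ms ** 2) - 0.5 * math.exp(-0.007 * dt_ms ** 2)

def update_weight(w_old, dt_ms, eta=0.2):
    """Eq. (5): multiplicative weight update with learning rate eta."""
    return w_old + eta * w_old * F(dt_ms)
```

Note that the window is symmetric in Δt and strongest near coincident spikes, which is why the synapses between two mutually connected neurons are adjusted by the same amount.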

III. SIMULATION OF SPIKE NEURAL NETWORK

A. Input coding scheme

To apply this learning mechanism to image processing, the firing rate of an input neuron without the IN-STDP learning rule is used to indicate the image gray value. The grayscale has 255 levels, so the firing rates of the input neurons range from 41 Hz to 295 Hz; each frequency indicates one grayscale value, e.g. 41 Hz represents gray level 1 and 295 Hz represents gray level 255. For the simulation, the firing rate is transformed into a current. The transformation function from firing rate to current is:

$$f(x) = 2.256\times10^{-10}x^5 - 8.017\times10^{-8}x^4 + 1.05\times10^{-5}x^3 + 3.262\times10^{-4}x^2 + 0.02886\,x + 9.149 \qquad (6)$$

where x is the firing rate, ranging from 41 Hz to 295 Hz, and f(x) is the transformed input current. The coding begins at 41 Hz because the currents corresponding to rates below 41 Hz are too small and easily affected by noise. We obtained this function by polynomially fitting the excitatory currents $I_c$ corresponding to the frequencies from 41 Hz to 295 Hz, verified by trial and error. For simplicity, in the present paper Δt denotes the difference between the firing times of the presynaptic and postsynaptic neurons, ΔI denotes the difference in input current between two neurons, and Δf denotes the difference in initial firing rate between two neurons.
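The coding scheme maps a gray level to a firing rate and then, via the polynomial (6), to an input current. A small sketch follows; the linear gray-to-rate map is our assumption, inferred from the stated endpoints 1 → 41 Hz and 255 → 295 Hz.

```python
def gray_to_rate(gray):
    """Map gray level 1..255 to firing rate 41..295 Hz (assumed linear map)."""
    return gray + 40

def rate_to_current(x):
    """Eq. (6): polynomial fit from firing rate x (Hz) to input current."""
    return (2.256e-10 * x**5 - 8.017e-8 * x**4 + 1.05e-5 * x**3
            + 3.262e-4 * x**2 + 0.02886 * x + 9.149)
```

Over the coding range, higher gray values produce higher rates and larger input currents, so the brightest pixel in a neighborhood drives the strongest input neuron.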

B. Computer simulation procedures

In the simulation the direct input is a current, so the membrane potential $v_i(t)$ of neuron i can be calculated by the following equation:

$$v_i(t+dt) = v_i(t) + \frac{1}{c_m}\Big(g_l(E_l - v_i(t)) + \sum_j \frac{w_{ji}\, g^s_{ji}(t)}{A_s}(E_s - v_i(t)) + I_c\Big)\,dt \qquad (7)$$

where dt represents the simulation time step. The simulation in a time step dt can be carried out by the following procedures:

d) Calculate $v_i(t+dt)$ of neuron i:

$$v_i(t+dt) = v_i(t) + \frac{1}{c_m}\Big(g_l(E_l - v_i(t)) + \sum_j \frac{w_{ji}\, g^s_{ji}(t)}{A_s}(E_s - v_i(t)) + I_{ci}\Big)\,dt \qquad (9)$$

e) Judge the membrane potential: if $v_i(t+dt) \ge v_{th}$, an action potential event occurs, $q_s$ is calculated by (5), and neuron i falls into a refractory period $\tau_{ref}$. Otherwise, go back to procedure c) to begin a new simulation time step.

C. Simulation results

The network is shown in Fig. 1; 5 input neurons are chosen for the simulation of competitive behaviors. Five random currents obtained by (6) are directly input into the first-layer neurons. For comparison, the weights of the inhibitory synapses are first fixed and the network is trained without STDP. The firing states of the neurons are shown in Fig. 3(a). The neuron with the lowest input current is soon refrained from firing by the inhibitory currents from the other neurons, while the neurons with medium input currents remain active because the inhibitory currents are not large enough to suppress them: the inhibitory competition does not lead to only one winner. The behavior is very different when IN-STDP is used. The modification of the synapse, $q_s$, is calculated by (5). The learning rate η of traditional IN-STDP is set to 1, but we found that this value does not work very well, so η was set to 0.2 by trial and error. Fig. 3(b) shows the results of using IN-STDP: the neuron with the highest input current wins the competition by inhibiting the other neurons from firing. The IN-STDP observed in the hippocampus [10] has a symmetric STDP learning window, which means that when the firing-time interval Δt is within the LTP time window, the synapses between two neurons are both strengthened, and both weakened otherwise. After a period of competition, the weights of the synapses connecting two neurons are the same, so the total input of a neuron consists of a constant excitatory current and changing inhibitory currents from the other neurons. Neurons compete with one another by


increasing the inhibitory current. Finally, the neuron with the smaller excitatory input is likely to stop firing first, and the neuron with the highest current wins the competition. From the STDP function shown in Fig. 2, if ΔI between two neurons is too large, Δt may fall in the LTD window; the synaptic weights may then decrease to 0 so that IN-STDP no longer works, and two or more neurons may remain active until the end of the simulation. To avoid this problem, the minimal synaptic weight was set to the initial value.

The currents from 41 Hz to 295 Hz were separated into 10 groups of data, with each set of 5 inputs selected from one group; all combinations of 5 inputs were tested. It was found that when Δf between two neurons is larger than 5 Hz, the neuron with the higher initial firing rate easily wins the competition; when Δf is less than 5 Hz, the selection of the winner during the competition is random. As shown in Fig. 3(c), neuron 4, with an input current of 31.53 mA, wins over neuron 5 because the difference between the two inputs is too small; it is hard to differentiate one from the other by the biological mechanism alone. To apply this mechanism to artificial intelligent systems, we propose a dynamic learning rate method to enhance the competitive behavior. Using the proposed method, the neuron with the highest excitatory input always wins the competition for any random combination of 5 inputs. The method is introduced in the next subsection.

Fig. 3. Firing states of neurons with inhibitory synapses connected to one another: (a) with fixed inhibitory synapses; (b) and (c) with inhibitory synapses adjusted by IN-STDP.

D. Improving methods

The reason the neuron with the high input current does not win the competition is that the firing rates of the neurons are too close and the weights of the synapses connecting two neurons are the same. If the inhibitory synaptic weight of the neuron with the higher current is less than that of the neuron with the smaller current, the neuron with the higher input can win the competition. A solution called dynamic learning rate achieves this. The learning-rate function of the neurons is given as follows:

$$F(r_{ij}) = \frac{1}{5}\big(-\exp(-0.5(200 r_{ij})^2) + \exp(-0.001(200 r_{ij})^2)\big) \qquad (10)$$

$$L(r_{ij}) = \begin{cases} 0.2^{\,1-F(r_{ij})} & \text{if } r_{ij} \ge 0 \\ 0.2^{\,1+F(r_{ij})} & \text{else} \end{cases} \qquad (11)$$

where $r_{ij}$ is the difference between the input currents of neurons i and j divided by the input current of neuron i, and $L(r_{ij})$ is the learning-rate function. As mentioned above, 0.2 is the standard learning rate obtained by trial and error; with this function, the learning rates stay around 0.2. Fig. 4 shows the learning-rate function. When $r_{ij}$ ranges from −0.2 to 0.2, the learning rate η varies with $r_{ij}$: when ΔI is small enough, the dynamic learning rate takes effect, and as the competition proceeds the synaptic weights become asymmetric, which differs from using IN-STDP alone. When ΔI is large enough, the method has little effect; the learning rate is at or near 0.2, and the IN-STDP learning rule alone leads the neuron with the highest current to win the competition. This mechanism combines IN-STDP with the dynamic learning rate method, yielding a robust competitive spiking neural network. Fig. 5 shows the result for the failure case demonstrated in Fig. 3(c): with this mechanism, the result is completely corrected, with a 100% success rate.
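Equations (10) and (11) can be sketched as follows; this is a minimal illustration, with r standing for the normalized input-current difference $r_{ij}$ defined above.

```python
import math

def F_r(r):
    """Eq. (10): modulation term from the normalized current difference r."""
    return (-math.exp(-0.5 * (200 * r) ** 2) + math.exp(-0.001 * (200 * r) ** 2)) / 5

def learning_rate(r):
    """Eq. (11): dynamic learning rate around the standard value 0.2."""
    return 0.2 ** (1 - F_r(r)) if r >= 0 else 0.2 ** (1 + F_r(r))
```

Since 0.2 < 1, a smaller exponent gives a larger rate: for small positive r the rate rises above 0.2, for small negative r it drops below 0.2, and for large |r| it returns to approximately 0.2, so the asymmetry only acts when the two inputs are nearly equal.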


Fig. 4. Function of learning rate

Fig. 5. Firing state of neurons using IN-STDP and dynamic learning rate mechanism

By using the IN-STDP learning rule combined with the dynamic learning rate method, a mechanism in which the neuron with the highest input wins the competition is achieved. This mechanism can be applied to image sharpening and skeletonization: in image sharpening, the intensity transitions detected by this mechanism can be highlighted as long as the brightest points (those with the largest gray values) are not at the edge of the neighborhood, and in skeletonization the brightest points can be preserved while the other points in the neighborhood are suppressed.

IV. CONCLUSION

In the present paper, we proposed a single-layer spiking neural network model using an STDP learning rule on inhibitory synapses to realize competition among different neurons. Using IN-STDP only, the network achieves a mechanism whereby the neuron with the highest input current wins the competition by inhibiting the other neurons, provided the firing-rate differences are greater than 5 Hz; for neurons with firing-rate differences of less than 5 Hz, the result of the competition is random. A method called the dynamic learning rate mechanism is therefore proposed, with which good competitive behavior is also achieved for neurons with firing-rate differences of less than 5 Hz. This spiking neural network model can be applied to image sharpening and skeletonization.

REFERENCES

[1] G. Q. Bi and M. M. Poo, "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type," J. Neuroscience, vol. 18, Dec. 1998, pp. 10464–10472.
[2] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science, vol. 275, Jan. 1997, pp. 213–215.
[3] S. Song, K. D. Miller, and L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neuroscience, vol. 3, Sep. 2000, pp. 919–926.
[4] L. F. Abbott and S. B. Nelson, "Synaptic plasticity: taming the beast," Nature Neuroscience, vol. 3 Suppl., Nov. 2000, pp. 1178–1183.
[5] R. C. Froemke and Y. Dan, "Spike-timing-dependent synaptic modification induced by natural spike trains," Nature, vol. 416, Mar. 2002, pp. 433–438.
[6] V. Egger, D. Feldmeyer, and B. Sakmann, "Coincidence detection and changes of synaptic efficacy in spiny stellate neurons in rat barrel cortex," Nature Neuroscience, vol. 2, Dec. 1999, pp. 1098–1105.
[7] D. E. Feldman, "Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex," Neuron, vol. 27, Jul. 2000, pp. 45–56.
[8] R. C. Malenka and M. F. Bear, "LTP and LTD: an embarrassment of riches," Neuron, vol. 44, Sep. 2004, pp. 5–21.
[9] N. Caporale and Y. Dan, "Spike timing-dependent plasticity: a Hebbian learning rule," Annu. Rev. Neurosci., vol. 31, 2008, pp. 25–46.
[10] M. A. Woodin, K. Ganguly, and M. M. Poo, "Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl− transporter activity," Neuron, vol. 39, Aug. 2003, pp. 807–820.
[11] J. S. Haas, T. Nowotny, and H. D. Abarbanel, "Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex," J. Neurophysiol., vol. 96, Dec. 2006, pp. 3305–3313.
[12] S. Sivakumaran, M. H. Mohajerani, and E. Cherubini, "At immature mossy-fiber–CA3 synapses, correlated presynaptic and postsynaptic activity persistently enhances GABA release and network excitability via BDNF and cAMP-dependent PKA," J. Neuroscience, vol. 29, Feb. 2009, pp. 2637–2647.
[13] D. E. Feldman, "Synaptic mechanisms for plasticity in neocortex," Annu. Rev. Neurosci., vol. 32, Jul. 2009, pp. 33–55.
[14] K. Fujita, "Spatial feature extraction by spike timing dependent synaptic modification," Lecture Notes in Computer Science, vol. 6443, 2010, pp. 148–154.
[15] J. L. Gaiarsa, O. Caillard, and Y. Ben-Ari, "Long-term plasticity at GABAergic and glycinergic synapses: mechanisms and functional significance," Trends in Neurosciences, vol. 25, Nov. 2002, pp. 564–570.
[16] A. Maffei, K. Nataraj, S. B. Nelson, and G. G. Turrigiano, "Potentiation of cortical inhibition by visual deprivation," Nature, vol. 443, Sep. 2006, pp. 81–84.
[17] L. Wang and A. Maffei, "The many faces of inhibitory plasticity: adding flexibility to cortical circuits throughout development," in Inhibitory Synaptic Plasticity, Chapter 1, Springer, 2011.
[18] E. Müller, "Simulation of high-conductance states in cortical neural networks," Master's thesis, University of Heidelberg, HD-KIP-03-22, 2003.
[19] K. Murakoshi and K. Suganuma, "A neural circuit model forming semantic network with exception using spike-timing-dependent plasticity of inhibitory synapses," Biosystems, Nov.–Dec. 2007, pp. 903–910.