
Neurocomputing 99 (2013) 439–447


Dynamical properties of continuous attractor neural network with background tuning

Jiali Yu a,*, Huajin Tang a, Haizhou Li a,b, Luping Shi c

a Institute for Infocomm Research, Agency for Science Technology and Research, Singapore 138632, Singapore
b School of Electrical Engineering and Telecommunications, University of New South Wales, Australia
c Data Storage Institute, Agency for Science Technology and Research, Singapore 117608, Singapore

Article info

Article history: Received 21 September 2011; received in revised form 16 February 2012; accepted 24 June 2012; available online 10 August 2012. Communicated by S. Arik.

Abstract

Persistent activity holds a transient stimulus for up to many seconds after the stimulus is gone. It has been implemented in a class of models known as continuous attractor neural networks, which have infinitely many stable states corresponding to persistent activity patterns. In the absence of stimulus input, a continuous attractor neural network remains stable and does not change its state systematically. A continuous attractor is a set of connected stable equilibrium points and has been used to describe the storage of continuous stimuli in neural networks. The background input plays an important role in such networks. In this paper, the dynamical properties of continuous attractor neural networks under two background input tuning schemes are investigated: constant background shifting and oscillating background activity. Simulations are employed to illustrate the theory.

Keywords: Persistent activity; Continuous attractors; Linear recurrent neural networks; Background input

1. Introduction

Neural activity that persists for up to many seconds after a transient sensory stimulus is gone is called persistent activity [1]. It is thought to be the neural substrate of short-term memory in a wide variety of brain areas [2]. Persistent activity is considered a neuronal correlate of working memory [3] and has been interpreted as the memory of eye position [4], head direction [5], and so on.

Persistent activity has been implemented in a class of models known as attractor neural networks, which have multiple stable states corresponding to persistent activity patterns. Attractor neural networks may function as memory devices: memories are encoded in the synaptic connections of the network and, as shown in [6–8], multiple patterns can be implemented as fixed-point attractors of the network. An initial state may then flow dynamically into one of the attractors and thus recall the stored pattern [9–12]. In graded persistent activity, neurons can sustain firing at many different levels, suggesting that such networks can relax to any one of a continuum of stationary states [13]. Such a continuum of stationary states is usually called a continuous attractor.


A continuous attractor is a set of connected stable equilibrium points, i.e., a continuous manifold of fixed points. A continuous attractor neural network can act as an integrator for any stimulus that causes the network's state to shift along the attractor. Once the stimulus is removed, the network remains stable and does not change its state systematically in the absence of input [14]. Continuous attractors have been used to describe the encoding of continuous stimuli such as eye position [4,15], head direction [5], moving direction [16,17], path integration [18–20], cognitive maps [21] and population decoding [22,23]. Moreover, continuous attractor networks are able to maintain a localized packet of neuronal firing activity [24] and have been used to store a pair of correlated maps, such as a morph sequence between two uncorrelated maps [25].

Continuous attractors can have different shapes. An attractor that forms a loop is called a 'ring attractor', whereas an attractor that does not loop back on itself is called a 'line attractor' [26]. A ring attractor is used to represent periodic angular variables such as direction [27]; in its study the neurons are aligned and the firing rates are bell-shaped (bump-shaped) functions of the stored variable, so a ring attractor is also called a bell-shaped attractor [28]. A line attractor is often applied to the memory of eye position [29,30].

The background input plays an important role in the performance of neural networks. It may act as a switch that allows a network to be turned on or off [31]. Small changes in the background input level may shift a network from a relatively quiet state to another state with highly complex dynamics.


In [3], with an increased background input, the target population of neurons reactivates spontaneously with a population spike, and a further increase in background input leads to working memory with asynchronous elevated firing in the target population. In addition, periodic membrane oscillations due to rhythmic background activity are typical of various brain regions, and such oscillations may play a beneficial role in content-addressable memory processes [11]. Theta oscillations have been recorded in the hippocampus [32]. Temporal correlations between active cells in the medial septum and the hippocampal system indicate that the medial septum provides a constant cholinergic modulation that facilitates oscillations and induces a phasic drive [33]. Successful memory formation is correlated with tight coordination of spike timing with the local theta oscillation [34].

In our previous works [35,36], the continuous attractors of neural networks with an invariant constant background input were studied. One naturally asks what happens to the continuous attractor when the background input increases or oscillates. Does the continuous attractor disappear? How does it change under different background tuning schemes? Are the trajectories of the network periodic when the background input oscillates? Motivated by these questions, we investigate the dynamical properties of continuous attractors with background tuning in this paper. Two tuning mechanisms are investigated in detail: constant background tuning and periodic background oscillation. We find some novel and interesting results which help to reveal the essence of continuous attractors.

This paper is organized as follows. The linear neural network model is introduced in Section 2. The model with constant background activity is studied in Section 3, and the model with oscillating background input in Section 4. Finally, conclusions are drawn in Section 5.

2. Neural network model

The linear neural network is the simplest recurrent network model. In order to study the dynamical properties of neural networks with background tuning, we consider the linear recurrent neural network

$$\dot{x}(t) + x(t) = W x(t) + b(t) \tag{1}$$

for t ≥ 0, where x = (x_1, …, x_n)^T ∈ R^n is the state vector, x_i is the activity of neuron i, and W is the synaptic connection matrix, obtained at the coding or training stage according to the Hebbian rule. We assume that W = (W_{ij})_{n×n} is a symmetric real constant matrix. b(t) = (b_1(t), …, b_n(t))^T denotes the background input, which is independent of the initial conditions; b(t) may be a constant or a function of time t.

The linear network with constant background was investigated in our previous work [35], where an explicit representation of the continuous attractor was given. As an extension of that work, we first focus on how the continuous attractor shifts under this kind of constant background. We then study the dynamic properties of the network when the background input is a periodic function of time t and find some further interesting results.

Since the synaptic connection matrix W is symmetric, it possesses an orthonormal eigensystem. Let λ_i (i = 1, …, n) be the eigenvalues of W ordered as λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n. Suppose that the S_i (i = 1, …, n) form an orthonormal basis of R^n such that each S_i is an eigenvector of W belonging to λ_i. Let the multiplicity of λ_1 be m, and denote by V_{λ_1} the eigensubspace associated with the eigenvalue λ_1.

Given any initial state x(0) ∈ R^n, let x(t) be the trajectory starting from x(0). Then x(t) can be represented as

$$x(t) = \sum_{i=1}^{n} z_i(t) S_i \tag{2}$$

for t ≥ 0, where the z_i(t) (i = 1, …, n) are the projections of the trajectory x(t) onto the eigenvectors S_i. It is clear that

$$x(0) = \sum_{i=1}^{n} z_i(0) S_i.$$

Suppose that

$$b(t) = \sum_{i=1}^{n} \tilde{b}_i(t) S_i,$$

where the b̃_i(t) (i = 1, …, n) are the projections of the background input b(t) onto the eigenvectors S_i.

3. Model with constant background activity

In this section the background input is constant, b(t) ≡ b, and the elements of b may be positive, zero or negative. Suppose that

$$b = \sum_{i=1}^{n} \tilde{b}_i S_i.$$

The theorem in our previous work [35] gives sufficient conditions for network (1) to possess a continuous attractor.

Lemma 1 (Yu et al. [35]). Suppose λ_1 = 1 and b ⊥ V_{λ_1}. Then the linear network (1) has a continuous attractor, which can be represented by

$$C = \left\{ \sum_{i=1}^{m} c_i S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}_j}{1-\lambda_j} S_j \;\middle|\; c_i \in R\ (1 \le i \le m) \right\}.$$

If all the eigenvalues of W equal 1, that is, λ_1 = 1 with multiplicity n, then V_{λ_1} = R^n. Because b ⊥ V_{λ_1}, we have b = 0, and (1) reduces to

$$\dot{x}(t) = 0$$

for t ≥ 0, so x(t) ≡ x(0): every trajectory remains at its initial point, and all points of R^n are equilibria. On the other hand, if all the eigenvalues of W are less than 1, the attractors of the network are discrete rather than continuous, and a network with such a W is not a continuous attractor neural network. Generally speaking, when the largest eigenvalue of W is 1 and the others are less than 1, the network possesses a continuous attractor. Moreover, the dimension of C is m < n, so C is a low-dimensional manifold embedded in the n-dimensional state space. In the neural coding of eye position, the manifold representing the eye position in the brain is one-dimensional. An example of a manifold with dimensionality higher than one is the coding of images: if we vary the orientation, translation and scaling of a face simultaneously, we obtain a three-dimensional set of facial images.

The continuous attractor C is parallel to V_{λ_1}, which is spanned by the eigenvectors of W with eigenvalue 1. We call any direction in the linear span of these m eigenvectors a tangent direction of C, and any direction orthogonal to C a normal direction.
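To make the eigendecomposition and the attractor representation of Lemma 1 concrete, here is a minimal numerical sketch (assuming Python with numpy; the particular W and b below are illustrative choices, not taken from the paper). It computes the orthonormal eigensystem of a symmetric W, the projections b̃_i of the background input, and a few points of the set C.

```python
import numpy as np

# Illustrative symmetric connection matrix with largest eigenvalue 1 (multiplicity m = 1)
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
# Background input chosen orthogonal to the eigenspace of eigenvalue 1
b = np.array([-1.0, 1.0])

# Orthonormal eigensystem of the symmetric matrix W (eigh returns eigenvalues in ascending order)
lam, S = np.linalg.eigh(W)          # columns of S are the eigenvectors S_i
order = np.argsort(lam)[::-1]       # reorder so that lambda_1 >= lambda_2 >= ...
lam, S = lam[order], S[:, order]

m = int(np.sum(np.isclose(lam, 1.0)))   # multiplicity of the eigenvalue 1
b_tilde = S.T @ b                        # projections of b on the eigenvectors

assert np.allclose(b_tilde[:m], 0.0), "b must be orthogonal to V_{lambda_1} (Lemma 1)"

# Points of the continuous attractor C = { sum_i c_i S_i + sum_j b_tilde_j/(1-lambda_j) S_j }
offset = S[:, m:] @ (b_tilde[m:] / (1.0 - lam[m:]))
for c in np.linspace(-2.0, 2.0, 5):
    point = S[:, :m] @ np.array([c]) + offset
    print(c, point)
```

Varying the free coefficient c moves the point along the tangent direction S_1, while the fixed offset collects the contributions b̃_j/(1 − λ_j) along the remaining eigenvectors.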

It should be noted that although each point in C is stable, this does not mean that every point in C is asymptotically stable.

Definition 1. An equilibrium point x* is said to be asymptotically stable if x* is stable and there exists η > 0 such that ‖x(0) − x*‖ ≤ η implies

$$\lim_{t\to+\infty} x(t) = x^{*}.$$

Theorem 1. Suppose λ_1 = 1 is the largest eigenvalue of W with multiplicity m and b ⊥ V_{λ_1}. Then the continuous attractor is asymptotically stable only in its normal directions, and the trajectory starting from x(0) converges to

$$\sum_{i=1}^{m} z_i(0) S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}_j}{1-\lambda_j} S_j.$$

Proof. Since b ⊥ V_{λ_1}, we have b̃_1 = ⋯ = b̃_m = 0. Since the multiplicity of the largest eigenvalue λ_1 = 1 is m, it follows from (1) and (2) that

$$\dot{z}_i(t) = 0, \quad 1 \le i \le m$$

and

$$\dot{z}_j(t) = (\lambda_j - 1) z_j(t) + \tilde{b}_j, \quad m+1 \le j \le n$$

for t ≥ 0. It is easy to see that the derivative of z_i(t) is zero along the tangent directions of C. Solving these equations gives

$$z_i(t) = z_i(0), \quad 1 \le i \le m$$

and

$$z_j(t) = \frac{\tilde{b}_j}{1-\lambda_j} + \left( z_j(0) - \frac{\tilde{b}_j}{1-\lambda_j} \right) e^{(\lambda_j-1)t}, \quad m+1 \le j \le n$$

for t ≥ 0. Thus

$$x(t) = \sum_{i=1}^{m} z_i(0) S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}_j}{1-\lambda_j} S_j + \sum_{j=m+1}^{n} \left( z_j(0) - \frac{\tilde{b}_j}{1-\lambda_j} \right) e^{(\lambda_j-1)t} S_j \tag{3}$$

for t ≥ 0. From (3), as t → +∞ the third term on the right-hand side of (3) satisfies

$$\sum_{j=m+1}^{n} \left( z_j(0) - \frac{\tilde{b}_j}{1-\lambda_j} \right) e^{(\lambda_j-1)t} S_j \to 0,$$

so

$$x(t) \to \sum_{i=1}^{m} z_i(0) S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}_j}{1-\lambda_j} S_j$$

as t → +∞. Then, by Definition 1, the trajectories converge to the corresponding point on the continuous attractor C asymptotically, but only along the normal directions of C. The proof is complete. □

Consider the model

$$\dot{x}(t) + x(t) = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} x(t) + \begin{pmatrix} -0.7071 \\ 0.7071 \end{pmatrix}. \tag{4}$$

Denote

$$W = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}, \qquad b = \begin{pmatrix} -0.7071 \\ 0.7071 \end{pmatrix}.$$

It can be checked that the eigenvalues of W are λ_1 = 1 and λ_2 = 0, with eigenvectors

$$S_1 = \begin{pmatrix} 0.7071 \\ 0.7071 \end{pmatrix}, \qquad S_2 = \begin{pmatrix} -0.7071 \\ 0.7071 \end{pmatrix}.$$

Moreover, b ⊥ V_{λ_1}. The continuous attractor is

$$C = \{ c\, S_1 + b \mid c \in R \}.$$

The dashed line in Fig. 1 is the continuous attractor C, and S_1 is its tangent direction. If the initial point is (1,1), the convergence point is the projection of (1,1) on C, and the trajectory converges to this point asymptotically only along the normal direction of C. If we move the initial point (1,1) along the direction parallel to C to another point (0.5,0.5), the terminal point is again on the continuous attractor, namely the projection of (0.5,0.5) on C. If we move the initial point along the normal direction, which is orthogonal to C, to another point (1.5,0.5), the convergence point does not change, because the projections of the two initial points (1,1) and (1.5,0.5) on the continuous attractor are the same. If we move the initial point continuously along a direction that is neither tangent nor normal, to (2,1.5), the steady state shifts continuously along the continuous attractor. In Fig. 1 the dotted line is the shift of the initial points, the bold solid line is the corresponding shift along the continuous attractor, and the thin solid lines are the trajectories from different initial points.

Fig. 1. The shift of the initial points of (4) along different directions.
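The convergence described by Theorem 1 and illustrated in Fig. 1 can be checked numerically. The sketch below (assuming numpy, a simple forward-Euler discretization of (1), and the sign convention used in the reconstruction of model (4) above) integrates the network from the initial points discussed in the text and compares the settled state with the limit predicted by Theorem 1.

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
b = np.array([-0.7071, 0.7071])     # background of model (4), orthogonal to V_{lambda_1}

def simulate(x0, T=30.0, dt=1e-3):
    """Forward-Euler integration of dx/dt = -x + W x + b."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x += dt * (-x + W @ x + b)
    return x

# Eigen-decomposition used in the statement of Theorem 1
lam, S = np.linalg.eigh(W)
order = np.argsort(lam)[::-1]
lam, S = lam[order], S[:, order]
m = int(np.sum(np.isclose(lam, 1.0)))

def theorem1_limit(x0):
    z0 = S.T @ np.array(x0, dtype=float)     # projections z_i(0)
    b_tilde = S.T @ b
    return S[:, :m] @ z0[:m] + S[:, m:] @ (b_tilde[m:] / (1.0 - lam[m:]))

for x0 in [(1.0, 1.0), (0.5, 0.5), (1.5, 0.5), (2.0, 1.5)]:
    xT = simulate(x0)
    print(x0, "->", xT, "predicted:", theorem1_limit(x0))
```

In particular, the initial points (1,1) and (1.5,0.5) should settle at the same point of C, since they differ only along the normal direction.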

Let us now study the shift of the attractor induced by a shift of the background. If the background b is changed, the continuous attractor translates, but its direction does not change.

Theorem 2. Suppose λ_1 = 1 and b ⊥ V_{λ_1}. If b is given a small change continuously along the normal direction of the continuous attractor, then the continuous attractor C is shifted continuously to another continuous attractor along the normal direction of C.

Proof. Give b a small change and denote the changed background by b♯, where

$$b = \sum_{i=m+1}^{n} \tilde{b}_i S_i \qquad \text{and} \qquad b^{\sharp} = \sum_{i=m+1}^{n} \tilde{b}^{\sharp}_i S_i.$$

Then the continuous attractor

$$C = \left\{ \sum_{i=1}^{m} c_i S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}_j}{1-\lambda_j} S_j \;\middle|\; c_i \in R\ (1 \le i \le m) \right\}$$

is shifted to

$$C^{\sharp} = \left\{ \sum_{i=1}^{m} c_i S_i + \sum_{j=m+1}^{n} \frac{\tilde{b}^{\sharp}_j}{1-\lambda_j} S_j \;\middle|\; c_i \in R\ (1 \le i \le m) \right\}.$$

The first m projections onto the eigenvectors belonging to eigenvalue 1 are the same for C and C♯, so C shifts continuously to C♯ only along the normal direction of C. The proof is complete. □

Reconsider the model (4). Increasing the background input b along the normal direction of the continuous attractor, we obtain two further background inputs b_2 = 1.5b and b_3 = 2b; all three inputs are orthogonal to V_{λ_1}. The simulation results are given in Fig. 2. The continuous attractor and the trajectories of the network with background input b are plotted in red, and those with background inputs b_2 and b_3 in blue and black, respectively. We can see that the continuous attractor shifts continuously from the initial one to the final one and that all the continuous attractors are parallel lines. The different projections of the three inputs onto S_j (j = m+1, …, n) produce this translation of the continuous attractors.

Fig. 2. The shift of the continuous attractor of the linear network (4) with three different background inputs. The continuous attractors and trajectories of the network with background input b, b_2 and b_3 are plotted in red, blue and black, respectively.
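The shift experiment of Fig. 2 can be sketched in a few lines (again assuming numpy and forward-Euler integration; the background b follows the reconstruction of model (4)). Scaling b by 1.5 and 2 leaves the tangent direction S_1 unchanged and translates the attractor along its normal direction, so a trajectory started from the same initial point settles onto parallel lines with proportionally larger normal offsets.

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
b1 = np.array([-0.7071, 0.7071])
backgrounds = {"b": b1, "1.5b": 1.5 * b1, "2b": 2.0 * b1}

S1 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # tangent direction of the attractor (eigenvalue 1)

def settle(x0, b, T=30.0, dt=1e-3):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x += dt * (-x + W @ x + b)
    return x

x0 = (1.0, 1.0)
for name, b in backgrounds.items():
    xT = settle(x0, b)
    # component of the settled point along the normal direction of C
    normal_offset = xT - (xT @ S1) * S1
    print(name, "settled at", xT, "normal offset", normal_offset)
```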

4. Model with oscillating background input

Generally, when the background input is not constant but a continuous periodic vector function, the attractor is not an equilibrium point but a periodic trajectory. In fact, periodic oscillations are prominent in the hippocampus. In this section we investigate the dynamical properties of continuous attractor neural networks with periodic background. The analysis of oscillation is more general than stability analysis, since a fixed point is the special case of an oscillation with arbitrary period. We will show that under some conditions the network (1) has infinitely many stable periodic trajectories with the same period as the background input.

The background input of the linear neural network is now a continuous periodic vector function defined on [0, +∞) with period ω, i.e., there exists a constant ω > 0 such that b_i(t + ω) = b_i(t) (i = 1, …, n) for all t ≥ 0. Given any x ∈ R^n, we define a norm by

$$\|x\| = \max_{1 \le i \le n} |x_i|.$$

Let D ⊂ R^n. For any initial point φ ∈ D, we denote the solution of (1) starting from φ by x(t, φ) = (x_1(t, φ), x_2(t, φ), …, x_n(t, φ))^T; that is, x(t, φ) is continuous, satisfies (1), and x(0, φ) = φ.

Definition 2. A set D is called an invariant set if each trajectory starting from D stays in D forever.

Lemma 2 (Yu et al. [37]). Given any φ, ψ ∈ D, where D is an invariant set of the network (1), if constants γ > 0 and ε > 0 exist such that

$$\|x(t,\varphi) - x(t,\psi)\| \le \gamma \|\varphi - \psi\|\, e^{-\varepsilon t} \tag{5}$$

for all t ≥ 0, then there is only one periodic trajectory of the network (1) in D, and it exponentially attracts all trajectories of D.
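The contraction condition (5) of Lemma 2 can be probed numerically: for two initial points φ, ψ in the same invariant set, the distance ‖x(t,φ) − x(t,ψ)‖ in the max norm should decay exponentially. A minimal sketch (assuming numpy; the concrete W, b(t) and initial points are illustrative, with b(t) chosen orthogonal to V_{λ_1} as in Theorem 3 below):

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def b(t):
    # periodic background input, orthogonal to V_{lambda_1} = span{(1,1)}
    return np.array([np.cos(t), -np.cos(t)])

def trajectory(x0, T=20.0, dt=1e-3):
    xs, x, t = [], np.array(x0, dtype=float), 0.0
    for _ in range(int(T / dt)):
        xs.append(x.copy())
        x = x + dt * (-x + W @ x + b(t))
        t += dt
    return np.array(xs)

phi = np.array([2.0, -2.0])     # both initial points lie in the set orthogonal to V_{lambda_1}
psi = np.array([-1.0, 1.0])
xa, xb = trajectory(phi), trajectory(psi)

dist = np.max(np.abs(xa - xb), axis=1)      # ||x(t,phi) - x(t,psi)|| in the max norm
ts = np.arange(len(dist)) * 1e-3
# slope of the log-distance is approximately -epsilon
eps_est = -np.polyfit(ts[: len(ts) // 2], np.log(dist[: len(ts) // 2]), 1)[0]
print("estimated contraction rate epsilon ~", eps_est)
```

For this choice the estimated rate should be close to ε = 1 − λ_2 = 1.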

Compared with the constant background, we find that the linear network with an oscillating background has similar but more interesting properties.

Theorem 3. Suppose λ_1 = 1 is the largest eigenvalue of W with multiplicity m and b(t) ⊥ V_{λ_1}. Then the set D ⊥ V_{λ_1} is an invariant set of the linear network (1). Moreover, the linear network (1) has exactly one periodic trajectory located in D, which exponentially attracts all trajectories of D. Thus the linear network (1) has infinitely many exponentially stable periodic trajectories.

Proof. The proof is divided into two parts. In the first part we prove that D is an invariant set, i.e., given any initial state x(0) ∈ D, the trajectory x(t) starting from x(0) stays in D. Since b(t) ⊥ V_{λ_1}, we have b̃_1(t) = ⋯ = b̃_m(t) = 0. Since the multiplicity of the largest eigenvalue 1 is m, it follows from (1) and (2) that

$$\dot{z}_i(t) = 0 \quad (i = 1, \ldots, m)$$

for t ≥ 0. Then we have

$$z_i(t) = z_i(0) \quad (i = 1, \ldots, m)$$

for t ≥ 0. Since x(0) ∈ D and D ⊥ V_{λ_1}, we have z_1(0) = ⋯ = z_m(0) = 0, so z_1(t) = ⋯ = z_m(t) = 0 for t ≥ 0, and hence x(t) ∈ D. Clearly D is an invariant set.

Next, in the second part, we prove that there is one periodic trajectory located in D and that this periodic trajectory exponentially attracts all trajectories of D. Let x(t, φ) and x(t, ψ) be two trajectories of the network (1) with initial conditions φ and ψ, respectively, where φ, ψ ∈ D. Then from (1) we have

$$\dot{x}(t,\varphi) + x(t,\varphi) = W x(t,\varphi) + b(t)$$

and

$$\dot{x}(t,\psi) + x(t,\psi) = W x(t,\psi) + b(t)$$

for all t ≥ 0. Moreover,

$$x(t,\varphi) = \sum_{i=1}^{n} z_i(t,\varphi) S_i$$

and

$$x(t,\psi) = \sum_{i=1}^{n} z_i(t,\psi) S_i$$

for t ≥ 0. Denote u(t) = x(t, φ) − x(t, ψ) and

$$z_i(t) = z_i(t,\varphi) - z_i(t,\psi) \quad (i = 1, 2, \ldots, n);$$

it follows that

$$\dot{u}(t) + u(t) = W u(t)$$

for t ≥ 0. Then we have

$$\sum_{i=1}^{n} \dot{z}_i(t) S_i + \sum_{i=1}^{n} z_i(t) S_i = W \sum_{i=1}^{n} z_i(t) S_i$$

for all t ≥ 0. It follows that

$$\dot{z}_i(t) = 0 \quad (i = 1, \ldots, m)$$

and

$$\dot{z}_j(t) = (\lambda_j - 1)\, z_j(t) \quad (j = m+1, \ldots, n)$$

for t ≥ 0. Then we have

$$z_i(t) = z_i(0) \quad (i = 1, \ldots, m)$$

and

$$z_j(t) = z_j(0)\, e^{(\lambda_j-1)t} \quad (j = m+1, \ldots, n)$$

for t ≥ 0. Let

$$\bar{\lambda} = \max_{m+1 \le j \le n} \{\lambda_j\} < 1$$

and ε = 1 − λ̄. Thus

$$u(t) = \sum_{i=1}^{n} z_i(t) S_i = \sum_{j=m+1}^{n} z_j(0) S_j\, e^{(\lambda_j-1)t} \le \sum_{j=m+1}^{n} z_j(0) S_j\, e^{(\bar{\lambda}-1)t} = \sum_{i=1}^{n} z_i(0) S_i\, e^{-\varepsilon t} = (x(0,\varphi) - x(0,\psi))\, e^{-\varepsilon t} = (\varphi - \psi)\, e^{-\varepsilon t}$$

for t ≥ 0. Then

$$\|x(t,\varphi) - x(t,\psi)\| \le \|\varphi - \psi\|\, e^{-\varepsilon t}$$

for t ≥ 0. By Lemma 2, the network (1) has one periodic trajectory located in D, and it exponentially attracts all trajectories in D. Given any initial point x(0), there is an invariant set D such that x(0) ∈ D, so there are infinitely many invariant sets of (1) which are orthogonal to V_{λ_1} in the state space; thus the linear network (1) has infinitely many exponentially stable periodic trajectories. The proof is complete. □

This can be illustrated by the following 2-D network:

$$\dot{x}(t) + x(t) = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} x(t) + \begin{pmatrix} \cos(t) \\ -\cos(t) \end{pmatrix} \tag{6}$$

for t ≥ 0. The difference between this model and network (4) is the background: here the background b(t) is a periodic vector function. The black line in Fig. 3 is the 1-D V_{λ_1} in the 2-D space. The open circles are randomly selected initial points, and the blue lines starting from these initial points are trajectories of the network. All the periodic trajectories oscillate along the normal direction of V_{λ_1} and pass through V_{λ_1}. The two red lines in Fig. 3 are the boundaries of the periodic trajectories. We can see that the center of every periodic solution is on V_{λ_1}. This figure is similar to Fig. 2, but the two are different: in Fig. 2 the trajectories are attracted to a point on the continuous attractor, while in Fig. 3 the trajectories vibrate around the black line.

Fig. 3. Periodic trajectories of model (6). The black line is V_{λ_1}. The open circles are randomly selected initial points, and the blue lines starting from these initial points are trajectories of the network. All the periodic trajectories oscillate back and forth along the normal direction of the black line, and the center of every periodic solution is on the black line. The two red lines are the boundaries of the periodic trajectories.
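The behaviour of model (6) shown in Fig. 3 can be reproduced with a short simulation (a sketch assuming numpy, forward-Euler integration, and the sign convention b(t) = (cos t, −cos t)^T used in the reconstruction of (6) above). The projection of the state onto V_{λ_1} stays at its initial value, so each trajectory remains in its own invariant set D, while the normal component settles onto an oscillation with the period of the background.

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
S1 = np.array([1.0, 1.0]) / np.sqrt(2.0)       # spans V_{lambda_1}

def b(t):
    return np.array([np.cos(t), -np.cos(t)])   # periodic background of model (6), orthogonal to V_{lambda_1}

def run(x0, T=40.0, dt=1e-3):
    x, t, xs = np.array(x0, dtype=float), 0.0, []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ x + b(t))
        t += dt
        xs.append(x.copy())
    return np.array(xs)

xs = run((2.0, -1.0))
tangential = xs @ S1                             # projection on V_{lambda_1}: should stay constant
normal = (xs @ np.array([1.0, -1.0])) / np.sqrt(2.0)

print("tangential drift:", np.ptp(tangential))   # ~0: the trajectory stays in its invariant set D
tail = normal[-int(4 * np.pi / 1e-3):]           # last two periods of the settled oscillation
print("normal component oscillates between", tail.min(), "and", tail.max())
```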

Let us consider a 3-D network:

$$\dot{x}(t) + x(t) = \begin{pmatrix} 0 & -1 & -1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{pmatrix} x(t) + \begin{pmatrix} \cos(t) \\ \cos(t) \\ \cos(t) \end{pmatrix} \tag{7}$$

for t ≥ 0.

It can be checked that the largest eigenvalue of its connection matrix is λ_1 = 1 with multiplicity 2, and that the other eigenvalue is λ_2 = −2. The eigenvectors belonging to the eigenvalue λ_1 = 1 are

$$S_1 = \begin{pmatrix} 0.3938 \\ -0.8163 \\ 0.4225 \end{pmatrix}, \qquad S_2 = \begin{pmatrix} -0.7152 \\ 0.0166 \\ 0.6987 \end{pmatrix}.$$

Clearly, b(t) ⊥ V_{λ_1}. By Theorem 3, the network possesses infinitely many periodic trajectories. The gray plane in Fig. 4 is the 2-D V_{λ_1} in the 3-D space. The open circles are 30 randomly selected initial points, and the trajectories are lines oscillating along the normal direction of V_{λ_1}. We can see in Fig. 5 that the center of every periodic solution is on this plane.

Fig. 4. Periodic trajectories of model (7). The gray plane is V_{λ_1}. The open circles are 30 randomly selected initial points, and the trajectories are lines oscillating along the normal direction of V_{λ_1}.

Fig. 5. Periodic trajectories of model (7) and V_{λ_1} from another view. The center of every periodic solution is on V_{λ_1}.
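A numerical sketch of model (7) (assuming numpy and the reconstructed sign convention for its connection matrix, i.e. negative off-diagonal entries so that λ_1 = 1 has multiplicity 2). For each initial point, the two projections inside V_{λ_1} stay fixed while the component along the normal direction (1,1,1)/√3 oscillates, so every trajectory is a line segment orthogonal to the gray plane of Figs. 4 and 5.

```python
import numpy as np

# Reconstructed connection matrix of model (7): largest eigenvalue 1 with multiplicity 2
W = np.array([[ 0., -1., -1.],
              [-1.,  0., -1.],
              [-1., -1.,  0.]])

def b(t):
    return np.cos(t) * np.ones(3)      # orthogonal to V_{lambda_1} = {x : x1 + x2 + x3 = 0}

n_hat = np.ones(3) / np.sqrt(3.0)      # normal direction of the plane V_{lambda_1}

def run(x0, T=40.0, dt=1e-3):
    x, t, xs = np.array(x0, dtype=float), 0.0, []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ x + b(t))
        t += dt
        xs.append(x.copy())
    return np.array(xs)

rng = np.random.default_rng(1)
for x0 in rng.uniform(-0.5, 3.0, size=(3, 3)):
    xs = run(x0)
    in_plane = xs - np.outer(xs @ n_hat, n_hat)          # component inside (a translate of) V_{lambda_1}
    drift = float(np.max(np.linalg.norm(in_plane - in_plane[0], axis=1)))
    tail = (xs @ n_hat)[-int(2 * np.pi / 1e-3):]         # one period of the settled normal oscillation
    print("in-plane drift", drift, "normal oscillation range", float(np.ptp(tail)))
```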

The homogeneous form of network (1) is

$$\dot{x}(t) + x(t) = W x(t) \tag{8}$$

for t ≥ 0. Under the conditions of Theorem 3, the homogeneous network (8) has a continuous attractor; in fact, the continuous attractor of (8) is V_{λ_1}. The black line in Fig. 3 and the gray plane in Fig. 4 are the continuous attractors of the homogeneous forms of the respective networks.

In Fourier analysis, every periodic function can be decomposed into a sum of simple oscillating functions, namely sines and cosines. In order to understand the above theorem more clearly, we investigate the simplest cases, in which the background input is a cosine function or a sine function.

Theorem 4. Suppose b(t) = d cos(t) for a constant vector d ∈ R^n, and λ_1 = 1 is the largest eigenvalue of W with multiplicity m. Then the periodic trajectories of the linear network (1) can be represented by

$$P = \left\{ \sum_{i=1}^{m} (c_i + d_i \sin(t)) S_i + \sum_{j=m+1}^{n} k_j(t) S_j \right\},$$

where c_i ∈ R (1 ≤ i ≤ m), the d_i are the projections of d onto the eigenvectors S_i, and

$$k_j(t) = d_j\, \frac{(1-\lambda_j)\cos(t) + \sin(t)}{1 + (1-\lambda_j)^2} \quad (m+1 \le j \le n).$$

Moreover, the periodic trajectories have the same period as the background input b(t).

Proof. Since b(t) = d cos(t), the projections of the background input are b̃_i(t) = d_i cos(t), where d_i ∈ R. Since the multiplicity of the largest eigenvalue λ_1 = 1 is m, it follows from (1) and (2) that

$$\dot{z}_i(t) = d_i \cos(t), \quad 1 \le i \le m$$

and

$$\dot{z}_j(t) = (\lambda_j - 1) z_j(t) + d_j \cos(t), \quad m+1 \le j \le n$$

for t ≥ 0. Solving these equations gives

$$z_i(t) = z_i(0) + d_i \sin(t), \quad 1 \le i \le m$$

and

$$z_j(t) = k_j(t) + (z_j(0) - \eta_j)\, e^{(\lambda_j-1)t}, \quad m+1 \le j \le n$$

for t ≥ 0, where

$$k_j(t) = d_j\, \frac{(1-\lambda_j)\cos(t) + \sin(t)}{1 + (1-\lambda_j)^2}, \qquad \eta_j = \frac{d_j (1-\lambda_j)}{1 + (1-\lambda_j)^2}.$$

Thus

$$x(t) = \sum_{i=1}^{m} (z_i(0) + d_i \sin(t)) S_i + \sum_{j=m+1}^{n} k_j(t) S_j + \sum_{j=m+1}^{n} (z_j(0) - \eta_j)\, e^{(\lambda_j-1)t} S_j \tag{9}$$

for t ≥ 0. From (9), as t → +∞ the third term on the right-hand side of (9) satisfies

$$\sum_{j=m+1}^{n} (z_j(0) - \eta_j)\, e^{(\lambda_j-1)t} S_j \to 0,$$

so as t → +∞,

$$x(t) \to \sum_{i=1}^{m} (z_i(0) + d_i \sin(t)) S_i + \sum_{j=m+1}^{n} k_j(t) S_j.$$

Choosing different initial points x(0), we obtain the continuous family of periodic trajectories

$$P = \left\{ \sum_{i=1}^{m} (c_i + d_i \sin(t)) S_i + \sum_{j=m+1}^{n} k_j(t) S_j \right\}, \quad c_i \in R\ (1 \le i \le m).$$

From the form of c_i + d_i sin(t) and k_j(t) we can see that the period of P is the same as that of the background input. The proof is complete. □
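The closed-form periodic solution of Theorem 4 can be checked against a direct simulation. The sketch below (assuming numpy; the matrix W, the vector d and the initial state are illustrative choices) integrates (1) with b(t) = d cos(t) and compares the late-time projections with z_i(t) = z_i(0) + d_i sin(t) and z_j(t) = k_j(t) + (z_j(0) − η_j) e^{(λ_j−1)t}.

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
d = np.array([1.0, 0.3])            # b(t) = d*cos(t); d need not be orthogonal to V_{lambda_1}

lam, S = np.linalg.eigh(W)
order = np.argsort(lam)[::-1]
lam, S = lam[order], S[:, order]
d_proj = S.T @ d                     # the coefficients d_i of Theorem 4

def z_exact(t, z0):
    """Projections predicted by the proof of Theorem 4 (here m = 1, n = 2)."""
    z = np.empty(2)
    z[0] = z0[0] + d_proj[0] * np.sin(t)
    k = d_proj[1] * ((1 - lam[1]) * np.cos(t) + np.sin(t)) / (1 + (1 - lam[1]) ** 2)
    eta = d_proj[1] * (1 - lam[1]) / (1 + (1 - lam[1]) ** 2)
    z[1] = k + (z0[1] - eta) * np.exp((lam[1] - 1) * t)
    return z

x0 = np.array([2.0, -1.0])
z0 = S.T @ x0

# forward-Euler simulation of (1)
x, t, dt = x0.copy(), 0.0, 1e-3
for _ in range(int(25.0 / dt)):
    x = x + dt * (-x + W @ x + d * np.cos(t))
    t += dt

print("simulated projections ", S.T @ x)
print("Theorem 4 prediction  ", z_exact(t, z0))
```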

Corollary 1. Suppose b(t) = d sin(t) for a constant vector d ∈ R^n, and λ_1 = 1 is the largest eigenvalue of W with multiplicity m. Then the periodic trajectories of the linear network (1) can be represented by

$$P = \left\{ \sum_{i=1}^{m} (c_i + d_i (1 - \cos(t))) S_i + \sum_{j=m+1}^{n} k_j(t) S_j \right\},$$

where c_i ∈ R (1 ≤ i ≤ m) and

$$k_j(t) = d_j\, \frac{(1-\lambda_j)\sin(t) - \cos(t)}{1 + (1-\lambda_j)^2} \quad (m+1 \le j \le n).$$

Moreover, the periodic trajectories have the same period as the background input b(t).

Proof. Let b(t) = d sin(t); the result follows in the same way as in the proof of Theorem 4. The proof is complete. □

With the results of Theorem 4 and Corollary 1, we can obtain an explicit representation of the stable periodic trajectories of linear networks with any periodic background input. In fact, the analysis of the network with a periodic background input is quite different from that with a constant background. In Theorem 4 and Corollary 1 the cosine and sine background inputs are not required to be orthogonal to V_{λ_1}, and the stable periodic trajectories are obtained in the same way; so whether or not the background input is orthogonal to V_{λ_1}, the network possesses stable periodic trajectories. Let us see some examples.

Consider the two-dimensional network

$$\dot{x}(t) + x(t) = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} x(t) + \begin{pmatrix} \cos(t) \\ \sin(t) \end{pmatrix} \tag{10}$$

for t ≥ 0. Here the background is not orthogonal to V_{λ_1}. The network possesses infinitely many periodic trajectories; in order to see them clearly, only five trajectories are plotted in Fig. 6. The curves in different colors are stable periodic trajectories of the network, and the black line is the 1-D V_{λ_1} in the 2-D space. All the periodic trajectories oscillate back and forth orthogonally to V_{λ_1}, and we can see that the center of every periodic solution is on V_{λ_1}.

Fig. 6. Periodic trajectories of model (10). The black line is V_{λ_1}. The open circles are five randomly selected initial points, and the curves in different colors starting from these initial points are stable periodic trajectories of the network.
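A sketch of model (10) (assuming numpy and forward-Euler integration). Because the background (cos t, sin t)^T is not orthogonal to V_{λ_1}, the settled orbits are closed curves rather than points, but, as stated above, the center of every periodic solution still lies on V_{λ_1}; the code estimates each center as the time average over one period of the settled orbit.

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])    # background of model (10), not orthogonal to V_{lambda_1}

def run(x0, T=60.0, dt=1e-3):
    x, t, xs = np.array(x0, dtype=float), 0.0, []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ x + b(t))
        t += dt
        xs.append(x.copy())
    return np.array(xs)

rng = np.random.default_rng(2)
period_steps = int(2 * np.pi / 1e-3)
for x0 in rng.uniform(-3.0, 3.0, size=(5, 2)):
    orbit = run(x0)[-period_steps:]            # one period of the settled orbit
    center = orbit.mean(axis=0)
    # a center on V_{lambda_1} = span{(1,1)} has equal components
    print("center", center, "difference of components", center[0] - center[1])
```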

Consider another 3-D network

$$\dot{x}(t) + x(t) = W x(t) + \begin{pmatrix} \cos(t) \\ \cos(t) \\ \cos(t) \end{pmatrix} \tag{11}$$

for t ≥ 0, with a symmetric connection matrix W. Clearly

$$b(t) = (\cos(t), \cos(t), \cos(t))^{T}$$

is a periodic vector function. It can be checked that W has the largest eigenvalue λ_1 = 1 with multiplicity 1 and another eigenvalue λ_2 = 0. By Theorem 4, the network possesses infinitely many periodic trajectories. The black line in Fig. 7 is the 1-D V_{λ_1} in the 3-D space. The open circles are five randomly selected initial points, and the curves in different colors starting from these initial points are stable periodic trajectories of the network. All the periodic trajectories oscillate back and forth orthogonally to V_{λ_1}, and the center of every periodic solution is on V_{λ_1}. Although the periodic trajectories look as if they lie on a cylinder, they are actually located on a plane, as can be seen clearly in Fig. 8.

Fig. 7. Periodic trajectories of model (11). The black line is V_{λ_1} in the 3-D space. The open circles are five randomly selected initial points, and the curves in different colors starting from these initial points are stable periodic trajectories of the network.

Fig. 8. Periodic trajectories of model (11) from another view. All the periodic trajectories of model (11) are located on a plane.

Fig. 9. Five elliptical periodic trajectories of model (12). The gray plane is V_{λ_1}.

Fig. 10. Periodic trajectories of model (12) and V_{λ_1} from another view. The foci of every elliptical trajectory are on the gray plane.

Consider another 3-D network

$$\dot{x}(t) + x(t) = W x(t) + b(t) \tag{12}$$

for t ≥ 0, whose symmetric connection matrix W again has largest eigenvalue λ_1 = 1 with multiplicity 2 and whose background input b(t) is a cosine vector function. By Theorem 4, the network possesses infinitely many stable periodic trajectories, and each trajectory is an ellipse. The gray plane in Fig. 9 is the 2-D V_{λ_1}; V_{λ_1} is spanned by the two eigenvectors of the connection matrix associated with the largest eigenvalue 1. Five trajectories starting from the selected initial points oscillate back and forth along the normal direction of V_{λ_1}. It is interesting that the foci of every elliptical trajectory lie on V_{λ_1}, as shown in Fig. 10.

In all the above examples the periodic trajectories may be lines or ellipses, and the trajectories may be located in part of a 2-D state space, in a low-dimensional plane in 3-D space, or in the whole high-dimensional state space. There may also be other kinds of trajectories. What does not change is that all the trajectories are located in an invariant set which is orthogonal to V_{λ_1}. V_{λ_1} depends on the connection matrix W; the connection matrix and the background input together determine the form of the periodic trajectories.

Some basic theory of continuous attractor neural networks with oscillating background tuning has been developed in this section. One example of an application of this theory is manifold learning, a popular approach for finding a low-dimensional basis that describes high-dimensional data. Seung and Lee [16] have shown that, as faces are rotated, they trace out continuous curves embedded in image space. These curves correspond to the continuous periodic trajectories, and the rotated faces correspond to the oscillating background inputs. The curves are continuous because the images vary smoothly as the faces are rotated; they are curves because they are generated by varying a single degree of freedom, the angle of rotation. Moreover, these curves are low-dimensional, even though they are embedded in image space, whose dimensionality equals the number of image pixels. In fact, memories of the face patterns are stored in the low-dimensional continuous curves. The connection between such neural manifolds and image manifolds helps us to understand memory storage in the brain. More applications of this theory may be discovered in the future.

5. Conclusions

In this work we studied the dynamical properties of continuous attractor neural networks with background tuning. Using a linear neural network model, we showed how the network relaxes from its initial conditions onto a continuous attractor, and we studied how the continuous attractor shifts when the background input is tuned. Two background tuning mechanisms were investigated in detail. First, when the background is constant and is changed smoothly, the continuous attractor is shifted continuously to another continuous attractor. Second, explicit representations of the periodic trajectories were given for periodic oscillating background inputs. These results deepen the understanding of continuous attractor neural networks, and the method given in this paper may be further developed for other network models.

References [1] O. Barak, M. Tsodyks, Persistent activity in neural networks with dynamic synapses, PLoS Comput. Biol. 3 (2) (2007) 0323–0332. [2] C.D. Brody, R. Romo, A. Kepecs, Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors, and dynamic representations, Curr. Opin. Neurobiol. 13 (2003) 204–211. [3] G. Mongillo, O. Barak, M. Tsodyks, Synaptic theory of working memory, Science 319 (2008) 1543–1546. [4] H.S. Seung, Continuous attractors and oculomotor control, Neural Networks 11 (1998) 1253–1258. [5] K.C. Zhang, Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory, J. Neurosci. 16 (1996) 2112–2126. [6] D.J. Amit, Modeling Brain Function: The World of Attractor Neural Networks, Cambridge University Press, Cambridge, 1989. [7] E.T. Rolls, An attractor network in the hippocampus: theory and neurophysiology, Learn. Mem. 14 (2007) 714–731. [8] Z. Yi, L. Zhang, J. Yu, K.K. Tan, Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks, IEEE Trans. Neural Networks 20 (2009) 952–963. [9] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79 (1982) 2554–2558. [10] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. U.S.A. 81 (10) (1984) 3088–3092. [11] R. Mueller, A.V.M. Herz, Content-addressable memory with spiking neurons, Phys. Rev. E 59 (3) (1999) 3330–3338. [12] Y. Aviel, D. Horn, A. Abeles, Memory capacity of balanced networks, Neural Comput. 17 (2005) 691–713.


[13] C.K. Machens, C.D. Brody, Design of continuous attractor networks with monotonic tuning using a symmetry principle, Neural Comput. 20 (2008) 452–485. [14] P. Miller, Analysis of spike statistics in neuronal systems with continuous attractors or multiple, discrete attractor states, Neural Comput. 18 (2006) 1268–1317. [15] H.S. Seung, How the brain keeps the eyes still, Proc. Natl. Acad. Sci. U.S.A. 93 (1996) 13339–13344. [16] H.S. Seung, D.D. Lee, The manifold ways of perception, Science 290 (2000) 2268–2269. [17] S.M. Stringer, E.T. Rolls, T.P. Trappenberg, I.E.T. Araujo, Self-organizing continuous attractor networks and motor function, Neural Networks 16 (2003) 161–182. [18] D.A. Robinson, Integrating with neurons, Ann. Rev. Neurosci. 12 (1989) 33–45. [19] A. Koulakov, S. Raghavachari, A. Kepecs, J.E. Lisman, Model for a robust neural integrator, Nat. Neurosci. 5 (8) (2002) 775–782. [20] S.M. Stringer, T.P. Trappenberg, E.T. Rolls, I.E.T. de Araujo, Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells, Network: Comput. Neural Syst. 13 (2002) 217–242. [21] A. Samsonovich, B.L. McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17 (1997) 5900–5920. [22] A. Pouget, P. Dayan, R. Zemel, Information processing with population codes, Nat. Rev. Neurosci. 1 (2000) 125–132. [23] S. Wu, K. Hamaguchi, S. Amari, Dynamics and computation of continuous attractors, Neural Comput. 20 (2008) 994–1025. [24] S. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern. 27 (2) (1977) 77–87. [25] S. Romani, M. Tsodyks, Continuous attractors with morphed/correlated maps, PLoS Comput. Biol. 6 (8) (2010) 1–19. [26] L. Zou, H. Tang, K.C. Tan, W. Zhang, Analysis of continuous attractors for 2-D linear threshold neural networks, IEEE Trans. Neural Networks 20 (2009) 175–180. [27] C.C.A. Fung, K.Y.M. Wong, S. Wu, A moving bump in a continuous manifold: a comprehensive study of the tracking dynamics of continuous attractor neural networks, Neural Comput. 22 (2010) 752–792. [28] S. Wu, S. Amari, Computing with continuous attractors: stability and online aspects, Neural Comput. 17 (2005) 2215–2239. [29] W. Becker, H.M. Klein, Accuracy of saccadic eye movements and maintenance of eccentric eye positions in the dark, Vis. Res. 13 (1973) 1021–1034. [30] K. Hess, H. Reisine, M. Dursteler, Normal eye drift and saccadic drift correction in darkness, Neuro-Ophthalmology 5 (1985) 247–252. [31] E. Salinas, Background synaptic activity as a switch between dynamical states in a network, Neural Comput. 15 (2003) 1439–1475. [32] A. Bragin, G. Jando, Z. Nadasdy, J. Hetke, K. Wise, G. Buzsaki, Gamma (40–100 Hz) oscillation in the hippocampus of the behaving rat, J. Neurosci. 15 (1995) 47–60. [33] A. Alonso, J.M. Gaztelu, W. Bunvo, E. Garcı´a-Austt, Cross-correlation analysis of septohippocampal neurons during theta-rhythm, Brain Res. 413 (1987) 135–146. [34] U. Rutishauser, I.B. Ross, A.N. Mamelak, E.M. Schuman, Human memory strength is predicted by theta-frequency phase-locking of single neurons, Nature 464 (2010) 903–907. [35] J. Yu, Z. Yi, L. Zhang, Representations of continuous attractors of recurrent neural networks, IEEE Trans. Neural Networks 20 (2009) 368–372. [36] J. Yu, Z. Yi, J. Zhou, Continuous attractors of Lotka–Volterra recurrent neural networks with infinite neurons, IEEE Trans. Neural Networks 21 (2010) 1690–1695. [37] J. Yu, Z. Yi, andL. 
Zhang, Periodicity of a class of nonlinear fuzzy systems with delays, Chaos, Solitons Fractals 40 (2009) 1343–1351.

Jiali Yu received the Master degree in Mathematics from University of Electronic Science and Technology of China, Chengdu, China, in 2003. She received the Ph.D. degree in Computer Science from University of Electronic Science and Technology of China, Chengdu, China, in 2009. She is currently a Research Scientist with the Institute for Infocomm Research, Singapore. Her current research interests include neural networks, continuous attractors and cognitive memory.


Huajin Tang received the B.Eng. degree from Zhejiang University, Hangzhou, China, the M.Eng. degree from Shanghai Jiao Tong University, Shanghai, China, and the Ph.D. degree in Electrical and Computer Engineering from the National University of Singapore, Singapore, in 1998, 2001, and 2005, respectively. He was a Research and Development Engineer with STMicroelectronics, Singapore, from 2004 to 2006. From 2006 to 2008, he was a Post-Doctoral Fellow with Queensland Brain Institute, University of Queensland, Australia. He is currently a Research Scientist with the Institute for Infocomm Research, Singapore. He has published one monograph (Springer-Verlag, 2007) and over 20 international journal papers. His current research interests include neural computation, machine learning, neuromorphic systems, computational and biological intelligence, and neuro-cognitive robotics. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems.

Haizhou Li is currently a Principal Scientist and Department Head of the Human Language Technology at the Institute for Infocomm Research, Singapore. He is also a conjoint Professor at the School of Electrical Engineering and Telecommunications, University of New South Wales, Australia. Li has worked on speech and language technology in academia and industry since 1988. He taught in the University of Hong Kong (1988–1990), South China University of Technology (1990–1994), and Nanyang Technological University (2006–). He was a Visiting Professor at CRIN/INRIA in France (1994–1995). As a technologist, he was appointed as Research Manager in Apple-ISS Research Centre (1996–1998), Research Director in Lernout & Hauspie Asia Pacific (1999–2001), and Vice President in InfoTalk Corp. Ltd. (2001–2003). Li’s research interests include automatic speech recognition, natural language processing and information retrieval. He has published over 200 technical papers in international journals and conferences. He holds five international patents. Li now serves as an Associate Editor of IEEE Transactions on Audio, Speech and Language Processing, ACM Transactions on Speech and Language Processing, and Springer International Journal of Social Robotics. He is an elected Board Member of the International Speech Communication Association (ISCA, 2009–2013). He was appointed the General Chair of ACL 2012 and INTERSPEECH 2014. He was the recipient of National Infocomm Award of Singapore in 2001. He was named one of the two Nokia Professors 2009 by Nokia Foundation in recognition of his contribution to speaker and language recognition technologies.

Luping Shi received Doctorate of science in Cologne university, Germany in 1992. He joined Data Storage Institute Singapore (DSI) in 1996 and now is a senior scientist, division manager of Optical Materials & System division. His research areas include nonvolatile solid-state memory, optical data storage, integrated opto-electronics, nanoscience, and artificial cognitive memory and sensor. He has authored and co-coauthored four book chapters, more than 200 scientific papers and more than 40 invited talks. He is the recipient of the National Technology Award 2004 Singapore.