CHAOS 13 (1), 327-334 (2003) (Focus Issue: Control and Synchronization in Chaotic Dynamical Systems, Edited by Jürgen Kurths, S. Boccaletti, C. Grebogi, and Y.-C. Lai)
Synchronization of reconstructed dynamical systems

H. U. Voss
Freiburg Center for Data Analysis and Modeling (FDM), Eckerstr. 1, 79104 Freiburg, Germany

Abstract. The problem of constructing systems that synchronize to observed signals is approached from a data-driven perspective, in which it is assumed that neither the drive nor the response systems are known explicitly but have to be derived from the observations. The response systems are modeled by applying standard methods of nonlinear time series analysis to sections of the driving signals. As a result, synchronization is more robust than might be expected, given that the reconstructed systems are only approximations of the unknown true systems. Successful synchronization may also be accomplished in cases where the driving signals result from nonlinearly transformed chaotic states. The method is readily extended and applied to limited real-time predictions of chaotic signals.
Research on chaotic synchronization is focused mainly on the situation in which the synchronizing systems are given by known models, like coupled ordinary differential equations or iterative maps. For technical applications and a better understanding of natural synchronization phenomena, however, the following situation is also worth investigating: Given a chaotic signal of unknown origin, find a system that synchronizes with that signal. It is shown that well-known methods of nonlinear time series analysis can be used to reconstruct chaotically synchronizing systems from observations, if only scalar chaotic signals of a-priori unknown systems are provided. Taking into account that synchronization of two systems happens in real-time, i.e., without any computation required, this setup may be a useful one for the construction or understanding of custom-fit dynamical systems that synchronize with received signals. By employing coupling schemes with a memory, it is also possible to perform a limited real-time prediction of the chaotic signal. Thinking again of custom-fit dynamical systems, this would allow for a prediction of signals without any explicit computation involved.
INTRODUCTION

Since the discovery of synchronization in coupled chaotic systems [1, 2, 3], scientists have been thinking about useful applications of the theory of synchronization [4, 5, 6, 7, 8, 9]. A recently emerged branch related to applications focuses on the problem of "synchronization synthesis," or the search for strategies to obtain a stably synchronizing response system, given a chaotic driving signal. This search comprises the construction of improved coupling schemes [10, 11, 12, 13, 14] and the stabilization of the synchronization manifold [15, 16, 17, 18] by means of modifying the response system [19, 20]; methods related to the chaos control problem [21, 22, 23, 24, 25, 26, 27, 28]. Further, robustness properties with respect to noise in the coupling signal have been investigated and have led to improved methods [29, 30], and the behavior of parametrically mismatched systems has been studied [31, 32, 33]. In view of possible applications of synchrony, in this contribution only one-way coupling from a drive to a response system is considered, and the idea of synchronization between two dynamical systems is loosened somewhat. In particular, it will be assumed that the drive system is unknown and only a scalar signal of it is observed. One can think of two coupled pendula; each pendulum may synchronize with the other one, without "knowing" the other pendulum as a system but only by receiving its signal [34]. If the drive system is not known, a proxy model of it needs to be reconstructed from the driving signal. Under certain conditions, this approximative model then synchronizes with the driving signal, including signals that have not been used for deriving the model. Thereby it is assumed that the response remains an autonomous system if the connection to the drive system is cut. Only then can the response maintain its dynamics if the driving signal is not received all the time or is disturbed by noise.
The outline of this paper is as follows: First, the notion of synchronization of dynamical systems reconstructed from data is detailed further. Then a way of deriving an approximative model is described, and ways of coupling the model to the driving signal are discussed. The method is shown to be applicable to nonlinearly transformed signals, for which an explicit drive system could not be found anyway, and, finally, real-time prediction under this scheme is discussed briefly. A discussion ends the paper.
SYNCHRONIZATION WITH OBSERVED DATA

Consider two dynamical systems that are coupled in a unidirectional way, where one system, the drive, produces a scalar signal that is injected into the other system, the response. The drive system is assumed to be a time-continuous autonomous generic dynamical system

  \dot{\tilde{x}} = \tilde{f}(\tilde{x}),   (1)

with a D_{\tilde{x}}-dimensional real state vector \tilde{x} and a smooth function \tilde{f}: R^{D_{\tilde{x}}} \to R^{D_{\tilde{x}}} generating the flow of the system. The scalar signal injected into the response system is assumed to result from a smooth and generic observation function h: R^{D_{\tilde{x}}} \to R, mapping the state vector \tilde{x} to a scalar quantity x:

  x = h(\tilde{x}).   (2)

The value of x depends continuously on time, but in the following it is assumed that the observations x can only be sampled at discrete instants of time, yielding a time series x_t (t = 1, ...). Sampling starts at a time when the system has relaxed to its invariant density (e.g., a chaotic attractor). The response system is assumed to have the time-discrete form

  y_{t+1} = f(y_t, x_t),   (3)
where the nonlinear function f: R^{D_y} \to R^{D_y} maps the states y onto states at the next time step, depending on the parameter x_t, the driving signal. Under this setup, the goal is the following: Without knowing anything about the driving system except a limited portion of the observed signal x_t (t = 1, ..., N), find a response system (3) that synchronizes with the driving signal x_t for all future times t > N. It may sound unfamiliar that synchronization between a signal and a system is considered, rather than synchronization between two systems. This is justified by the fact that for the case of unidirectional coupling considered here the response system cannot affect the drive system; therefore, the drive system itself is not important for the subsequent analysis. To find a synchronizing response system, there are several options: (i) After some analysis of a part of the signal x_t, guess which system produced it and use an identical one, which is then coupled to that signal. Since this method is not general enough in view of real applications, and it can be rather sensitive to small system mismatches, it will not be followed here.
(ii) Guess the form of the drive system and fit its unknown parameters to the observed signal, using methods for estimating coefficients in ordinary differential equations or maps. This is also not a useful option here, since not all systems are identifiable from a scalar output alone, and the process of fitting coefficients can be rather time consuming and computationally involved. (iii) Since it is known that the driving signal stems from a dynamical system, one can apply, after an attractor reconstruction by embedding, methods of nonlinear time series modeling to yield an approximative map

  y_{t+1} = f(y_t).   (4)
Under certain conditions, this map then reproduces the invariant density of the reconstructed drive attractor. The question remains whether this reconstructed system is also able to synchronize with the observed signal, if suitably coupled to it. Due to its wide applicability, the third approach is elaborated on in the following. Later on it will be found that despite the fact that Eq. (3) is in general a completely different system than the real unknown drive system (1), it may synchronize to the driving signal if suitably coupled. Further, it will be found that synchronization may be quite robust, considering that the reconstructed system is an approximation obtained from only a finite, scalar, and nonlinearly transformed time-discrete data sample of the driving system. Since the response system is not given in a closed form, a further analysis of the stability of the synchronization manifold cannot easily be undertaken, and to gain confidence in the results, numerical simulations will be performed.
DERIVING A MODEL FROM OBSERVATIONS

The first step in finding an approximative response system consists of an attractor reconstruction from a finite scalar data sample x_t (t = 1, ..., N) of the driving signal by a delay embedding [35, 36]. This yields embedding vectors x_t = (x_t, x_{t-1}, ..., x_{t-(D-1)}) (t = D, ..., N). By virtue of well-known embedding theorems [36, 37], embedding a scalar signal into a higher-dimensional space generically provides a one-to-one correspondence between states of system (1) and vectors in the embedding space, if for system (1) the conditions specified above hold and the embedding dimension D is larger than twice the attractor dimension (which is smaller than or equal to D_{\tilde{x}} and may be fractal).
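As a concrete illustration of this first step, the delay embedding can be sketched in a few lines (a minimal sketch; the function name and the toy series are only illustrative, not part of the paper):

```python
import numpy as np

def delay_embed(x, D):
    """Form the delay-embedding vectors (x_t, x_{t-1}, ..., x_{t-(D-1)})
    from a scalar time series x; one row per admissible time index t."""
    N = len(x)
    return np.column_stack([x[D - 1 - j : N - j] for j in range(D)])

# toy series 0, 1, ..., 9: the first row corresponds to t = D
X = delay_embed(np.arange(10.0), D=3)   # first row is (2, 1, 0)
```

Each row is one point of the reconstructed attractor; the number of rows is N - D + 1.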
The second step is to find a dynamical model for the reconstructed attractor. This is done by utilizing the observed statistical dependence between the embedding vectors of consecutive time steps. In practice, it amounts to estimating a system that has optimal prediction power for this signal. In other words, that function f is searched for which minimizes the mean squared error between the observed signal x_t and its prediction given by

  \hat{x}_{t+1} = f(x_t).   (5)

This relation can be used for several purposes: prediction and interpolation of nonobserved data points, noise reduction, and dynamical modeling. In dynamical modeling, the estimated function f is used in the map f: R^D \to R^D applied to a state y in the reconstructed phase space,

  f(y) = (f(y_1, ..., y_D), y_1, ..., y_{D-1})^T.   (6)
With this map, a model of the dynamical system is then given by Eq. (4). Since only the first component of y_{t+1} is needed (during iteration the other components of y_{t+1} are obtained as the time-delayed first vector components), Eq. (4) is usually written in the more compact notation

  y_{t+1} = f(y_t).   (7)

In case of a successful reconstruction, the attractor produced by system (7) mimics the invariant density of the reconstructed attractor of the signal x_t. The state variable has been denoted by y since Eq. (4) or Eq. (7) will be coupled to the driving signal to obtain the response model later on. There are many different approaches to represent and estimate the function f [38, 39]. Here, the function will be represented by a sum of radial basis functions, which were introduced into dynamical modeling by Broomhead and Lowe [40]. Besides their good statistical modeling performance, radial basis functions are chosen because the numerical procedure to fit a function represented by a sum of them is particularly simple; a least-squares fit of the weighting coefficients of the terms in the sum will do. Hence, the function f is written as

  f(x) = \alpha_0 + \sum_{i=1}^{k} \alpha_i \Phi_i(\|x - c_i\|).   (8)

The radial basis functions \Phi_i(\cdot) depend on a scalar, which in Eq. (8) is chosen to be the distance \|\cdot\| between the embedding vectors x and a set of k center vectors c_i (i = 1, ..., k). These are a small representational sample from the set of embedding vectors. The radial basis functions are given here by Gaussian bell curves

  \Phi(r) = \exp(-r^2 / (2\sigma)).   (9)
The normalization factors are omitted since they can be absorbed by the coefficients in Eq. (8). To estimate the map f from the observed data x_t (t = 1, ..., N), the coefficients \alpha_i (i = 0, ..., k) are estimated using a least-squares fit [41] for fixed \sigma, k, and D. This ensures optimal prediction performance in a least-squares sense as well. The values of the latter three parameters are fixed by the following consideration: In order to obtain good synchronization performance, the reconstructed model (4) should be expected to have, as closely as possible, the same invariant density as the set of points obtained from the attractor reconstruction of the time series. Therefore, the system (4) is numerically simulated with different choices of \sigma, k, and D, and the results are compared with the phase space reconstruction of the time series. Rather than developing an objective measure to quantify the similarity between the simulated invariant density and the phase space reconstruction, which is a highly nontrivial problem of its own, in the examples that follow this comparison is done by visual inspection. It is observed anyway that under moderate variations of the optimal values of these parameters the results change only slightly. As one of three examples that will be used throughout the paper, the chaotic Rössler oscillator [42] is modeled by this approach. It is given by

  \dot{\tilde{x}}_1 = -\tilde{x}_2 - \tilde{x}_3,
  \dot{\tilde{x}}_2 = \tilde{x}_1 + a\tilde{x}_2,
  \dot{\tilde{x}}_3 = b + \tilde{x}_3(\tilde{x}_1 - c),   (10)
with the coefficients a = 0.15, b = 0.2, and c = 10. The system is numerically integrated with a sufficiently small time step, and the observation function is defined to select the first component of \tilde{x}, i.e., x_t = h(\tilde{x}_t) = \tilde{x}_{1,t}. These values are resampled with a larger time step, yielding the observation time series x_t (t = 1, ..., N). The sampling time step is set to unity (as in Eq. (4)) in what follows and corresponds to a true time step of 0.1 in the model (10). This learning data set is used for estimating the coefficients in Eq. (8) to obtain the model (4) (Fig. 1).
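The generation of such a learning data set can be sketched as follows. The coefficients and the 0.1 resampling step are those quoted above; the integration step, initial condition, and transient length are assumptions of this sketch (the paper only requires a "sufficiently small" step):

```python
import numpy as np

def rossler_series(n, a=0.15, b=0.2, c=10.0, dt=0.01, sample=0.1,
                   transient=5000):
    """Integrate the Rössler system (10) with a fourth-order Runge-Kutta
    step dt, discard a transient, and record the first component every
    0.1 time units, mimicking x_t = h(x~_t) = x~_{1,t}."""
    def f(x):
        return np.array([-x[1] - x[2], x[0] + a * x[1], b + x[2] * (x[0] - c)])
    x = np.array([1.0, 1.0, 1.0])
    every = int(round(sample / dt))     # integration steps per observation
    out = []
    for i in range(transient + n * every):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= transient and (i - transient) % every == 0:
            out.append(x[0])
    xs = np.array(out)
    return (xs - xs.mean()) / xs.std()  # normalized, as noted in Fig. 1
```

The final normalization to zero mean and unit variance follows the remark in the caption of Fig. 1.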
FIGURE 1. (a) Data set used for deriving the model (4). The resulting time step after resampling is 0.1, corresponding to a time step of 1 in the used maps and the plot, yielding a total of N = 1000 data points. (To obtain a more stable least-squares fit, and to generalize the usability of the fitting parameter σ, the data are linearly transformed to have mean zero and variance one.) (b) Reconstructed Rössler attractor from the learning data set in (a). (c) Model of the Rössler data given in (a), using model (4). The parameters used in Eqs. (8) and (9) for estimating the model coefficients are k = 28, σ = 2.3, and D = 10. The center vectors ci (i = 1, . . . , k) are chosen to be equidistantly distributed over the time series, i.e., their locations are not optimized further. This rule applies for all examples in this paper.
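The least-squares fit of the coefficients \alpha_i in Eq. (8), with Gaussian basis functions (9) and centers chosen equidistantly over the data as described in the caption of Fig. 1, might be sketched as follows (a non-authoritative sketch; the helper names and the sinusoidal test signal are illustrative choices, not the Rössler data of the paper):

```python
import numpy as np

def fit_rbf(X, y, k, sigma):
    """Least-squares fit of f(x) = a0 + sum_i a_i exp(-||x - c_i||^2 / (2 sigma)),
    Eqs. (8) and (9), with k centers spread equidistantly over the data."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    # design matrix: constant term plus one Gaussian bump per center
    r2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.hstack([np.ones((len(X), 1)), np.exp(-r2 / (2.0 * sigma))])
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    return centers, alpha

def rbf_predict(x, centers, alpha, sigma):
    """Evaluate f(x) of Eq. (8) at a single embedding vector x."""
    phi = np.exp(-((centers - x) ** 2).sum(-1) / (2.0 * sigma))
    return alpha[0] + phi @ alpha[1:]

# illustrative check on a sinusoid, embedded with D = 2
t = np.arange(200)
s = np.sin(0.3 * t)
X = np.column_stack([s[1:-1], s[:-2]])   # embedding vectors (x_t, x_{t-1})
y = s[2:]                                # one-step-ahead targets x_{t+1}
centers, alpha = fit_rbf(X, y, k=20, sigma=1.0)
```

Iterating rbf_predict on its own output, shifted into the delay vector as in Eq. (6), yields the free-running model (7).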
COUPLING THE MODEL TO A DRIVE SYSTEM

Having estimated an approximative reconstructed model (4) from a segment of observations, it can now be coupled to the unknown system that produced these data. The aim is to obtain synchronization with this system for arbitrarily long time spans. There are at least two ways to inject a signal x_t into the approximative model (4), which is now used as the response system with state variable y_t in the form of Eq. (3).
(i) Complete replacement coupling: The emitted signal x_t of the drive is injected into the response system (4) via

  y_{t+1} = f(y_t, x_t) = (f(x_t, y_{2,t}, ..., y_{D,t}), y_{1,t}, ..., y_{D-1,t})^T.   (11)

(ii) Dissipative (or diffusive) coupling:

  y_{t+1} = f(y_t, x_t) = (f(y_{1,t}, ..., y_{D,t}) + K(x_t - y_{1,t}), y_{1,t}, ..., y_{D-1,t})^T,   (12)
with a coupling constant K > 0. Since the driving signal is assumed to be a scalar, this is already the most general linear dissipative coupling scheme possible. One could also inject complete embedding vectors; but then one simply approaches the prediction scheme (5), with \hat{x}_{t+1} replaced by y_{t+1}, and the response system would require some memory. In this case, one could not strictly speak of synchronization any longer [43], and we do not adhere to that coupling scheme. The other extreme case (injecting no signal at all) amounts to the attractor reconstruction model (4). As a measure for the quality of synchronization, the sample correlation coefficient between the normalized drive and response signals is used, i.e.,

  R(x, y) = \frac{1}{M} \sum_{t=1}^{M} \bar{x}_t \bar{y}_t,   (13)
where the bar denotes normalization to mean zero and variance one, and M is the number of time steps taken into account. Both coupling configurations are studied numerically for the Rössler oscillator. For complete replacement coupling it is found that either of the first two Rössler components can be used to drive the response system, yielding identical synchronization. The observed dynamics appears to be always in close proximity to the synchronization manifold x_t = y_t, displaying, therefore, almost identical synchronization (Fig. 2). It cannot be expected that R(x, y) attains unity, since the reconstructed model is only an approximation of the underlying system in reconstructed phase space. The reason for the success is that the approximative model constitutes a way of augmenting the phase space of the
FIGURE 2. (a) The synchronization manifold as adopted by complete replacement coupling to the driving signal, using model (11). The driving signal is the first component of the Rössler oscillator. It consists only of data which have not been used for deriving the model (one could speak of "out-of-sample synchronization" in analogy to prediction). The correlation coefficient is R(x, y) = 0.9998. (b) The same for dissipative coupling with K = 1, using model (12). The correlation coefficient is R(x, y) = 0.9999, and in general is observed to be larger than 0.9990 for K = 0.3, ..., 1.3. For larger K the response system becomes unstable. The same approximative model parameters as in Fig. 1 are used.
driven system (note that here it is of dimension D = 10 rather than three), which can stabilize the dynamics on the synchronization manifold [25, 44].
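A single iteration of each coupling scheme, together with the synchronization measure (13), can be sketched as follows (a minimal sketch; the model map f used below is an illustrative placeholder, not the fitted RBF model):

```python
import numpy as np

def replacement_step(y, x_t, f):
    """One step of complete replacement coupling, Eq. (11):
    the drive sample x_t replaces y_1 inside the model map f."""
    y_new = np.empty_like(y)
    y_new[0] = f(np.concatenate(([x_t], y[1:])))
    y_new[1:] = y[:-1]          # the rest of the state shifts down by one
    return y_new

def dissipative_step(y, x_t, f, K):
    """One step of dissipative coupling, Eq. (12):
    the free iteration f(y) is corrected by K * (x_t - y_1)."""
    y_new = np.empty_like(y)
    y_new[0] = f(y) + K * (x_t - y[0])
    y_new[1:] = y[:-1]
    return y_new

def sync_correlation(x, y):
    """Synchronization measure R(x, y) of Eq. (13): mean pointwise product
    after normalization to zero mean and unit variance."""
    xb = (x - x.mean()) / x.std()
    yb = (y - y.mean()) / y.std()
    return float(np.mean(xb * yb))

# toy usage with an illustrative linear model map
f = lambda v: 0.5 * v[0]
y = np.array([1.0, 2.0, 3.0])
y_repl = replacement_step(y, 2.0, f)         # -> [1.0, 1.0, 2.0]
y_diss = dissipative_step(y, 2.0, f, K=1.0)  # -> [1.5, 1.0, 2.0]
```

Iterating either step along an observed signal x_t and comparing x with the first component of y via sync_correlation reproduces the kind of measurement reported in Fig. 2.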
NONLINEAR TRANSFORMATIONS AND FURTHER EXAMPLES

Qualitatively similar results to those above have been obtained for the Lorenz model [45]. If the model is derived from clean data, but the response system is driven by noise-disturbed data using dissipative coupling, the following is obtained: Even for high levels of white Gaussian noise added to the driving signal (e.g., with a standard deviation of 30% of the learning data set's standard deviation), R(x, y) ≈ 0.8 for the Rössler model and R(x, y) ≈ 0.9 for the Lorenz model. In the latter case, no occasions of "outbreaks" are observed, where the driving signal spirals around one of the two repellers in the Lorenz phase space while the response spirals around the other one. Again with applications in mind, it could turn out to be useful that the driving signal does not need to be the immediately emitted signal of the drive system. Rather, by virtue of the embedding theorems used in the attractor reconstruction, smooth nonlinear observation functions h(\tilde{x}) can also be applied to the signals, maintaining attractor reconstruction and, therefore, dynamical modeling as well. In the case of a non-unique
FIGURE 3. (a) The reconstructed attractor of a nonlinearly transformed Rössler system with a noninvertible transformation yielding back-folding. 2000 data points are shown, but for deriving the model 10000 data points are used. (b) The synchronization manifold as adopted by dissipative coupling with K = 1. The correlation coefficient is R(x, y) = 0.9997. The respective one for complete replacement coupling is R(x, y) = 0.9998. (c) Section from the driving signal and the synchronized response signal (dashed line). (Parameters used: k = 92, σ = 2.3, and D = 14.)
inverse observation function, the attractor is folded back onto itself. This can cause self-intersections if the embedding dimension is too small. It was shown recently that prediction via Eq. (4) suffers only on a small subset of the attractor even if the embedding dimension is too small for a complete unfolding of the attractor [46]. This result, together with the fact that back-folding can be compensated by a larger embedding dimension, makes it probable that synchronization is possible also in this case, but one would probably need more data for deriving the model than in the examples provided up to now. A result for the Rössler oscillator with the non-invertible observation function

  x_t = h(\tilde{x}_t) = \sin(\tilde{x}_{1,t}/6)   (14)

is given in Fig. 3. Up to now it was assumed that the underlying system is of the form of Eq. (1), in particular an ordinary differential equation. But the reconstruction theorems used here are ultimately based on geometric arguments [47]. The geometric object here is an attractor in the phase space of system (1), but it could also result from another kind of system. As an example, we consider a system given by a delay-differential equation. In this case, the equations of motion on the inertial manifold of the system cannot readily be derived from the system equation because the system's state space is infinite-dimensional. But as soon as the dynamics has relaxed to a lower-dimensional attractor, it should be possible to reconstruct this attractor by embedding again. This is performed for the example of the
FIGURE 4. (a) The reconstructed attractor of the Mackey-Glass system (15) and (b) the time series to derive the model, consisting of 1000 data points. (c) The synchronization manifold for dissipative coupling with K = 1. The correlation coefficient is R(x, y) = 0.9999. The respective one for complete replacement coupling is R(x, y) = 0.9977. (Parameters used: k = 34, σ = 3.8, and D = 22.)
well-known Mackey-Glass equation [48], whose chaotic dynamics has been investigated numerically by Farmer [49]. It is given by

  \dot{x}(t) = -b x(t) + \frac{a x(t-\tau)}{1 + x(t-\tau)^c},   (15)
with a = 0.2, b = 0.1, c = 10, and a delay of τ = 18. This time delay causes a suitably low-dimensional chaotic attractor with a box-counting dimension somewhere between two and three. This system is numerically simulated and then resampled to a time step of one (i.e., τ = 18 then corresponds to 18 data samples), the resulting attractor and time series being shown in Figs. 4a,b. As expected, the system can be reconstructed well, and it even yields the best result of all examples considered in this paper for synchronization with dissipative coupling (Fig. 4c).
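The simulation and resampling of the delay equation (15) can be sketched with a history buffer for the delayed state (a sketch under the stated parameter values; the Euler step and the constant initial history are assumptions, as the integration scheme is not specified in the text):

```python
import numpy as np

def mackey_glass(n, a=0.2, b=0.1, c=10.0, tau=18.0, h=0.1, transient=5000):
    """Simulate the Mackey-Glass equation (15) with an Euler scheme of
    step h and a buffer holding x(t - tau), then resample to a time step
    of one so that tau = 18 corresponds to 18 data samples."""
    lag = int(round(tau / h))      # number of buffered steps for the delay
    every = int(round(1.0 / h))    # Euler steps per unit-time observation
    buf = [1.2] * (lag + 1)        # constant initial history (assumed)
    out = []
    for i in range(transient + n * every):
        x, x_tau = buf[-1], buf[0]   # current state and x(t - tau)
        buf.append(x + h * (-b * x + a * x_tau / (1.0 + x_tau ** c)))
        buf.pop(0)
        if i >= transient and (i - transient) % every == 0:
            out.append(buf[-1])
    return np.array(out)
```

The returned series plays the role of the 1000-point learning data set of Fig. 4b.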
REAL-TIME PREDICTION OF CHAOTIC SIGNALS

Finally, the methods described so far are applied to real-time predictions of chaotic signals [44]. A real-time prediction of a signal can be seen as an extension of the concept of synchronization and has been coined "anticipating synchronization" [50] or, in a wider sense, "achronal synchronization" [51]. The idea is to anticipate a chaotic driving signal in real-time, such that additional time is gained which can be used, e.g., for deciding about some action taken to prevent unwanted dynamics of the response
system [52]. Similar ideas have been proposed in control theory [53]. The applicability of this scheme has been demonstrated on an electronic circuit [54], and this kind of synchronization was found in other experimental and realistic numerical simulations of physical systems [55, 56, 57, 58, 59, 60, 51, 61]. Since anticipating synchronization is in general less stable than identical synchronization, it is tested whether, due to the approximative nature of the reconstructed system (4), this behavior immediately breaks down or can be maintained. For a numerical test, the same data and approximative model are used as above. Assume that the drive system is now given by

  x_{t+1} = f(x_t) = (f(x_{1,t}, ..., x_{D,t}), x_{1,t}, ..., x_{D-1,t})^T,   (16)
and the observation function again picks the first component of x_t to yield the driving signal x_t. The response system is

  y_{t+1} = f(y_t, x_t) = (f(y_{1,t}, ..., y_{D,t}) + K(x_t - y_{1,t-\tau}), y_{1,t}, ..., y_{D-1,t})^T,   (17)

in which, instead of the dissipative coupling term in Eq. (12), a time-delayed value of the response system's state is involved. The delay time \tau is fixed to a positive integer. The reason for this particular coupling scheme is the following: Synchronization can only occur if the trivial solution of the transversal system is at least locally attractive, because then the system can approach the synchronization manifold and stay on it. The transversal state is now defined a bit differently compared with the case of identical synchronization: It is defined by \Delta_t^{(\tau)} := x_t - y_{t-\tau}, because if \Delta_t^{(\tau)} = 0, then the system states are on the synchronization manifold

  x_t = y_{t-\tau}.   (18)

In this case, the state of the driven system, y_t, anticipates the drive system's state x_{t+\tau} (think of a time shift in Eq. (18) to yield x_{t+\tau} = y_t). That the solution \Delta_t^{(\tau)} = 0 is indeed a fixed point of the transversal system can be seen by a linearization around small transversal states: In the vicinity of the synchronization manifold, the transversal state evolves by the law

  \Delta_{t+1}^{(\tau)} = F(x_t, y_{t-\tau}) \Delta_t^{(\tau)} - K \Delta_{1,t-\tau}^{(\tau)},   (19)

where F is the linearization of f(x_t) - f(y_{t-\tau}) with respect to small \Delta_t^{(\tau)}. Obviously, for arbitrary integer time delays \tau, \Delta_t^{(\tau)} = 0 is a fixed point of this system, and therefore the anticipatory synchronization manifold (18) exists. An open question is the stability of synchronization, which will again be analyzed numerically for the example of the Mackey-Glass system (15). All parameters are the same as in the last section; the only difference is that rather than using the dissipative coupling scheme (12), now the anticipatory coupling scheme (17) is used. It is found that for small anticipation times the driving signal can be anticipated by the approximative model (17). The maximum anticipation time which can be attained is \tau = 5 (both in absolute time units and in the model (3)). The corresponding correlation coefficient is R = 0.9973 (Fig. 5). It should be noted that this anticipatory model has been derived from data identical to those used to derive the model with complete synchronization in the last section. The only difference lies in the application of this model, namely the different choice of coupling.
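The anticipatory scheme (17) differs from the dissipative one only in the bookkeeping of the delayed feedback term, which can be sketched with a short buffer of the response's first component (a sketch; the model map f and the zero initial state are placeholders for the fitted RBF model):

```python
import numpy as np
from collections import deque

def run_anticipating(x, f, D, K, tau):
    """Iterate the anticipatory coupling scheme (17) along a driving
    signal x: the correction compares the drive sample with the
    response's own first component delayed by tau steps, so that y_t
    ends up anticipating x_{t+tau}."""
    y = np.zeros(D)
    past = deque([0.0] * tau, maxlen=tau)   # y_{1,t-tau}, ..., y_{1,t-1}
    out = []
    for x_t in x:
        delayed = past[0]        # y_{1,t-tau}
        past.append(y[0])        # record y_{1,t} before stepping
        y_new = np.empty_like(y)
        y_new[0] = f(y) + K * (x_t - delayed)
        y_new[1:] = y[:-1]       # delay-line shift, as in Eq. (12)
        y = y_new
        out.append(y[0])
    return np.array(out)

# illustrative run with a toy linear model map (not the fitted model)
x = np.sin(0.1 * np.arange(50))
ys = run_anticipating(x, f=lambda v: 0.9 * v[0], D=3, K=0.5, tau=2)
```

Comparing ys[t] with x[t + tau], as in Fig. 5c, then quantifies the anticipation.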
DISCUSSION

It has been demonstrated that approximative models based on attractor reconstructions can synchronize with chaotic driving signals of a-priori unknown dynamical systems. Both dissipative and complete replacement coupling can be used to connect the response system with the driving signal. This behavior is different from a mere forecasting of a signal by a reconstructed model, since in the approach used here the response is driven only by a scalar signal rather than the full observation vector. It is also different from a mere reproduction of the reconstructed attractor's invariant density without coupling, since real synchronization is observed; the two trajectories are always on, or very close to (as far as the approximative nature of the response model allows), a synchronization manifold. The assumption of an underlying dynamical drive system amenable to reconstruction by embedding puts restrictions on the underlying system, the sampling procedure, and the observation function. Whereas, provided this assumption is not violated, the existence of a synchronization manifold is guaranteed, it is difficult to derive general results concerning the stability of the synchronization manifold. The reason is
FIGURE 5. (a) The driving signal (fat line) and the anticipating response for a time delay of τ = 5. (b) The would-be synchronization manifold for complete synchronization, and (c) the anticipatory synchronization manifold. In the latter case, the manifold is much better approached than in (b), showing that the dynamics is indeed anticipating the driving signal. This is confirmed quantitatively by the values of the correlation coefficients: R(x, y) = 0.7897 (b) vs. R(x, y) = 0.9973 (c). (Parameters used: k = 34, σ = 3.8, and D = 22. For model fitting 1000 data samples are used.)
that the model coefficients depend on the situation considered, and the response system is in general of rather high dimension. For real applications of this scheme, sufficient simulations or tests to ensure stability should be performed. There are dynamical systems for which such a reconstruction is infeasible in practice because of a high-dimensional state space. This can happen especially if the underlying system is not given by a low-dimensional ordinary differential equation but by a partial differential equation or a delay-differential equation with a much larger delay than the one used in the example. Then it can happen that the dimension of the embedding space needs to be very high in order to properly unfold the attractor. This is not a problem of this approach per se but a statistical problem; one would need very long data samples to
be able to accurately estimate the nonlinear model function. I would expect that approximative models derived from more general artificial neural networks will behave similarly to the radial basis function approximation used here. An open question is whether such a network may also synchronize with complex signals that do not necessarily result from a low-dimensional dynamical system. With this in mind, some speculative ideas may be allowed at the end as an outlook on possible future research: For an autonomous agent with limited, slow, or no computing abilities (one could think of a microrobot, an invertebrate, or a macromolecule, respectively) that nevertheless needs to respond quickly to environmental signals, a possible way of reaction is synchronization, since synchronization is a real-time process that does not require any explicit computing. By employing coupling schemes with a memory, it is also possible to perform a limited real-time prediction of the environmental signal. Thinking again of an autonomous agent, this would allow for an anticipation of environmental signals without any loss of time due to explicit computations, gaining time to react appropriately to the environment.
ACKNOWLEDGMENTS

I would like to thank S. Boccaletti and W. Just for useful hints with respect to improvement of synchronization stability.
REFERENCES

1. Fujisaka, H., and Yamada, T., Progr. Theor. Phys., 69, 32-47 (1983).
2. Afraimovich, V. S., Verichev, N., and Rabinovich, M., Radiophys. Quantum Electron., English Transl., 29, 795-803 (1986).
3. Pecora, L. M., and Carroll, T. L., Phys. Rev. Lett., 64, 821-824 (1990).
4. Cuomo, K. M., Oppenheim, A. V., and Strogatz, S. H., IEEE Trans. Circuits-II, 40, 626 (1993).
5. Kocarev, L., and Parlitz, U., Phys. Rev. Lett., 74, 5028-5031 (1995).
6. VanWiggeren, G. D., and Roy, R., Phys. Rev. Lett., 81, 3547 (1998).
7. Liu, Y. W., et al., Phys. Rev. E, 62, 7898-7904 (2000).
8. Sterling, D. G., Chaos, 11, 29-46 (2001).
9. Mosekilde, E., Maistrenko, Y., and Postnov, D., Chaotic Synchronization: Applications to Living Systems, World Scientific, Singapore, 2002.
10. Brown, R., and Rulkov, N. F., Phys. Rev. Lett., 78, 4189-4192 (1997).
11. Johnson, G. A., et al., Phys. Rev. Lett., 80, 3956-3959 (1998).
12. Tresser, C., Worfolk, P. A., and Bass, H., Chaos, 5, 693 (1998).
13. Junge, L., and Parlitz, U., "Dynamic coupling in partitioned state space", in Proc. Int. Symposium on Nonlinear Theory and its Applications (NOLTA), Dresden, 2000, p. 245.
14. Murakami, A., and Ohtsubo, J., Phys. Rev. E, 63, 066203 (2001).
15. Heagy, J. F., Carroll, T. L., and Pecora, L. M., Phys. Rev. E, 50, 1874 (1994).
16. Brown, R., and Rulkov, N. F., Chaos, 7, 395-413 (1997).
17. Belykh, V. N., Belykh, I. V., and Hasler, M., Phys. Rev. E, 62, 6332-6345 (2000).
18. Josic, K., Nonlinearity, 13, 1321-1336 (2000).
19. Solís-Perales, G., Femat, R., and Ruíz-Velázquez, E., Phys. Lett. A, 288, 183-190 (2001).
20. Femat, R., and Solís-Perales, G., Phys. Rev. E, 65, 036226 (2002).
21. Ott, E., Grebogi, C., and Yorke, J. A., Phys. Rev. Lett., 64, 1196 (1990).
22. Pyragas, K., Phys. Lett. A, 170, 421 (1992).
23. Lai, Y.-C., and Grebogi, C., Phys. Rev. E, 47, 2357-2360 (1993).
24. Jackson, E. A., and Grosu, I., Physica D, 85, 1 (1995).
25. Just, W., et al., Phys. Rev. Lett., 78, 203 (1997).
26. Claussen, J. C., et al., Phys. Rev. E, 58, 7256-7260 (1998).
27. Boccaletti, S., Grebogi, C., Lai, Y.-C., Mancini, H., and Maza, D., Phys. Rep., 329, 103 (2000).
28. Nijmeijer, H., Physica D, 154, 219-228 (2001).
29. Kocarev, L., Parlitz, U., and Brown, R., Phys. Rev. E, 61, 3716-3720 (2000).
30. Carroll, T. L., Phys. Rev. E, 64, 015201 (2001).
31. Yanchuk, S., et al., Int. J. Bifurcation and Chaos, 10, 2629-2648 (2000).
32. Boccaletti, S., et al., Phys. Rev. E, 61, 3712-3715 (2000).
33. Zaks, M. A., Park, E.-H., and Kurths, J., Phys. Rev. E, 65, 026212 (2002).
34. Huygens, C., Philos. Trans. Royal Soc., 47, 937 (1669).
35. Packard, N. H., Crutchfield, J. P., Farmer, J. D., and Shaw, R. S., Phys. Rev. Lett., 45, 712-716 (1980).
36. Takens, F., "Detecting strange attractors in turbulence", in Dynamical Systems and Turbulence, edited by D. Rand and L. Young, Springer, Berlin, 1981, vol. 898 of Lecture Notes in Mathematics, pp. 366-381.
37. Sauer, T., Yorke, J. A., and Casdagli, M., J. Stat. Phys., 65, 579-616 (1991).
38. Kantz, H., and Schreiber, T., Nonlinear Time Series Analysis, Cambridge University Press, Cambridge, 1997.
39. Honerkamp, J., Statistical Physics, Springer, Berlin, 2002.
40. Broomhead, D. S., and Lowe, D., Complex Systems, 2, 321 (1988).
41. Press, W. H., et al., Numerical Recipes in C, Cambridge University Press, Cambridge, 1992.
42. Rössler, O. E., Phys. Lett., 47A, 397 (1976).
43. Pikovsky, A., Rosenblum, M., and Kurths, J., Synchronization—A Universal Concept in Nonlinear Science, Springer, Berlin, 2001.
44. Voss, H. U., Phys. Rev. Lett., 87, 014102 (2001).
45. Lorenz, E. N., J. Atmos. Sci., 20, 130-141 (1963).
46. Schroer, C. G., Sauer, T., Ott, E., and Yorke, J. A., Phys. Rev. Lett., 80, 1410-1413 (1998).
47. Whitney, H., Annals of Math., 37, 645 (1936).
48. Mackey, M. C., and Glass, L., Science, 197, 287 (1977).
49. Farmer, J. D., Physica D, 4, 366-393 (1982).
50. Voss, H. U., Phys. Rev. E, 61, 5115-5119 (2000).
51. White, J. K., Matus, M., and Moloney, J. V., Phys. Rev. E, 65, 036229 (2002).
52. Voss, H. U., Phys. Lett. A, 279, 207-214 (2001).
53. Alsing, P. M., Gavrielides, A., and Kovanis, V., Phys. Rev. E, 50, 1968-1977 (1994).
54. Voss, H. U., Int. J. Bifurcation and Chaos, 12, 1619-1625 (2002).
55. Sivaprakasam, S., et al., Phys. Rev. Lett., 87, 154101 (2001).
56. Locquet, A., et al., Phys. Rev. E, 64, 045203(R) (2001).
57. Masoller, C., Phys. Rev. Lett., 86, 2782-2785 (2001).
58. Rogister, R., et al., Optics Lett., 26, 1486 (2001).
59. Zhu, L. Q., and Lai, Y. C., Phys. Rev. E, 64, 045205 (2001).
60. Murakami, A., Phys. Rev. E, 65, 056617 (2002).
61. Calvo, O., et al., Preprint, cond-mat/0203583 (2002).