Neurocomputing 143 (2014) 331–338
Parameter estimation of the exponentially damped sinusoids signal using a specific neural network

Xiuchun Xiao a,b, Jian-Huang Lai b,d,*, Chang-Dong Wang c,d
a College of Information, Guangdong Ocean University, Zhanjiang 524025, PR China
b School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, PR China
c School of Mobile Information Engineering, Sun Yat-sen University, Zhuhai 519082, PR China
d SYSU-CMU Shunde International Joint Research Institute (JRI), Shunde 528300, PR China

* Corresponding author at: School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, PR China. Tel.: +86 13168313819.
E-mail addresses: [email protected] (X. Xiao), [email protected] (J.-H. Lai), [email protected] (C.-D. Wang).
Article history: Received 25 December 2013; received in revised form 20 March 2014; accepted 30 May 2014. Communicated by L. Xu. Available online 24 June 2014.

Abstract
The problem of estimating the parameters of an exponentially damped sinusoids (EDSs) signal has received much attention in many fields. In this paper, a specific neural network termed EDSNN is proposed for parameter estimation of EDSs. Aiming at effectively evaluating the parameters of the EDSs signal, we construct a specific topology for EDSNN that strictly follows the mathematical formulation of the EDSs signal. The remaining task is then to train EDSNN using the data set sampled from the EDSs signal. For this purpose, a modified Levenberg–Marquardt algorithm is derived, which iteratively solves for the weights of EDSNN by optimizing a pre-defined objective function. Benefiting from the fault tolerance of neural networks, the proposed algorithm is robust to noise. Several computer simulations have been conducted to apply this method to EDSs signal models. The results substantiate that the proposed EDSNN can simultaneously obtain higher precision for the damping factors, frequencies, amplitudes and initial phases of all the EDS components than the state-of-the-art algorithm in both the noise-free and noisy cases.
© 2014 Elsevier B.V. All rights reserved.
Keywords: Exponentially damped sinusoids (EDSs) signal; Neural network; Levenberg–Marquardt algorithm; Parameter estimation
1. Introduction

Many practical signals, such as speech and audio signals, power system transients and radar/sonar signals, can be regarded as sums of exponentially damped sinusoids (EDSs). Estimating the parameters of an EDSs signal has therefore become an important task in many practical applications, such as speech analysis [1,2], power system transient detection [3–5], radar/sonar signal analysis [6], and nuclear magnetic resonance image processing [7].

In recent years, the problem of estimating the parameters of EDSs signals has received much attention, and a number of techniques have been proposed to tackle it. These techniques can be broadly classified as nonparametric [8–12] and parametric [1,13–15]. Nonparametric techniques are usually computationally efficient and less sensitive to algorithm-specific parameters. Unfortunately, they often have inherent limitations, such as limited frequency resolution or leakage effects under unsynchronized sampling [13,16].
In contrast, most parametric methods are model-based and can usually achieve relatively high accuracy [15]; however, they assume that the underlying model matches the real multi-component signal.

The most widely used nonparametric technique is the fast Fourier transform (FFT), derived from the discrete Fourier transform (DFT) [17]. However, the picket-fence effect and spectrum leakage occur when the FFT is applied to an unsynchronized sampling sequence [6,13]. Consequently, in order to improve the accuracy and reduce the sensitivity to de-synchronization, specific synchronization hardware has to be adopted [16–18]. Another common nonparametric method is wavelet analysis [19], an analysis tool with good resolution in both the time and frequency domains. Recently, many studies have developed wavelet analysis approaches to detect, localize and classify different types of power system disturbances, including harmonic and inter-harmonic distortions [19,20]. These approaches are based on decomposing the disturbed signal into components that represent smoothed and detailed versions of the original signal [3].

Several parametric methods have also been suggested for EDSs signal analysis [3,5]. Among them, subspace approaches such as estimation of signal parameters via rotational invariance techniques (ESPRIT) have become very popular for their lower complexity [5]. The principle is to first separate the data
into signal and noise subspaces via eigenvalue decomposition (EVD) of the sample covariance matrix or singular value decomposition (SVD) of the raw data matrix; the parameters of interest are then calculated from the corresponding eigenvectors and eigenvalues, or singular vectors and singular values [15]. The implementation of the algorithm in power systems was first proposed in [21]. Maximum likelihood (ML) and iterative quadratic ML (IQML) can also be used for estimating EDSs signals [22,15]. However, due to their extremely high computational requirements, ML-based methods are only feasible for 2-D harmonic retrieval. Prony analysis has been shown to be a very appropriate technique for modeling a linear sum of exponentially damped sinusoids that are uniformly sampled [23–26]; in fact, it can exactly fit a typical EDSs signal in the least-squared error (LSE) sense.

In recent years, neural network methods have become popular in the field of parameter estimation for their high accuracy and robustness to noise [16,27–29]. Nevertheless, although neural networks can be successfully applied to estimating the parameters of some harmonic or inter-harmonic distortions, they cannot be directly applied to EDSs signal analysis, because it is not easy to construct an appropriate topology and a good training algorithm for a general neural network, such as an Adaline network, given the complicated nonlinear formulation of the EDSs signal, which contains exponential, sine and cosine functions [16,29] (see Subsections 2.2 and 2.3 for more details).

To solve the aforementioned problems, we construct a specific topology for a feedforward neural network that strictly follows the mathematical formulation of the EDSs signal; we term this specific neural network EDSNN. Similar to common neural network models, the EDSNN has three layers, and the most important layer is the hidden layer. However, unlike a traditional neural network, the hidden layer of EDSNN is mainly composed of several different kinds of neurons and operating units: each neuron employs one of three distinct activation functions (exponential, sine or cosine), and the operating units perform addition or multiplication. By connecting all the distinct neurons with various kinds of weights, the mathematical formulas of EDSNN are made consistent with the EDSs signal. Thus, we can estimate the parameters of the EDSs signal by training the EDSNN so that its weights converge to stable values that approximate the given signal. To achieve this, we define an appropriate objective function, and a modified Levenberg–Marquardt algorithm is carefully derived to optimize it. Finally, the damping factors, frequencies, amplitudes and initial phases of all the EDS components can be estimated from the weights of the converged EDSNN.

The remainder of the paper is organized as follows. In Section 2, we introduce the problem formulation and the proposed neural network for parameter estimation of the exponentially damped sinusoids signal. The proposed model is divided into two main parts: how to construct the EDSNN with its specific topology, and how to derive the modified Levenberg–Marquardt algorithm to train it.
Section 3 reports experimental results for parameter estimation in several computer simulations, each covering both the noise-free and the noisy case. Finally, we conclude the paper in Section 4.

It is worth pointing out that the main results in this paper were first presented at the IScIDE 2013 conference [29]. We have extended and revised that paper with new and unpublished material, giving more details of the derivation of the proposed improved Levenberg–Marquardt algorithm (see Subsection 2.3) and better verifying that the proposed algorithm achieves very high precision and is robust to noise (see Subsections 3.2 and 3.3).
2. Proposed approach

In this section, we discuss the proposed approach in detail. First, we present the mathematical formulation of the practical problem of estimating the parameters of an exponentially damped sinusoids (EDSs) signal; a specific neural network topology, termed EDSNN, is then constructed according to this formulation. In order to solve the weights of the proposed EDSNN effectively, an adaptive learning algorithm based on an improved Levenberg–Marquardt method is derived. As a result, the parameters of each exponentially damped sinusoid (EDS) component can be directly calculated from the converged weights of the EDSNN.
2.1. Problem formulation

In general, an actual signal consisting of n distinct exponentially damped sinusoid components can be represented by their respective unknown damping factors, angular frequencies, amplitudes and initial phases. Assuming m samples drawn from the signal y(t) with a uniform sampling interval Δt are recorded as

y(t_j) := y(jΔt),   j = 0, 1, 2, ..., m−1,    (1)

then y(t_j) can be formulated as follows:

y(t_j) = \sum_{i=1}^{n} A_i e^{σ_i t_j} \sin(ω_i t_j + φ_i),   i = 1, 2, ..., n;  j = 0, 1, 2, ..., m−1,    (2)

where y(t_j) denotes the signal sampled at t_j, i denotes the component order, A_i denotes the amplitude of component i, σ_i denotes the damping factor of component i, ω_i denotes the angular frequency of component i, φ_i denotes the initial phase of component i, n denotes the number of EDS components, and m denotes the size of the sample set. A typical EDSs signal is given in Ref. [3]:

y(t) = 1.0 e^{−0.025t} \sin(2π·0.4t) + 0.5 e^{−0.037t} \sin(2π·0.5t).    (3)

As declared in [3], the signal formulated in Eq. (3) represents two superposed low-frequency transient oscillations in a power system. However, the parameters of each component of such a signal are commonly unknown, so estimating these parameters from a data set of measurements of the real signal is very important in practical applications.

In fact, if the damping factors σ_i, i = 1, 2, ..., n, are all equal to zero, the corresponding EDS components are periodic, and the problem of estimating the parameters of the EDSs signal degenerates to estimating the parameters of harmonic and inter-harmonic distortions, which can be solved by a neural network with a general topology [18,16]. Conversely, if the damping factors σ_i, i = 1, 2, ..., n, are not all equal to zero, the corresponding EDS components are aperiodic and decay to zero as time goes by [13], and the problem in this case cannot be easily solved by a neural network with a general topology [27,28].
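As a concrete illustration of the model in Eqs. (1)–(3), the following Python sketch generates uniformly sampled data from a sum of EDS components. The sampling interval and the number of samples are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

def eds_signal(t, components):
    """Sum of exponentially damped sinusoids, Eq. (2):
    y(t) = sum_i A_i * exp(sigma_i * t) * sin(omega_i * t + phi_i),
    with sigma_i the signed exponent (negative for a decaying component)."""
    y = np.zeros_like(t, dtype=float)
    for A, sigma, omega, phi in components:
        y += A * np.exp(sigma * t) * np.sin(omega * t + phi)
    return y

# Uniform sampling as in Eq. (1); dt and m are illustrative assumptions.
dt, m = 0.05, 200
t = np.arange(m) * dt                      # t_j = j * dt, j = 0, ..., m-1
# The two-component example signal of Eq. (3)
y = eds_signal(t, [(1.0, -0.025, 2 * np.pi * 0.4, 0.0),
                   (0.5, -0.037, 2 * np.pi * 0.5, 0.0)])
```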
In order to construct an appropriate topology for the proposed neural network, we apply some mathematical transformations to Eq. (2). Using the well-known trigonometric identity

\sin(α + β) = \sin(α)\cos(β) + \cos(α)\sin(β),    (4)
Eq. (2) can be rewritten as

y(t_j) = \sum_{i=1}^{n} ( A_i e^{σ_i t_j} \sin(ω_i t_j) \cos(φ_i) + A_i e^{σ_i t_j} \cos(ω_i t_j) \sin(φ_i) ),   i = 1, 2, ..., n;  j = 0, 1, 2, ..., m−1.    (5)

Furthermore, letting v_i = σ_i, v'_i = ω_i, w_i = A_i \cos(φ_i) and w'_i = A_i \sin(φ_i), Eq. (5) can be rewritten in the compact form

y(t_j) = \sum_{i=1}^{n} ( w_i e^{v_i t_j} \sin(v'_i t_j) + w'_i e^{v_i t_j} \cos(v'_i t_j) ),   i = 1, 2, ..., n;  j = 0, 1, 2, ..., m−1.    (6)

Obviously, if the parameters v_i, v'_i, w_i, w'_i, i = 1, 2, ..., n, can be solved by some approach, then the parameters σ_i, ω_i, A_i and φ_i, i = 1, 2, ..., n, of the i-th EDS component can be directly calculated as follows:

σ_i = v_i,    (7)

ω_i = v'_i,    (8)

A_i = \sqrt{ w_i^2 + (w'_i)^2 },    (9)

φ_i = \arctan( w'_i / w_i ).    (10)
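The recovery step of Eqs. (7)–(10) can be sketched in a few lines; in this sketch np.arctan2 replaces the plain arctan of Eq. (10) so that the phase lands in the correct quadrant, and the function name and round-trip example are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def recover_component(v, vp, w, wp):
    """Map (v_i, v'_i, w_i, w'_i) of Eq. (6) back to (sigma_i, omega_i, A_i, phi_i)
    via Eqs. (7)-(10)."""
    sigma = v                              # Eq. (7)
    omega = vp                             # Eq. (8)
    A = np.hypot(w, wp)                    # sqrt(w^2 + w'^2), Eq. (9)
    phi = np.arctan2(wp, w)                # Eq. (10), four-quadrant form
    return sigma, omega, A, phi

# Round trip for one component: build (v, v', w, w') from known
# (A, sigma, omega, phi) as in Eq. (6), then recover the originals.
A, sigma, omega, phi = 2.0, -2.5, 2 * np.pi * 5.0, 3.0   # sigma is the signed exponent
v, vp = sigma, omega
w, wp = A * np.cos(phi), A * np.sin(phi)
print(recover_component(v, vp, w, wp))    # -> (-2.5, 31.4159..., 2.0, 3.0)
```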
From the above analysis, we know that it suffices to obtain the parameters v_i, v'_i, w_i, w'_i, i = 1, 2, ..., n, in order to estimate the parameters σ_i, ω_i, A_i and φ_i, i = 1, 2, ..., n, of the EDS components. In the next subsection, we discuss how to achieve this by means of a neural network method.

2.2. Neural network model for estimating the parameters of EDSs signal (EDSNN)

As discussed in the previous subsection, the problem of estimating the parameters of an EDSs signal cannot be easily solved by a neural network with a general topology. In this subsection, in order to estimate the parameters of each EDS component in Eq. (2), a corresponding neural network model with a specific topology is proposed. We call this specific neural network EDSNN, as illustrated in Fig. 1.
Fig. 1 illustrates the specific topology of EDSNN. It has three layers, i.e., input, hidden and output layers. The hidden layer, which is mainly composed of 3n neurons and some operating units, is the most important part of EDSNN. The 3n hidden neurons can be separated into three classes, and each class employs one of three kinds of activation functions, i.e., the exponential, sine or cosine function. The operating units perform addition and multiplication operations. From the description above, we can see that the hidden layer of EDSNN is very different from that of a general Adaline neural network. The input and output layers are simple units and only need to perform very simple operations. The connections between the input and hidden layers, and between the hidden and output layers, carry various kinds of weights. For convenience of expression, we denote the weights between the input and hidden layers as the weight vector v̂ := [v̂_1, v̂'_1, v̂_2, v̂'_2, ..., v̂_n, v̂'_n] and the weights between the hidden and output layers as the weight vector ŵ := [ŵ_1, ŵ'_1, ŵ_2, ŵ'_2, ..., ŵ_n, ŵ'_n], respectively. The input of EDSNN is the sampling time t_j, and the output is the weighted sum of the products of pairs of activation functions, denoted ŷ(t_j). Therefore, the mathematical equation of the EDSNN in Fig. 1 can be expressed as

ŷ(t_j) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_j} \sin(v̂'_i t_j) + ŵ'_i e^{v̂_i t_j} \cos(v̂'_i t_j) ),   i = 1, 2, ..., n;  j = 0, 1, 2, ..., m−1.    (11)

Through observation and comparison, we can see that Eqs. (6) and (11) have consistent mathematical forms. Obviously, if ŷ(t_j) in Eq. (11) is regarded as an estimate of y(t_j) in Eq. (6), then the weights v̂_i, v̂'_i, ŵ_i, ŵ'_i, i = 1, 2, ..., n, in Eq. (11) can be regarded as estimates of the parameters v_i, v'_i, w_i, w'_i, i = 1, 2, ..., n, in Eq. (6), respectively. That is to say, the parameters of the i-th EDS component, i.e., σ_i, ω_i, A_i and φ_i, can be directly estimated from the weights v̂_i, v̂'_i, ŵ_i, ŵ'_i of the EDSNN according to Eqs. (7)–(10). As a result, we can translate the estimation problem into a corresponding optimization problem, i.e., how to construct an adaptive learning algorithm for training the EDSNN illustrated in Fig. 1.
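A vectorized sketch of the EDSNN forward pass in Eq. (11) follows; the array names and shapes are illustrative assumptions (v̂, v̂', ŵ, ŵ' are length-n NumPy arrays).

```python
import numpy as np

def edsnn_output(t, v_hat, vp_hat, w_hat, wp_hat):
    """EDSNN forward pass, Eq. (11): the n exponential, sine and cosine hidden
    neurons are evaluated at each sampling time t_j, multiplied pairwise, and
    combined with the output-layer weights w_hat, wp_hat."""
    t = np.asarray(t, dtype=float)[:, None]      # shape (m, 1)
    e = np.exp(v_hat * t)                        # exponential neurons, shape (m, n)
    s = np.sin(vp_hat * t)                       # sine neurons
    c = np.cos(vp_hat * t)                       # cosine neurons
    return (w_hat * e * s + wp_hat * e * c).sum(axis=1)   # y_hat(t_j), shape (m,)
```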
Fig. 1. Neural network model for estimating the parameters of exponentially damped sinusoids signal.
In the next subsection, we introduce an improved Levenberg–Marquardt algorithm and derive its iterative updating scheme.

2.3. Improved Levenberg–Marquardt algorithm

As discussed in the previous subsection, in order to estimate the parameters σ_i, ω_i, A_i, φ_i, i = 1, 2, ..., n, where i denotes the i-th EDS component, we should update the weights (i.e., the weight vectors v̂ and ŵ) of the EDSNN so as to force the output of the neural network to approximate the EDSs signal. For this purpose, we define a suitable objective function as follows:

E = \sum_{j=0}^{m−1} ( ŷ(t_j) − y(t_j) )^2,    (12)

where y(t_j) is the sampled value of the EDSs signal at t = t_j and ŷ(t_j) is the actual output of the EDSNN when its input is t_j.

Obviously, Eq. (12) is a non-linear optimization problem. In order to optimize the objective function E, we define a function F(ϖ̂) as follows:

F(ϖ̂) = ŷ(t_j) − y(t_j) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_j} \sin(v̂'_i t_j) + ŵ'_i e^{v̂_i t_j} \cos(v̂'_i t_j) ) − y(t_j),    (13)

where ϖ̂ := [v̂_1, v̂'_1, ŵ_1, ŵ'_1, v̂_2, v̂'_2, ŵ_2, ŵ'_2, ..., v̂_n, v̂'_n, ŵ_n, ŵ'_n] is a vector containing all the weights of the EDSNN illustrated in Fig. 1, and t_j is the input of the EDSNN. For all the inputs t_j we have the following system of equations:

F_0(ϖ̂) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_0} \sin(v̂'_i t_0) + ŵ'_i e^{v̂_i t_0} \cos(v̂'_i t_0) ) − y(t_0),
F_1(ϖ̂) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_1} \sin(v̂'_i t_1) + ŵ'_i e^{v̂_i t_1} \cos(v̂'_i t_1) ) − y(t_1),
F_2(ϖ̂) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_2} \sin(v̂'_i t_2) + ŵ'_i e^{v̂_i t_2} \cos(v̂'_i t_2) ) − y(t_2),
⋮
F_{m'}(ϖ̂) = \sum_{i=1}^{n} ( ŵ_i e^{v̂_i t_{m'}} \sin(v̂'_i t_{m'}) + ŵ'_i e^{v̂_i t_{m'}} \cos(v̂'_i t_{m'}) ) − y(t_{m'}),    (14)

where m' = m − 1. Then the Jacobi matrix J(ϖ̂) of Eq. (13) can be defined as follows:

J(ϖ̂) =
[ Ψ_0(v̂_1)     Ψ_0(v̂'_1)     Ψ_0(ŵ_1)     Ψ_0(ŵ'_1)     Ψ_0(v̂_2)     ⋯   Ψ_0(ŵ'_n)   ]
[ Ψ_1(v̂_1)     Ψ_1(v̂'_1)     Ψ_1(ŵ_1)     Ψ_1(ŵ'_1)     Ψ_1(v̂_2)     ⋯   Ψ_1(ŵ'_n)   ]
[ Ψ_2(v̂_1)     Ψ_2(v̂'_1)     Ψ_2(ŵ_1)     Ψ_2(ŵ'_1)     Ψ_2(v̂_2)     ⋯   Ψ_2(ŵ'_n)   ]
[     ⋯             ⋯             ⋯             ⋯             ⋯        ⋯       ⋯       ]
[ Ψ_{m'}(v̂_1)  Ψ_{m'}(v̂'_1)  Ψ_{m'}(ŵ_1)  Ψ_{m'}(ŵ'_1)  Ψ_{m'}(v̂_2)  ⋯   Ψ_{m'}(ŵ'_n) ]  ∈ R^{m×4n},    (15)

where m' = m − 1 and

Ψ_j(v̂_i)  = ∂F_j(ϖ̂)/∂v̂_i  = ŵ_i t_j e^{v̂_i t_j} \sin(v̂'_i t_j) + ŵ'_i t_j e^{v̂_i t_j} \cos(v̂'_i t_j),
Ψ_j(v̂'_i) = ∂F_j(ϖ̂)/∂v̂'_i = ŵ_i t_j e^{v̂_i t_j} \cos(v̂'_i t_j) − ŵ'_i t_j e^{v̂_i t_j} \sin(v̂'_i t_j),
Ψ_j(ŵ_i)  = ∂F_j(ϖ̂)/∂ŵ_i  = e^{v̂_i t_j} \sin(v̂'_i t_j),
Ψ_j(ŵ'_i) = ∂F_j(ϖ̂)/∂ŵ'_i = e^{v̂_i t_j} \cos(v̂'_i t_j),    (16)

with i = 1, 2, ..., n and j = 0, 1, 2, ..., m − 1.

In order to solve the weight vector ϖ̂, we can derive an improved Levenberg–Marquardt algorithm [30–33] as follows:

ϖ̂_{k+1} = ϖ̂_k + Δϖ̂_k = ϖ̂_k − ( J^T(ϖ̂_k) J(ϖ̂_k) + μ_k I )^{−1} J^T(ϖ̂_k) F(ϖ̂_k),    (17)

where J(ϖ̂_k) is the Jacobi matrix defined in Eq. (15) and μ_k ∈ R is the learning rate defined in Eq. (18):

μ_k := α_k ( θ ‖F(ϖ̂_k)‖ + (1 − θ) ‖J^T(ϖ̂_k) F(ϖ̂_k)‖ ),    (18)

where θ ∈ (0, 1) and α_k is adjusted according to the following equation:

α_{k+1} = 4α_k,            if r_k < p_1;
          α_k,             if r_k ∈ [p_1, p_2];
          max(α_k/4, τ),   otherwise,    (19)

where 0 < p_0 < p_1 < p_2 < 1 and τ > 0. In Eq. (19), r_k is defined as follows:

r_k := Ared_k / Pred_k,    (20)

where

Ared_k := ‖F(ϖ̂_k)‖_2^2 − ‖F(ϖ̂_k + Δϖ̂_k)‖_2^2,
Pred_k := ‖F(ϖ̂_k)‖_2^2 − ‖F(ϖ̂_k) + J(ϖ̂_k) Δϖ̂_k‖_2^2.    (21)

It is worth mentioning that the improved LM algorithm has better convergence than the gradient descent method and the traditional LM algorithm [30–33]. In fact, by training the EDSNN with the improved LM algorithm, we can achieve significantly high accuracy, which will be further substantiated in the section on simulation verification.
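A minimal sketch of the training procedure of Eqs. (12)–(21) follows. It is an illustration, not the authors' implementation: the weight vector is packed as in Eq. (13), the Jacobian follows Eqs. (15)–(16), the default parameter values follow the settings fixed in Section 3, and the trial step is accepted only when r_k exceeds p_0, which is an assumption about the role of p_0 (the paper only states 0 < p_0 < p_1 < p_2 < 1).

```python
import numpy as np

def unpack(varpi):
    """varpi packs all EDSNN weights as in Eq. (13):
    [v_1, v'_1, w_1, w'_1, ..., v_n, v'_n, w_n, w'_n]."""
    return varpi[0::4], varpi[1::4], varpi[2::4], varpi[3::4]

def residual(t, y, varpi):
    """F(varpi) of Eqs. (13)-(14): EDSNN output minus the sampled signal."""
    v, vp, w, wp = unpack(varpi)
    tc = t[:, None]
    e, s, c = np.exp(v * tc), np.sin(vp * tc), np.cos(vp * tc)
    return (w * e * s + wp * e * c).sum(axis=1) - y

def jacobian(t, varpi):
    """The m x 4n Jacobi matrix of Eqs. (15)-(16)."""
    v, vp, w, wp = unpack(varpi)
    tc = t[:, None]
    e, s, c = np.exp(v * tc), np.sin(vp * tc), np.cos(vp * tc)
    J = np.empty((t.size, varpi.size))
    J[:, 0::4] = tc * e * (w * s + wp * c)    # dF_j/dv_i
    J[:, 1::4] = tc * e * (w * c - wp * s)    # dF_j/dv'_i
    J[:, 2::4] = e * s                        # dF_j/dw_i
    J[:, 3::4] = e * c                        # dF_j/dw'_i
    return J

def train_edsnn(t, y, varpi, iters=500, eps=1e-10,
                theta=0.5, tau=1e-8, p0=1e-4):
    """Improved Levenberg-Marquardt iteration of Eqs. (17)-(21); a sketch only.
    A trial step is accepted when r_k > p0 (assumed stopping/acceptance rule)."""
    p1, p2 = p0 + 0.25, p0 + 0.75
    alpha = 0.1 + tau
    for _ in range(iters):
        F = residual(t, y, varpi)
        if F @ F < eps:                       # objective error reached
            break
        J = jacobian(t, varpi)
        mu = alpha * (theta * np.linalg.norm(F)
                      + (1.0 - theta) * np.linalg.norm(J.T @ F))              # Eq. (18)
        delta = -np.linalg.solve(J.T @ J + mu * np.eye(varpi.size), J.T @ F)  # Eq. (17)
        F_new = residual(t, y, varpi + delta)
        ared = F @ F - F_new @ F_new                                          # Eq. (21)
        pred = F @ F - np.sum((F + J @ delta) ** 2)
        r = ared / pred if pred > 0 else 0.0                                  # Eq. (20)
        if r > p0:
            varpi = varpi + delta             # accept the trial step
        # scaling factor of the learning rate, Eq. (19)
        alpha = 4 * alpha if r < p1 else (alpha if r <= p2 else max(alpha / 4, tau))
    return varpi
```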
3. Simulation verification

In this section, we perform numerical experiments on parameter estimation to demonstrate the effectiveness of the proposed EDSNN approach. Three study cases are considered. In the first, the proposed algorithm is applied to estimate the parameters of a typical EDS signal with only one component. In the other two, the parameter estimation problem is considered for signals with two and three EDS components, respectively. It is worth pointing out that, for all the simulation experiments in this section, the values of all the relevant parameters of the EDSNN are fixed as follows:

p_0 = 10^{−4},  p_1 = p_0 + 0.25,  p_2 = p_1 + 0.5,  τ = 10^{−8},  θ = 0.5,  α_1 = 0.1 + τ.

3.1. Case study I

The proposed algorithm is first applied to estimate the parameters of a simple signal containing only one EDS component, taken from the literature [5]. The samples used for parameter estimation are generated by the following equation:

y(t) = 2.0 e^{−2.5t} \sin(2π·5.0t + 3),    (22)
where A_1 = 2.0, σ_1 = 2.5, ω_1 = 2π·5.0, and φ_1 = 3.0. The noise-free and noisy versions of this artificial swing curve are shown in Fig. 2.

Fig. 2. Noise-free and noisy versions of a simple signal given in the literature [5], which contains only one EDS component.

3.1.1. Noise free case

To obtain a high precision for the parameter estimation of the signal in Eq. (22), we set the objective error ε to a very low value, e.g., ε = 10^{−10}, and randomly generate the initial weights of the EDSNN in their corresponding value ranges. In order to force the EDSNN to converge, we update its weights by using the improved Levenberg–Marquardt algorithm; the parameters of the signal denoted in Eq. (22) are then calculated by using Eqs. (7)–(10). Table 1 lists the actual parameters of the original signal, the estimated results and the relative errors obtained by the proposed EDSNN.
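A usage sketch for this noise-free case is given below; it reuses the helper functions sketched in Section 2 (train_edsnn, recover_component). The sampling interval, sample count, initialization ranges and iteration budget are illustrative assumptions, since the paper only states that the initial weights are drawn randomly from their corresponding value ranges.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, m, n = 0.005, 200, 1
t = np.arange(m) * dt
y = 2.0 * np.exp(-2.5 * t) * np.sin(2 * np.pi * 5.0 * t + 3.0)   # Eq. (22)

varpi0 = np.empty(4 * n)
varpi0[0::4] = rng.uniform(-5.0, 0.0, n)        # v_i   (damping exponents)
varpi0[1::4] = rng.uniform(0.0, 60.0, n)        # v'_i  (angular frequencies)
varpi0[2::4] = rng.uniform(-3.0, 3.0, n)        # w_i
varpi0[3::4] = rng.uniform(-3.0, 3.0, n)        # w'_i

varpi = train_edsnn(t, y, varpi0)               # improved LM training
for i in range(n):
    v, vp, w, wp = varpi[4 * i:4 * i + 4]
    print(recover_component(v, vp, w, wp))      # (sigma, omega, A, phi), Eqs. (7)-(10)
```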
Table 1
The actual and estimated parameters and the relative error for the signal denoted in Eq. (22) with no noise.

Parameters        A1        σ1        f1        φ1
Actual            2.0000    2.5000    5.0000    3.0000
Estimated         2.0000    2.5000    5.0000    3.0000
Relative error    0.0000    0.0000    0.0000    0.0000

As shown in Table 1, the proposed EDSNN is capable of recovering the actual parameters of the input signal containing only one EDS component with almost no error.
3.1.2. With noise case

In practical applications, noise disturbance is commonly inevitable. In this part, we consider the case in which white Gaussian noise (WGN) at different signal-to-noise ratio (SNR) levels is added to the simulation signal denoted by Eq. (22). In order to validate the robustness of the proposed EDSNN, we test it and compare it with the popular ESPRIT method [5] under a variety of SNR levels. Table 2 lists the actual parameters of the original signal and the results estimated by the proposed EDSNN and by the ESPRIT method [5].

Table 2
The actual and estimated parameters for the signal denoted in Eq. (22) under a variety of SNR levels.

                        SNR (dB)
Parameter   Method      30        20        10        5
A1          Actual      2.0000    2.0000    2.0000    2.0000
            ESPRIT      1.9993    2.0331    2.1097    2.0725
            EDSNN       2.0027    2.0013    1.9913    2.0036
σ1          Actual      2.5000    2.5000    2.5000    2.5000
            ESPRIT      2.4922    2.2769    2.6056    2.8115
            EDSNN       2.5055    2.4959    2.4903    2.5178
f1          Actual      5.0000    5.0000    5.0000    5.0000
            ESPRIT      4.9986    4.9840    5.0045    5.1536
            EDSNN       4.9979    5.0001    4.9998    5.0007
φ1          Actual      3.0000    3.0000    3.0000    3.0000
            ESPRIT      3.0039    3.0274    2.9662    2.9393
            EDSNN       3.0022    2.9987    2.9989    2.9976

As shown in Table 2, all of the estimation results of the proposed algorithm retain very high precision even when the signal-to-noise ratio decays to 5 dB. Beyond that, all of the estimation results of the proposed algorithm are better than those of the ESPRIT method [5] except for SNR = 30 dB. These results show that our method is more robust to noise than ESPRIT [5].
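For the noisy cases, the noisy signal versions at a prescribed SNR can be generated as in the sketch below; this is a standard construction assumed for illustration, since the paper does not spell out its noise-generation procedure.

```python
import numpy as np

def add_wgn(y, snr_db, seed=0):
    """Return y corrupted by white Gaussian noise at the prescribed SNR in dB."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(y ** 2) / (10.0 ** (snr_db / 10.0))
    return y + rng.normal(0.0, np.sqrt(noise_power), size=y.shape)

# e.g. the 5 dB version of the case-study-I signal:
# y_noisy = add_wgn(y, 5)
```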
3.2. Case study II

In this subsection, the proposed algorithm is applied to estimate the parameters of a relatively complex signal containing two EDS components. The samples used for the parameter estimation are generated by the following equation:

y(t) = 10.0 e^{−0.25t} \sin(2π·0.4t) + 5.0 e^{−0.48t} \sin(2π·0.89t),    (23)

where A_1 = 10.0, σ_1 = 0.25, ω_1 = 2π·0.4, A_2 = 5.0, σ_2 = 0.48, ω_2 = 2π·0.89, and the initial phases φ_1 and φ_2 are both assumed to be zero in this case. This artificial swing curve is shown in Fig. 3.
Fig. 3. Noise-free and noisy versions of a signal which contains two EDS components.

3.2.1. Noise free case

Similar to case study I, we set the objective error ε = 10^{−10} and randomly generate the initial weights of the EDSNN in their corresponding value ranges. Then we train the EDSNN by using the improved Levenberg–Marquardt algorithm and estimate the parameters by using Eqs. (7)–(10). Table 3 lists the actual parameters of the original signal, the estimated results and the relative errors obtained by the proposed EDSNN. As shown in Table 3, the proposed EDSNN is capable of recovering the actual parameters of the input signal containing two EDS components with almost no error.
Table 3
The actual and estimated parameters and the relative error for the signal denoted in Eq. (23) with no noise.

                  Component 1                    Component 2
Parameters        A1        σ1        f1         A2        σ2        f2
Actual            10.0000   0.2500    0.4000     5.0000    0.4800    0.8900
Estimated         10.0000   0.2500    0.4000     5.0000    0.4800    0.8900
Relative error    0.0000    0.0000    0.0000     0.0000    0.0000    0.0000
3.2.2. With noise case

In order to validate the robustness of the proposed EDSNN, we test it and compare it with the popular ESPRIT method [5] under a variety of SNR levels, assuming that simulation signal II is polluted with different levels of white Gaussian noise. Table 4 lists the actual parameters of the original signal and the results estimated by the proposed EDSNN. As shown in Table 4, all the results of the proposed EDSNN retain very high precision for the relatively complex signal containing two EDS components, even when the signal-to-noise ratio decays to 5 dB. On the other hand, the estimates of the ESPRIT method are very unstable (they differ greatly from one experiment to another) and are also far away from the actual values, so its results are not listed in Table 4. As a result, when a signal contains many EDS components at a low SNR, the EDSNN can be considered instead of the ESPRIT method [5] for estimating its parameters.

Table 4
The actual and estimated parameters for the signal denoted in Eq. (23) under a variety of SNR levels.

                                         SNR (dB)
Components    Parameters                 30        20        10        5
Component 1   A1      Actual             10.0
                      Estimated          9.9983    10.0185   10.0756   9.9848
              σ1      Actual             0.25
                      Estimated          0.2494    0.2517    0.2507    0.2488
              f1      Actual             0.4
                      Estimated          0.4000    0.3999    0.3999    0.3995
              φ1      Actual             0.0000
                      Estimated          0.0014    0.0001    0.0004    0.0046
Component 2   A2      Actual             5.0
                      Estimated          4.9368    5.0043    5.0429    4.8743
              σ2      Actual             0.48
                      Estimated          0.4723    0.4789    0.4681    0.4747
              f2      Actual             0.89
                      Estimated          0.8899    0.8889    0.8902    0.8904
              φ2      Actual             0.0000
                      Estimated          0.0090
3.3. Case study III

In this subsection, the proposed algorithm is applied to estimate all the parameters of a relatively complex signal containing three EDS components. The samples used for the parameter estimation are generated by the following equation:

y(t) = 5.0 e^{−0.025t} \sin(2π·0.4t) + 3.0 e^{−0.037t} \sin(2π·0.56t) + 1.0 e^{−0.01t} \sin(2π·0.3t),    (24)

where A_1 = 5.0, σ_1 = 0.025, ω_1 = 2π·0.4, A_2 = 3.0, σ_2 = 0.037, ω_2 = 2π·0.56, A_3 = 1.0, σ_3 = 0.01, ω_3 = 2π·0.3, and the initial phases φ_1, φ_2 and φ_3 are all assumed to be zero in this case. This artificial swing curve is shown in Fig. 4.
3.3.1. Noise free case

Similar to case study I, we set the objective error ε = 10^{−10} and randomly generate the initial weights of the EDSNN in their corresponding value ranges. Then we train the EDSNN by using the improved Levenberg–Marquardt algorithm and estimate the parameters by using Eqs. (7)–(10). Table 5 lists the actual parameters of the original signal, the estimated results and the relative errors obtained by the proposed EDSNN. As shown in Table 5, the proposed EDSNN is capable of recovering the actual parameters of the input signal containing three EDS components with almost no error.
Table 6
The actual and estimated parameters for the signal denoted in Eq. (24) under a variety of SNR levels.

                                         SNR (dB)
Components    Parameters                 30        20        10        5
Component 1   A1      Actual             5.0
                      Estimated          5.0125    4.9880    4.9755    4.9790
              σ1      Actual             0.025
                      Estimated          0.0252    0.0246    0.0242    0.0247
              f1      Actual             0.4
                      Estimated          0.3999    0.4000    0.3995    0.3990
              φ1      Actual             0.0000
                      Estimated          0.0015    0.0002    0.0005    0.0473
Component 2   A2      Actual             3.0
                      Estimated          2.9966    3.0162    3.0414    3.0207
              σ2      Actual             0.037
                      Estimated          0.0369    0.0384    0.0389    0.0370
              f2      Actual             0.56
                      Estimated          0.5599    0.5601    0.5703    0.5814
              φ2      Actual             0.0000
                      Estimated          0.0005    0.0067    0.0004    0.0249
Component 3   A3      Actual             1.0
                      Estimated          1.0005    0.9866    0.9900    0.9973
              σ3      Actual             0.01
                      Estimated          0.0101    0.0093    0.0110    0.0101
              f3      Actual             0.3
                      Estimated          0.3005    0.2999    0.2962    0.2874
              φ3      Actual             0.0000
                      Estimated          0.0213    0.0067    0.0259    0.0173

Fig. 4. Noise-free and noisy versions of a signal which contains three EDS components.
Table 5
The actual and estimated parameters and the relative error for the signal denoted in Eq. (24) with no noise.

                  Component 1                   Component 2                   Component 3
Parameters        A1       σ1       f1          A2       σ2       f2          A3       σ3       f3
Actual            5.0000   0.0250   0.4000      3.0000   0.0370   0.5600      1.0000   0.0100   0.3000
Estimated         5.0000   0.0250   0.4000      3.0000   0.0370   0.5600      1.0000   0.0100   0.3000
Relative error    0.0000   0.0000   0.0000      0.0000   0.0000   0.0000      0.0000   0.0000   0.0000
3.3.2. With noise case

In order to validate the robustness of the proposed EDSNN, we test it and compare it with the popular ESPRIT method [5] under a variety of SNR levels, assuming that simulation signal III is polluted with different levels of white Gaussian noise. Table 6 lists the actual parameters of the original signal and the results estimated by the proposed EDSNN. As shown in Table 6, all the results of the proposed EDSNN retain very high precision for the relatively complex signal containing three EDS components, even when the signal-to-noise ratio decays to 5 dB. On the other hand, the estimates of the ESPRIT method are very unstable (they differ greatly from one experiment to another) and are also far away from the actual values, so its results are not listed in Table 6. As a result, when a signal contains many EDS components at a low SNR, the EDSNN can be considered instead of the ESPRIT method [5] for estimating its parameters.
4. Conclusions

Generally speaking, a neural network method cannot be directly applied to parameter estimation of an EDSs signal. To solve this problem, we construct a specific neural network topology that strictly follows the mathematical formulation of the EDSs signal and term it EDSNN. We then derive an adaptive improved Levenberg–Marquardt algorithm to train the EDSNN. Finally, the parameters of the EDSs signal can be estimated from the converged weights of the EDSNN. Benefiting from the strict consistency between the mathematical formulas of the proposed EDSNN and the EDSs signal model, and from the good performance of the improved Levenberg–Marquardt algorithm, the proposed algorithm achieves very high precision and is robust to noise. Computer simulation results substantiate that the proposed EDSNN can obtain higher precision for the damping factors, frequencies, amplitudes and initial phases of all the EDS components than the state-of-the-art algorithm in both the noise-free and noisy cases. As a result, when a signal contains many EDS components at a low SNR, or when high precision is needed, the EDSNN can be considered instead of the ESPRIT method for estimating its parameters.
Acknowledgments

This work was supported by NSFC (61173084), the National Science & Technology Pillar Program (No. 2012BAK16B06), and the Research Training Program of SMIE of Sun Yat-sen University. The authors would like to thank all the reviewers (including the reviewers of IScIDE 2013) for their comments, which were very helpful in extending and revising the paper.

References

[1] J. Jensen, R. Heusdens, S. Jensen, A perceptual subspace approach for modeling of speech and audio signals with damped sinusoids, IEEE Trans. Speech Audio Process. 12 (2004) 121–132.
[2] R. Boyer, J. Rosier, Iterative method for harmonic and exponentially damped sinusoidal models, in: Proceedings of the 5th International Conference on Digital Audio Effects (DAFx02), 2002.
[3] K. El-Naggar, On-line measurement of low-frequency oscillations in power systems, Measurement 42 (2009) 716–721.
[4] H. Zeineldin, T. Abdel-Galil, E. El-Saadany, M. Salama, Islanding detection of grid connected distributed generators using TLS-ESPRIT, Electr. Power Syst. Res. 77 (2007) 155–162.
[5] W.K. Najy, H. Zeineldin, A.H. Alaboudy, W.L. Woon, A Bayesian passive islanding detection method for inverter-based distributed generation using ESPRIT, IEEE Trans. Power Deliv. 26 (2011) 2687–2696.
[6] S. Rouquette, M. Najim, Estimation of frequencies and damping factors by two-dimensional ESPRIT type methods, IEEE Trans. Signal Process. 49 (2001) 237–245.
[7] J.-M. Papy, L.D. Lathauwer, V.S. Huffel, A shift invariance-based order-selection technique for exponential data modelling, IEEE Signal Process. Lett. 14 (2007) 473–476.
[8] K.-P. Poon, K.-C. Lee, Analysis of transient stability swings in large interconnected power systems by Fourier transformation, IEEE Trans. Power Syst. 3 (1988) 1573–1581.
[9] K. Lee, K. Poon, Analysis of power system dynamic oscillations with heat phenomenon by Fourier transformation, IEEE Trans. Power Syst. 5 (1990) 148–153.
[10] M. Bertocco, C. Offelli, D. Petri, Analysis of damped sinusoidal signals via a frequency-domain interpolation algorithm, IEEE Trans. Instrum. Meas. 43 (1994) 245–250.
[11] R. Pintelon, J. Schoukens, Frequency domain identification of linear time invariant systems under non-standard conditions, IEEE Trans. Instrum. Meas. 46 (1997) 65–71.
[12] M.D. Sacchi, T.J. Ulrych, C.J. Walker, Interpolation and extrapolation using a high-resolution discrete Fourier transform, IEEE Trans. Signal Process. 46 (1998) 31–38.
[13] D. Agrež, A frequency domain procedure for estimation of the exponentially damped sinusoids, in: I2MTC'09, 2009.
[14] H.-T. Li, P.M. Djurić, An iterative MMSE procedure for parameter estimation of damped sinusoidal signals, Signal Process. 51 (1996) 105–120.
[15] W. Sun, H. So, Accurate and computationally efficient tensor-based subspace approach for multi-dimensional harmonic retrieval, IEEE Trans. Signal Process. 60 (2012) 5077–5088.
[16] X. Xiao, X. Jiang, S. Xie, X. Lu, Y. Zhang, A neural network model for power system inter-harmonics estimation, in: BIC-TA 2010, 2010.
[17] H. Qian, R. Zhao, T. Chen, Interharmonics analysis based on interpolating windowed FFT algorithm, IEEE Trans. Power Deliv. 22 (2007) 1064–1069.
[18] D. Gallo, R. Langella, A. Testa, Desynchronized processing technique for harmonic and interharmonic analysis, IEEE Trans. Power Deliv. 19 (2004) 993–1001.
[19] W.G. Morsi, M. El-Hawary, Suitable mother wavelet for harmonics and interharmonics measurements using wavelet packet transform, in: 2007 Canadian Conference on Electrical and Computer Engineering.
[20] J. Barros, R.I. Diego, Analysis of harmonics in power systems using the wavelet-packet transform, IEEE Trans. Instrum. Meas. 57 (2008) 63–69.
[21] M.H. Bollen, E. Styvaktakis, I.Y.-H. Gu, Categorization and analysis of power system transients, IEEE Trans. Power Deliv. 20 (2005) 2298–2306.
[22] M. Kristensson, M. Jansson, B. Ottersten, Modified IQML and weighted subspace fitting without eigendecomposition, Signal Process. 79 (1999) 29–44.
[23] D. Kundu, A modified Prony algorithm for sum of damped or undamped exponential signals, Sankhyā: Indian J. Stat., Ser. A 56 (1994) 524–544.
[24] D. Kundu, A. Mitra, Fitting a sum of exponentials to equispaced data, Sankhyā: Indian J. Stat., Ser. B 60 (1998) 448–463.
[25] N. Kannan, D. Kundu, Estimating parameters in the damped exponential model, Signal Process. 81 (2001) 2343–2351.
[26] E. Feilat, Prony analysis technique for estimation of the mean curve of lightning impulses, IEEE Trans. Power Deliv. 21 (2006) 2088–2090.
[27] N. Rathina Prabha, N. Marimuthu, C. Babulal, Adaptive neuro-fuzzy inference system based total demand distortion factor for power quality evaluation, Neurocomputing 73 (2009) 315–323.
[28] N. Rathina Prabha, N. Marimuthu, C. Babulal, Adaptive neuro-fuzzy inference system based representative quality power factor for power quality assessment, Neurocomputing 73 (2010) 2737–2743.
[29] X. Xiao, J. Lai, C. Wang, A neural network for parameter estimation of the exponentially damped sinusoids, in: IScIDE 2013.
[30] C. Kanzow, N. Yamashita, M. Fukushima, Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints, J. Comput. Appl. Math. 172 (2004) 375–397.
[31] C. Ma, J. Tang, X. Chen, A globally convergent Levenberg–Marquardt method for solving nonlinear complementarity problem, Appl. Math. Comput. 192 (2007) 370–381.
[32] P. Chen, Why not use the Levenberg–Marquardt method for fundamental matrix estimation? IET Comput. Vis. 4 (2010) 286–294.
[33] L. Yang, Y. Chen, A new globally convergent Levenberg–Marquardt method for solving nonlinear system of equations, Math. Numer. Sin. 30 (2008) 388–396.
Xiuchun Xiao received his Ph.D. degree in communication and information systems in 2013 from Sun Yat-sen University, Guangzhou, China. He is currently an Associate Professor with Guangdong Ocean University. His current research interests include artificial neural networks and computer vision.
Jian-Huang Lai received his Ph.D. degree in mathematics in 1999 from Sun Yat-sen University, Guangzhou, China. He is currently a Professor with Sun Yat-sen University. He has published over 150 scientific papers in international journals and conferences. His research includes image processing, pattern recognition and face recognition.
Chang-Dong Wang received his Ph.D. degree in computer science in 2013 from Sun Yat-sen University, China. He is currently an Assistant Professor with Sun Yat-sen University. His ICDM 2010 paper won the Honorable Mention for Best Research Paper Awards. His current research interests include machine learning and computer vision.