

IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 50, NO. 6, JUNE 2002

Increasing the Speed of Convergence of the Constant Modulus Algorithm for Blind Channel Equalization

Sudarshan Rao Nelatury, Member, IEEE, and Sathyanarayan S. Rao

Paper approved by C.-L. Wang, the Editor for Modulation Detection and Equalization of the IEEE Communications Society. Manuscript received August 25, 2000; revised June 18, 2001. This paper was presented in part at the 34th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, October 29–November 1, 2000. The authors are with the Department of Electrical and Computer Engineering, Villanova University, Villanova, PA 19085 USA (e-mail: [email protected]; [email protected]). Publisher Item Identifier S 0090-6778(02)05553-8.

Abstract—The constant modulus algorithm (CMA) is an excellent technique for blind channel equalization. Recently, a signed-error version of CMA (SE-CMA) and a dithered signed-error version (DSE-CMA) have been proposed, which afford overall computational efficiency. We propose three different error functions for faster convergence. Fast convergence is essential for communication systems that cannot afford a high startup delay, or for systems in which the channel impulse response fluctuates rapidly. One of the proposed algorithms relies on a variable step size, which increases the rate of convergence.

Index Terms—Blind equalization, constant modulus algorithm, error function, speed of convergence.

Fig. 1. Noiseless T/2 spaced multirate system model.


I. INTRODUCTION

Blind equalizers are used to correct the distortions caused by transmission channels when training sequences are unavailable. They usually have a tapped-delay-line structure, and the equalizer taps are updated using adaptive algorithms. A well-known candidate among these is the constant modulus algorithm (CMA), originally proposed by Godard [1]. It is a preferred choice because of its robustness and ease of implementation [2], [3]. The signed-error version of CMA (SE-CMA) has been shown to improve the computational efficiency of the traditional CMA, resulting in low-cost designs [4]. The traditional CMA, like the celebrated LMS algorithm [5], involves a constant scale factor that controls the speed of adaptation. If speed is an important consideration, one might opt for a variable step-size CMA (V-CMA). In this paper we propose three different error functions and compare their speed of convergence through simulation results. The improvement in the speed of V-CMA is remarkable on a logarithmic scale. We find that, using a fractionally spaced equalizer (FSE) model of appropriate order, one can achieve perfect equalization in the noiseless case. An example of an undermodeled channel is also simulated with the proposed algorithms.

II. CMA AND SE-CMA

Let us consider the digital communication system of Fig. 1, wherein we show a multirate model consisting of an information source emitting symbols from a finite, zero-mean alphabet into an FIR channel, followed by an FSE whose sampling interval equals half the symbol period. The source symbol at baud spacing is $s(n)$, $\mathbf{h}$ is the vector of fractionally spaced channel impulse response coefficients, and $\mathbf{f}$ is the fractionally spaced equalizer coefficient vector. If $\mathcal{C}$ denotes the time-decimated channel convolution matrix, the system output may be expressed as

$$y(n) = \mathbf{f}^{H}\mathcal{C}\,\mathbf{s}(n) \tag{1}$$

where $\mathbf{s}(n)$ is the vector of baud-spaced source symbols of the appropriate length. The equalizer coefficients are updated using an equation of the form

$$\mathbf{f}(n+1) = \mathbf{f}(n) + \mu\,\psi(y(n))\,\mathbf{r}^{*}(n) \tag{2}$$

where $\mathbf{r}(n)$ is the receiver input vector of the same length as $\mathbf{f}$, $\mu$ is a small step size, and the symbol $*$ denotes complex conjugation. Using the stochastic gradient descent approach to optimize the CMA cost function, the error function is expressed as

$$\psi_{1}(y) = y\left(\gamma - |y|^{2}\right). \tag{3}$$

With the above expression for the error function, the usual CMA becomes

$$\mathbf{f}(n+1) = \mathbf{f}(n) + \mu\,y(n)\left(\gamma - |y(n)|^{2}\right)\mathbf{r}^{*}(n). \tag{4}$$

The signed-error version of CMA (SE-CMA) [4] takes

$$\psi_{2}(y) = \operatorname{sgn}\!\left(y\left(\gamma - |y|^{2}\right)\right) \tag{5}$$

and updates the equalizer coefficients as

$$\mathbf{f}(n+1) = \mathbf{f}(n) + \mu\operatorname{sgn}\!\left(y(n)\left(\gamma - |y(n)|^{2}\right)\right)\mathbf{r}^{*}(n). \tag{6}$$

Let $\mathcal{S}$ indicate the set of possible combinations of source alphabets and $\mathcal{A}$ the expected output symbol set at which the SE error function is discontinuous. It was noted that the SE-CMA error function remains constant within any convex region of the equalizer space that is not subdivided by the sign boundaries, which are the hyperplanes

$$\left\{\mathbf{f}:\ \mathbf{f}^{H}\mathcal{C}\,\mathbf{s} = a\right\} \quad \text{for } \mathbf{s}\in\mathcal{S} \text{ and } a\in\mathcal{A}. \tag{7}$$
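As a minimal sketch of how the tap update (2) with the CMA error (3) or the SE-CMA error (5) might be realized, the routine below performs one stochastic-gradient step. The function name, the use of NumPy, and the complex-signum convention (the sign taken separately on the real and imaginary parts) are illustrative assumptions rather than details given in the paper.

```python
import numpy as np

def cma_step(f, r, mu, gamma=1.0, signed=False):
    """One equalizer tap update of the form (2).

    f      : current equalizer coefficient vector (complex ndarray)
    r      : receiver input (regressor) vector, same length as f
    mu     : step size
    gamma  : dispersion constant of the source
    signed : if True, use the SE-CMA error of (5) instead of the CMA error of (3)
    """
    y = np.vdot(f, r)                                # equalizer output y(n) = f^H r(n)
    e = y * (gamma - np.abs(y) ** 2)                 # CMA error function psi_1(y), eq. (3)
    if signed:
        e = np.sign(e.real) + 1j * np.sign(e.imag)   # signed-error variant, eq. (5)
    return f + mu * e * np.conj(r)                   # stochastic gradient-descent step of eq. (2)
```

For a real BPSK source the imaginary part is zero and the signed branch reduces to the ordinary sign of the real error.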


Fig. 2. Variation of the step size $\mu$ with the error, soft-limited between the two limits $\mu_{\min}$ and $\mu_{\max}$ (log–log scale).

Fig. 3. The first four error functions $\psi_i(y)$, $i = 1, 2, 3, 4$. Legend: — $\psi_1(y)$, - - $\psi_2(y)$, -.-. $\psi_3(y)$, · · · $\psi_4(y)$.

III. PROPOSED ERROR FUNCTIONS FOR REAL BPSK

In this section we construct three error functions $\psi_{3}(y)$, $\psi_{4}(y)$, and $\psi_{5}(y)$. Assume the source to be real BPSK with the equiprobable alphabet $\{-1, +1\}$ and unity dispersion $\gamma = 1$. Let the product of the squared deviations of the output be $\varepsilon(y) = (y-1)^{2}(y+1)^{2}$. For a real BPSK source, $\psi_{3}(y)$ and $\psi_{4}(y)$ are built from $\varepsilon(y)$ and are plotted in Fig. 3. Using a scale factor of $\mu/2$, the update equation can be written as

$$\mathbf{f}(n+1) = \mathbf{f}(n) + \frac{\mu}{2}\,\psi_{i}(y(n))\,\mathbf{r}(n), \qquad i = 3, 4. \tag{8}$$

Fig. 4. CMA contours and signed-error boundaries for BPSK over the noiseless channel $\mathbf{c} = [-0.0901,\ 0.6853,\ 0.7170,\ -0.0901]^{T}$. The least mean-square solutions are shown by crosses.

The factor $1/2$ is used merely for convenience. The above algorithm may experience gradient noise amplification whenever $\|\mathbf{r}(n)\|$ is large. Hence we normalize the correction term by dividing it by $a + \|\mathbf{r}(n)\|^{2}$, with the choice $a > 0$. This idea comes from the well-known normalized LMS algorithm, which can be viewed as a minimum-norm solution [5]. The positive constant $a$ removes the numerical difficulties that arise when the denominator is close to zero. Thus (8) becomes

$$\mathbf{f}(n+1) = \mathbf{f}(n) + \frac{\mu}{2}\,\frac{\psi_{i}(y(n))\,\mathbf{r}(n)}{a + \|\mathbf{r}(n)\|^{2}}, \qquad i = 3, 4. \tag{9}$$

With the expressions for $\psi_{3}(y)$ and $\psi_{4}(y)$ substituted, respectively, we arrive at the specific updates labeled (10) and (11).
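A minimal sketch of the normalized update of (9), assuming a real-valued regressor (as in the BPSK case) and a generic error-function handle psi; the default constants are placeholders, not values from the paper.

```python
import numpy as np

def normalized_step(f, r, psi, mu=0.1, a=1e-3):
    """Normalized tap update in the spirit of (9)."""
    y = float(f @ r)                       # real-valued equalizer output
    correction = 0.5 * mu * psi(y) * r     # raw step carrying the 1/2 factor of (8)
    return f + correction / (a + r @ r)    # division by (a + ||r||^2) tames large regressors
```

For example, psi could be the BPSK CMA error y(1 - y^2) with unity dispersion.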

Further, as the step size controls the rate of convergence, with a large value giving fast convergence and a smaller value providing better steady-state performance, we may introduce a variable step size $\mu(n)$, as is done for the traditional LMS algorithm [6]. We suggest the error function $\psi_{5}(y)$ as given in (12), used together with the variable step size $\mu(n)$ defined in (13).
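Since the explicit forms of (12) and (13) are not reproduced here, the following sketch merely illustrates a variable step-size recursion in the spirit of the Kwong–Johnston LMS rule cited as [6]; the recursion itself, the parameter names (alpha, beta), and the limiting values are assumptions, not the paper's (13).

```python
import numpy as np

def next_step_size(mu, err, alpha=0.97, beta=1e-3, mu_min=1e-4, mu_max=1e-1):
    """Illustrative variable step-size update, limited between mu_min and mu_max."""
    mu_new = alpha * mu + beta * err ** 2            # grow with large error, decay otherwise
    return float(np.clip(mu_new, mu_min, mu_max))    # keep the step size within its two limits
```

Each equalizer update then uses mu(n) in place of the fixed mu of (2), which is the essence of the V-CMA idea.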


Fig. 5. (a), (b) Learning curves of $f_0$ and $f_1$ using the different error functions. (c), (d) Combined channel-equalizer impulse response coefficients with the same plotting options. The iteration number is taken on the x-axis on a log scale. Legend: line styles distinguish $\psi_1(y)$ (CMA), $\psi_2(y)$ (SE-CMA), $\psi_3(y)$, $\psi_4(y)$, and $\psi_5(y)$ (V-CMA).

We shall show their effectiveness through simulations. Here $\psi_{3}(y)$ and $\psi_{4}(y)$ use a constant scale factor, whereas $\psi_{5}(y)$ employs a variable $\mu(n)$. In (13), $\alpha$ and $\beta$ are two fixed parameters which control the variation of $\mu(n)$ between the two limits $\mu_{\min}$ and $\mu_{\max}$; the initial value of the step size is $\mu(0)$. Note that $u(\cdot)$ and $r(\cdot)$ appearing in (13) are the unit step and ramp functions, respectively. Without the variable step-size modification, $\psi_{5}(y)$ would be the same as its fixed-step counterpart. The update equation (2), with the step size and error function as in (13) and (12), respectively, constitutes the V-CMA algorithm and is observed to yield faster convergence. Fig. 2 shows the soft-limited variation of $\mu$ with the error on a log–log scale. A comparison of the error functions $\psi_{1}(y)$ through $\psi_{4}(y)$ is given in Fig. 3. In suggesting these error functions, we have in view the upsampling at the input end of the channel. In a real-time implementation they can be precomputed and stored in a lookup table to reduce computation time.

IV. SIMULATION EXAMPLES

In this section we demonstrate the effectiveness of the error functions just proposed. Let us take the communication system in Fig. 1 with an i.i.d. BPSK source, the noiseless channel $\mathbf{c} = [-0.0901,\ 0.6853,\ 0.7170,\ -0.0901]^{T}$ of Fig. 4, and a two-tap FSE. This is a well-behaved channel from [3, Table 1]. Fig. 4 portrays the CMA contours with the signed-error boundaries superimposed in the equalizer space.
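As a rough illustration of how contour plots like Fig. 4 can be produced, the snippet below evaluates the CMA cost $E[(\gamma - |y|^2)^2]$ on a grid of two-tap equalizer settings for the example channel, averaging over the four equiprobable BPSK source pairs. The even/odd stacking of the decimated channel convolution matrix and the grid range are assumptions made for illustration.

```python
import numpy as np

c = np.array([-0.0901, 0.6853, 0.7170, -0.0901])      # example channel of Fig. 4
C = np.vstack([c[0::2], c[1::2]])                      # assumed 2 x 2 decimated convolution matrix
sources = [np.array([a, b], float) for a in (-1, 1) for b in (-1, 1)]

f0, f1 = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
J = np.zeros_like(f0)
for s in sources:                                      # expectation over the BPSK source pairs
    q = C @ s                                          # contribution of this source vector
    y = f0 * q[0] + f1 * q[1]                          # equalizer output over the whole grid
    J += (1.0 - y ** 2) ** 2 / len(sources)            # CMA cost with gamma = 1
# Contours of J over the (f0, f1) plane correspond to the CMA contours of Fig. 4.
```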

Fig. 6. Learning trajectories in the $f_0$–$f_1$ equalizer plane superimposed on the CMA contours and the signed-error boundaries. For legibility, every 20th point is shown in the case of $\psi_i(y)$, $i = 1, 2, 3, 4$, but all successive points are shown in the trajectory corresponding to $\psi_5(y)$. Legend: markers distinguish $\psi_1(y)$ (CMA), $\psi_2(y)$ (SE-CMA), $\psi_3(y)$, $\psi_4(y)$, and $\psi_5(y)$ (V-CMA).

In the general case, the least mean-square-error solution [3] for the optimal equalizer can be found from

$$\mathbf{f}^{\dagger} = \left(\mathcal{C}\mathcal{C}^{H} + \sigma\mathbf{I}\right)^{-1}\mathcal{C}\,\mathbf{q}_{\delta}$$

where $(\cdot)^{H}$ indicates conjugate transpose, $\sigma$ is the ratio of the noise variance to the signal variance, and $\mathbf{q}_{\delta}$ is the zero-forcing combined impulse response containing a 1 in the $\delta$th position.
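The short script below evaluates this expression for the noiseless example channel, assuming (as in the earlier sketch) that the decimated convolution matrix stacks the even- and odd-indexed T/2 taps row-wise; with $\sigma = 0$ the result is the zero-forcing solution for each delay $\delta$.

```python
import numpy as np

c = np.array([-0.0901, 0.6853, 0.7170, -0.0901])       # noiseless example channel
C = np.vstack([c[0::2], c[1::2]])                       # assumed decimated convolution matrix
sigma = 0.0                                             # noise-to-signal variance ratio

for delta in range(C.shape[1]):                         # one solution per zero-forcing delay
    q_delta = np.eye(C.shape[1])[delta]                 # combined response with a 1 at position delta
    f_opt = np.linalg.solve(C @ C.T + sigma * np.eye(len(C)), C @ q_delta)
    print(f"delta = {delta}: f = {f_opt}, combined response = {C.T @ f_opt}")
```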


Fig. 7. (a), (b) Semilog plots of the learning curves of $f_0$ and $f_1$ using the different error functions for the undermodeled channel $\mathbf{c} = [0.1,\ 0.3,\ 1,\ -0.1,\ 0.5,\ 0.2]^{T}$ carrying a BPSK signal. (c)–(e) Combined channel-equalizer impulse response coefficients $[h_0\ h_1\ h_2]$ with the same plotting options; they appear to approach the values $[0\ 1\ 0]$, but there is a slight disparity. The iteration number is taken on the x-axis on a log scale. Legend: line styles distinguish $\psi_1(y)$ (CMA), $\psi_2(y)$ (SE-CMA), $\psi_3(y)$, $\psi_4(y)$, and $\psi_5(y)$ (V-CMA).

As we are not including noise in our simulations, $\sigma = 0$ here. For the present example, the optimum solution set (up to sign) consists of the points in the $f_0$–$f_1$ plane corresponding to the zero-forcing impulse response alternatives in the $h_0$–$h_1$ plane (i.e., in the combined channel-equalizer space). Crosses in Fig. 4 show the least mean-square-error solutions. Fig. 5(a) and (b) shows semilog plots of the learning curves for $f_0$ and $f_1$ using the five different algorithms; the fixed step size $\mu$ of (2) is used for $\psi_1(y)$ through $\psi_4(y)$, while $\alpha$ and $\beta$ govern the step-size variation for $\psi_5(y)$. The initial trial estimates of $f_0$ and $f_1$ are set at 2 and 0.5 in all cases. Note that the algorithms converge to the solution with different speeds. The logarithm of the iteration number is taken on the x-axis to bring out the improvement in the speed of adaptation. Fig. 5(c) and (d) shows the overall impulse response. Shown in Fig. 6 are the learning trajectories in the $f_0$–$f_1$ plane superimposed on the CMA contours and the signed-error boundaries. Because of the logarithmic improvement in speed, the trajectories are plotted using every 20th point for $\psi_i(y)$, $i = 1, \dots, 4$, and all successive points for $\psi_5(y)$.

Fig. 8. Learning trajectories in the $f_0$–$f_1$ equalizer plane superimposed on the CMA contours and the signed-error boundaries for the undermodeled channel $\mathbf{c}$. For legibility, every 100th point is shown in the case of $\psi_i(y)$, $i = 1, 2, 3, 4$, but all successive points are shown in the trajectory corresponding to $\psi_5(y)$. The least mean-square solutions are shown by crosses. Legend: markers distinguish $\psi_1(y)$ (CMA), $\psi_2(y)$ (SE-CMA), $\psi_3(y)$, $\psi_4(y)$, and $\psi_5(y)$ (V-CMA).

Note that the SE-CMA trajectory, after reaching an error boundary, follows it within that neighborhood until convergence.
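A rough sketch of the kind of learning-curve experiment described above is given below: an i.i.d. BPSK source driving the noiseless four-tap T/2 channel, a two-tap FSE initialized at [2, 0.5], and the plain CMA update of (4). The step size, the number of symbols, and the regressor convention (a pair of consecutive T/2-spaced samples) are illustrative assumptions; the other error functions can be dropped in by replacing the error line.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([-0.0901, 0.6853, 0.7170, -0.0901])     # example channel
n_sym, mu, gamma = 20_000, 2e-3, 1.0

s = rng.choice([-1.0, 1.0], size=n_sym)              # i.i.d. BPSK symbols
x = np.zeros(2 * n_sym)
x[0::2] = s                                          # upsample by two (T/2 model)
rx = np.convolve(x, c)                               # received T/2-spaced samples

f = np.array([2.0, 0.5])                             # initial trial estimate of the equalizer
trajectory = np.empty((n_sym, 2))
for n in range(n_sym):
    r = rx[2 * n : 2 * n + 2]                        # regressor of two consecutive samples
    y = f @ r                                        # equalizer output
    f = f + mu * y * (gamma - y ** 2) * r            # CMA update, eq. (4)
    trajectory[n] = f                                # record the learning trajectory

C = np.vstack([c[0::2], c[1::2]])                    # assumed decimated convolution matrix
print("final taps:", f, "combined response:", C.T @ f)
```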


Next, we repeat the above simulation for the undermodeled channel mentioned in [4], $\mathbf{c} = [0.1,\ 0.3,\ 1,\ -0.1,\ 0.5,\ 0.2]^{T}$. Corresponding to the six admissible zero-forcing impulse responses $\pm[1\ 0\ 0]^{T}$, $\pm[0\ 1\ 0]^{T}$, and $\pm[0\ 0\ 1]^{T}$, there are corresponding least mean-square-error solutions for the equalizer coefficients. Fig. 7(a) and (b) shows semilog plots of the learning curves for $f_0$ and $f_1$, and Fig. 7(c)–(e) shows the combined zero-forcing impulse response values versus iteration. Fig. 8 shows the CMA contours, the signed-error boundaries, and the learning trajectories for all five algorithms, started from the same initial guess. For legibility, only every 100th sample is shown for the first four algorithms, but all successive points are shown for the variable step-size algorithm. We notice that, even in the case of channel undermodeling, the learning behavior is acceptable, even though perfect reconstruction is not possible. In fact, the algorithms do not all converge to the same optimum, since the cost surface becomes multimodal with unequal minima. Crosses in Fig. 8 represent the least mean-square-error solutions; as one might expect in the case of channel undermodeling, they do not coincide with the minima of the CMA contours.
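For reference, such least-squares settings can be computed directly; the sketch below does so for the three positive zero-forcing targets (the negated targets simply give the negated solutions), again assuming the even/odd stacking of the decimated convolution matrix used in the earlier sketches.

```python
import numpy as np

c = np.array([0.1, 0.3, 1.0, -0.1, 0.5, 0.2])         # undermodeled channel of [4]
C = np.vstack([c[0::2], c[1::2]])                      # assumed 2 x 3 decimated matrix

for delta in range(3):                                 # targets [1 0 0], [0 1 0], [0 0 1]
    q_delta = np.eye(3)[delta]
    f_ls, *_ = np.linalg.lstsq(C.T, q_delta, rcond=None)   # least-squares fit of C^T f = q_delta
    print(f"target {q_delta}: f = {f_ls}, achieved response = {C.T @ f_ls}")
```

Since a two-tap FSE cannot invert a three-tap combined response exactly, the achieved responses only approximate the targets, which is the slight disparity noted in Fig. 7.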


V. CONCLUSION

This paper proposed error functions that may be incorporated into the CMA for fast convergence. The idea of a variable step size also results in a fast V-CMA for blind equalization. The rate of convergence is found to be high. These algorithms may be employed in the blind equalization of rapidly changing channels.

REFERENCES

[1] D. N. Godard, "Self-recovering equalization and carrier tracking in two-dimensional data communication systems," IEEE Trans. Commun., vol. COM-28, pp. 1867–1875, Nov. 1980.
[2] J. R. Treichler et al., "A new approach to multipath correction of constant modulus signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 459–472, Apr. 1983.
[3] C. R. Johnson et al., "Blind equalization using the constant modulus criterion: A review," Proc. IEEE, vol. 86, pp. 1927–1950, Oct. 1998.
[4] D. R. Brown, P. B. Schniter, and C. R. Johnson, Jr., "Computationally efficient blind equalization," in Proc. 35th Allerton Conf. on Communication, Control and Computing, Monticello, IL, Oct. 1997, pp. 54–63.
[5] S. Haykin, Adaptive Filter Theory, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[6] R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Trans. Signal Processing, vol. 40, pp. 1633–1642, July 1992.