Post-Nonlinear Blind Extraction in the Presence of Ill-Conditioned Mixing

Wai Yie Leong and Danilo P. Mandic, Senior Member, IEEE
Abstract—An extension of blind source extraction (BSE) of one or a group of sources to the case of ill-conditioned and post-nonlinear (PNL) mixing is introduced. This is achieved by a "mixed objective" type of cost function which jointly maximizes the kurtosis of a recovered source and estimates a measure of nonlinearity within the mixing system. This helps to circumvent problems with existing BSE methods, which are limited to noiseless and linear mixing models. Simulations illustrate the performance of the proposed algorithm and its usefulness, especially in the presence of very ill-conditioned mixing systems.
Index Terms—Blind source extraction, blind source separation, deflation, ill-conditioned matrix, post-nonlinear (PNL) model.
NOMENCLATURE
δw          Small change applied to the weight vector.
φ(·)        Nonlinear activation function.
μ           Learning rate for the continuous-time algorithm.
σ           (Sigma) Standard deviation.
Θ           Set of variables.
A           Mixing matrix.
f(·)        Nonlinear mapping.
cum(·)      Cumulant.
E{·}        Expected value.
exp         Exponential.
I_n         Identity matrix, or identity matrix of dimension n.
J           Cost function.
x_1(k)      First signal mixture.
x(k)        Post-nonlinear mixtures.
w_1         First weight vector.
W = [w_ij]  Extraction matrix.
y_1(k)      Signal extracted from x(k) by w_1.
β           Sign of the kurtosis.
Ω           Open subset of R^n.
(·)^T       Superscript T denotes the transpose operator.
ˆ           Estimation operator.
(·)^H       Complex conjugate transpose.
k           Discrete time, or number of iterations applied.
κ(·)        Condition number.
K(·)        Normalized kurtosis.
kurt(·)     Kurtosis.
ln          Natural logarithm.
m_q         Moments.
N           Number of possible inputs.
R^n         Real n-dimensional parameter space.
s_1(k)      First source signal.
s(k)        Vector variable of source signals.
sign(·)     Sign function (+1 for positive and -1 for negative argument).
tanh        Hyperbolic tangent.

Manuscript received January 26, 2007; revised September 30, 2007. First published April 18, 2008; current version published October 29, 2008. This paper was recommended by Associate Editor A. Kuh. W. Y. Leong is with the Communications and Signal Processing Group, Department of Electronics and Electrical Engineering, Imperial College London, London SW7 2AZ, U.K., and also with the Singapore Institute of Manufacturing Technology, 638075 Singapore (e-mail: [email protected]). D. P. Mandic is with the Communications and Signal Processing Group, Department of Electronics and Electrical Engineering, Imperial College London, London SW7 2AZ, U.K. (e-mail: [email protected]). Digital Object Identifier 10.1109/TCSI.2008.922022
I. INTRODUCTION
BLIND SIGNAL SEPARATION (BSS) [4], [20], [22] aims at recovering unobservable signals (sources) from their linear or nonlinear mixtures. This technique has recently attracted much interest due to its potentially wide range of applications. Despite the considerable progress in the theory of BSS [17], standard algorithms have typically been designed for noiseless and linear mixtures, a rather simplistic case. To this end, much effort has been dedicated to BSS for ill-conditioned and nonlinear mixing. In those cases we may as well opt to recover only a small subset of "interesting" sources in an ill-conditioned system, that is, to perform blind source extraction (BSE).¹ A combination of BSE and deflation was originally proposed in [21], and has been subsequently further extended [23], [24], [27], [28], [31]-[33]. However, the main limitation of the existing BSE algorithms is that they have been specifically designed for linear instantaneous mixtures, a condition which is not realistic for most real-world situations. To help mitigate some of these limitations, we set out to extend existing BSE techniques and derive criteria and algorithms for simultaneous post-nonlinear [9] extraction of arbitrary groups of signals of interest (fewer than the total number n of sources).

¹As a special case, the BSE problem ought to be treated differently from BSS. BSS is meant to perform signal separation simultaneously, whereas BSE extracts individual signals sequentially.
There is a variety of BSE algorithms in the open literature, including those based on higher-order statistics (HOS), such as the kurtosis, and those based on second-order statistics (SOS) [30], such as the structure using a linear predictor [19]. In the latter case, it has been widely assumed that, as long as the source signals exhibit different temporal structures, the minimisation of the mean squared prediction error will lead to successful linear extraction. However, BSE conducted this way may exhibit a relatively low success rate.

The need for the inverse modelling of an ill-conditioned and post-nonlinear system [9], [14], [29] arises in many real-world situations, yet this case is rarely addressed when developing BSE algorithms. It is, however, clear that: (1) sensors normally possess nonlinear transfer characteristics; (2) the effects of reflections and interfering signals may introduce an ill-conditioned mathematical model of the mixing system. To help circumvent the problems associated with the assumption of linear mixing, an extension in the form of a post-nonlinear mixing system has proven considerably more applicable, as it allows for nonlinear mixing features to be included within the system model. This approach has already attracted considerable interest [9], [14], [29]. Namely, in the post-nonlinear mixing model, the linear ICA theory² and the commonly exploited equivariance property might not be powerful enough to model the underlying nonlinear mapping, and BSS algorithms for the linear mixing model will generally fail. We therefore need to resort to nonlinear models, and make use of their more general approximation capabilities [8], [9]. One such set of algorithms, for blindly separating post-nonlinear mixtures using parametric nonlinear functions, was proposed by Lee [26]. It was assumed that the mixing is performed in two stages: a linear mixing process followed by a nonlinear transfer function. The focus was on a parametric sigmoidal nonlinearity and on high-order polynomials. It was further shown in [6] that, for general nonlinear ICA, there always exists an infinite number of solutions if the space of the nonlinear mixing functions is unlimited, and hence the independent components extracted from the observations are not necessarily the true source signals. Furthermore, in general, nonlinear ICA suffers from high computational complexity. Solving the nonlinear BSS problem based only on the independence assumption is possible only in some special cases, for example, when the mixtures are post-nonlinear (PNL), and under some weak assumptions [9]. To that end, in this paper, we consider the BSE problem in the presence of i) PNL mixtures and ii) ill-conditioned mixing. Notice that in this case BSE should be treated differently from BSS.
II. POST-NONLINEAR MIXTURES

Consider n unknown zero-mean sources s(k) = [s_1(k), ..., s_n(k)]^T at a discrete time instant k. The sources are observed through a nonlinear vector mapping f(·) and a (possibly ill-conditioned)³ mixing matrix A, to give the measurements x(k) = [x_1(k), ..., x_n(k)]^T.

²As a generative model, ICA aims to find the independent components from the mixture of statistically independent sources by optimising different criteria; for a review, see [3], [5], [7], [18].
Fig. 1. Block diagram of the ill-conditioned post-nonlinear mixing model.
Fig. 2. General structure of the blind source extraction (BSE).
This nonlinear mixing problem (from the unknown sources s(k) to the observations x(k)) can be mathematically described as a post-nonlinear system. We can therefore assume that the signals x(k) are nonlinear memoryless mixtures of the unknown statistically independent sources s(k), and that the observation process can be expressed as (Fig. 1)

  x(k) = f(A s(k))                                    (1)

where A is an unknown n x n mixing matrix which is assumed to be nonsingular.

A. Blind Source Extraction Procedure

Fig. 2 shows the general structure of the BSE process, which extracts one single source at a time; there are two principal stages in this process: extraction and deflation [21]. The mixtures first undergo the extraction stage to have one source recovered; after deflation, the contribution of the extracted source is removed from the mixtures. These new "deflated" mixtures contain linear combinations of the remaining sources; the next extraction process then recovers the second source; this process repeats until the last source of interest is recovered. Our goal is to extract the sources of interest without any prior knowledge of their distributions or of the (possibly ill-conditioned and nonlinear) mixing mechanism. To that end, we need to derive an extraction structure for which the learning rule involves both the estimation of the single processing unit and a procedure to estimate the nonlinear effects of ill-conditioned mixing within.

³One example of an ill-conditioned matrix can be found in [1]. This matrix can be generated as A = R H R^T, where H is the Hilbert matrix and R is a plane rotation: R = eye(n); R(i,i) = cos(theta); R(j,j) = R(i,i); R(i,j) = sin(theta); R(j,i) = -R(i,j). In MATLAB, this can be coded as H = hilb(n); ind = randperm(n); theta = 2*pi*rand; i = ind(1); j = ind(2).
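For illustration, the construction in footnote 3 can be assembled into a complete script. The following MATLAB sketch is ours, not the authors': the placeholder sources, the sample size, and the choice of tanh(·) as the memoryless nonlinearity f(·) are illustrative assumptions (the test signals actually used are described in Section IV).

    % Sketch: generate ill-conditioned post-nonlinear mixtures, eq. (1)
    n = 3; N = 5000;                       % number of sources and samples (assumed)
    s = randn(n, N);                       % placeholder zero-mean sources
    H = hilb(n);                           % Hilbert matrix: notoriously ill-conditioned
    ind = randperm(n); i = ind(1); j = ind(2);
    theta = 2*pi*rand;                     % random rotation angle
    R = eye(n);                            % plane (Givens) rotation, as in footnote 3
    R(i,i) = cos(theta); R(j,j) = R(i,i);
    R(i,j) = sin(theta); R(j,i) = -R(i,j);
    A = R*H*R';                            % ill-conditioned mixing matrix
    disp(cond(A))                          % its condition number kappa(A)
    x = tanh(A*s);                         % post-nonlinear observations x = f(A*s)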
This way, the extraction operation for a single signal can be expressed as

  y_1(k) = w_1^T g(x(k))                              (2)

where y_1(k) denotes the single extracted output signal and x(k) the mixtures for the first processing unit. The vector w_1 denotes a global demixing vector from the sources to the outputs, and g(·) is a nonlinear function explained later.

III. PROPOSED EXTRACTION ALGORITHM

For the extraction of ill-conditioned post-nonlinear mixtures, analogous to "mixed norm" approaches to adaptive filtering [16], we propose the following "mixed contrast function" criterion based on [10], [22]

  J(w_1, g) = J_1(w_1) + J_2(g)                       (3)

where p_{s_1}(·), ..., p_{s_n}(·) are the true probability density functions of the source signals [25], J_1 corresponds to the first term in the cost function (3) (kurtosis), and J_2 to the second term (nonlinearity). The left-hand part of (3) performs standard BSE, whereas the right-hand part of (3) estimates the nonlinearity within the mixing process.

A. Nonlinearity: The Activation Function

Notice that criterion (3) represents a joint constrained optimisation problem. In order to derive a learning algorithm corresponding to (3), we shall consider separately the minimisation of each part of the cost function (3). From (3), to extract only the signal y_1, the activation function in (4) is selected according to the source statistics: one form for a source signal with negative kurtosis and another for a source signal with positive kurtosis, where g(·) is a smooth nonlinearity whose update is given by (5). It is important to note that (3) holds only if the functions f(·) are invertible, a restriction that must be taken into account in the development of learning algorithms. Hence, on the basis of standard gradient descent, we obtain an approximate learning rule, given by (6), where η is the learning rate and E{·} denotes the expectation operator. In a special case, for symmetric pdf distributions of the sources and odd activation functions, (6) simplifies to (7); therefore, we can obtain the median learning rule (8), with its robust location estimate given by (9). Results in other areas show that such a median learning rule with this activation function is robust to additive noise and nonlinearities [12].

B. Normalized Kurtosis-Based Cost Function

A classical measure of non-Gaussianity is the kurtosis, which is defined as in [5] for a zero-mean random variable y_1. We can represent the term J_1 in (3) as⁴

  kurt(y_1) = E{y_1^4} - 3E^2{y_1^2}                  (10)

The normalized kurtosis K(y_1) [19] is then obtained when the kurtosis kurt(y_1) is divided by the square of the variance E{y_1^2}, to give

  K(y_1) = kurt(y_1)/E^2{y_1^2} = E{y_1^4}/E^2{y_1^2} - 3    (11)

As a cost function for kurtosis-based BSE, we may employ

  J_1(w_1) = -(β/4) K(y_1)                            (12)

where the parameter β determines the sign of the kurtosis of the signal, within

  β = sign(kurt(y_1))                                 (13)

Applying standard gradient descent to minimize the cost function, we obtain

  Δw_1 = η β [ E{y_1^2} E{y_1^3 g(x)} - E{y_1^4} E{y_1 g(x)} ] / E^3{y_1^2}    (14)

where the term 1/E^3{y_1^2} is always positive, and can be absorbed by the learning rate η.

⁴For a zero-mean variable y, the first four univariate cumulants are defined as: cum(y) = E{y} = 0 (mean); cum(y, y) = var(y) = E{y^2} (variance); cum(y, y, y) = E{y^3} (skewness); cum(y, y, y, y) = E{y^4} - 3E^2{y^2} (kurtosis).
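To make the quantities in (10)-(14) concrete, the MATLAB fragment below evaluates the sample kurtosis, the normalized kurtosis, and one batch gradient step of the form reconstructed in (14). This is a minimal sketch under our assumptions: g(·) is taken as the identity, the data are synthetic, and the positive factor in (14) is absorbed into the step size.

    % Sketch: sample kurtosis (10), normalized kurtosis (11), one step of (14)
    xg  = randn(3, 5000);                 % stand-in for g(x(k))
    w   = randn(3, 1); w = w/norm(w);     % randomly initialized weight vector
    eta = 0.01;                           % learning rate
    y   = w'*xg;                          % extracted output y_1(k), eq. (2)
    m2  = mean(y.^2); m4 = mean(y.^4);    % second- and fourth-order moments
    kurt_y = m4 - 3*m2^2;                 % kurtosis, eq. (10)
    K_y    = m4/m2^2 - 3;                 % normalized kurtosis, eq. (11)
    beta   = sign(kurt_y);                % sign of the kurtosis, eq. (13)
    N      = numel(y);
    grad   = (m2*(xg*(y.^3).') - m4*(xg*y.'))/N;  % bracketed term of eq. (14)
    w = w + eta*beta*grad;                % ascent on |kurtosis|; 1/m2^3 absorbed in eta
    w = w/norm(w);                        % keep the extraction vector normalized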
The moments m_q = E{y_1^q}, for q = 2, 3, 4, can be estimated online as

  m̂_q(k+1) = (1 - λ) m̂_q(k) + λ y_1^q(k+1),  0 < λ < 1    (15)

Applying subsequently a stochastic approximation, we obtain an online learning rule

  w_1(k+1) = w_1(k) + η(k) β Φ(y_1(k)) g(x(k))        (16)

where η(k) is a learning rate and

  Φ(y_1(k)) = m̂_2(k) y_1^3(k) - m̂_4(k) y_1(k)         (17)

is the nonlinearity. Since the positive term can be absorbed within the learning rate, we can also use the approximations of the nonlinearity given in (18) or (19). For spiky signals with positive kurtosis (super-Gaussian signals), the nonlinearity closely approximates a sigmoidal function. As a special case, applying a simple Euler approximation to (17), the update yields the discrete-time learning rule (20), where x(k) is the vector of sensor signals and Φ(·) the nonlinearity.

C. The Proposed Blind Extraction Learning Rule

Finally, combining (5) and (20), our proposed algorithm for BSE of post-nonlinear mixtures becomes (21), where y(k) = [y_1(k), ..., y_n(k)]^T are the extracted outputs. This concludes the derivation of the adaptive blind source extraction algorithm based on cost function (3).

D. Deflation Learning Rule

After the successful extraction of the first source signal y_1(k), we can apply the deflation procedure, which removes the previously extracted signals from the mixtures. This procedure may be recursively applied to extract all source signals sequentially. This means that, for deflation, we require an online linear transformation given by

  x_{i+1}(k) = x_i(k) - â_i y_i(k)                    (22)

where

  â_i = E{x_i(k) y_i(k)} / E{y_i^2(k)}                (23)

is an estimate of the i-th column of the identified mixing matrix Â. The proposed method is outlined below.

Procedure: Blind extraction and deflation of post-nonlinear mixtures

For post-nonlinearly mixed signals x(k), the single extracted signal is defined as y_1(k) = w_1^T g(x(k)), where w_1 is randomly initialized.
For signal i = 1, 2, ...
   For k = 1 : number of data points
      1) Apply the algorithm (21);
      2) Perform adaptive extraction, following criterion (3);
   End extraction for signal i
   3) Deflation method (22)-(23);
Repeat for the remaining signals, until all signals of interest are extracted.
End extraction for n signals.
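The procedure above can be pieced together as follows. This MATLAB sketch is a simplified stand-in for the full algorithm, not the authors' exact method: it uses the online update (16)-(17) with a fixed identity g(·) in place of the complete post-nonlinear rule (21), whose exact form is not reproduced here, together with the deflation step (22)-(23).

    % Sketch: sequential blind extraction with deflation, eqs. (15)-(17), (22)-(23)
    n = 3; N = 5000;
    x = randn(n, N);                     % stand-in for the preprocessed mixtures g(x)
    eta = 0.005; lambda = 0.05;          % learning rate and moment smoothing factor
    for i = 1:n                          % extract one source at a time
        w = randn(n, 1); w = w/norm(w);  % random initialization
        m2 = 1; m4 = 3;                  % moments, initialized at Gaussian values
        for k = 1:N
            y  = w'*x(:,k);                      % current output y_i(k)
            m2 = (1-lambda)*m2 + lambda*y^2;     % online moments, eq. (15)
            m4 = (1-lambda)*m4 + lambda*y^4;
            beta = sign(m4 - 3*m2^2);            % sign of the kurtosis, eq. (13)
            Phi  = m2*y^3 - m4*y;                % nonlinearity, eq. (17)
            w = w + eta*beta*Phi*x(:,k);         % online learning rule, eq. (16)
            w = w/norm(w);
        end
        y_i = w'*x;                      % extracted signal over the whole block
        a_i = (x*y_i.')/(y_i*y_i.');     % estimated mixing column, eq. (23)
        x   = x - a_i*y_i;               % deflation, eq. (22)
    end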
IV. EXPERIMENTAL RESULTS

In the experiments, the simulations were based on three source signals: s_1 with a binary distribution, s_2 a sine waveform, and s_3 with a Gaussian distribution (Fig. 3). Monte Carlo simulations of independent trials, each with 5000 iterations, were performed.
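The three benchmark sources can be reproduced along the following lines; the sample length, sine frequency, and amplitudes are our assumptions, as the paper does not list them.

    % Sketch: the three test sources of Fig. 3 (parameters assumed)
    N  = 5000; k = 1:N;
    s1 = sign(randn(1, N));              % binary (+/-1) distributed source
    s2 = sin(2*pi*k/100);                % sine waveform
    s3 = randn(1, N);                    % Gaussian distributed source
    s  = [s1; s2; s3];                   % source matrix used in eq. (1)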
Fig. 3. Original unknown sources: s_1 with binary distribution, s_2 with sine waveform, and s_3 with Gaussian distribution.

Fig. 4. Three ill-conditioned post-nonlinear mixtures.

Fig. 5. Extracted signals with binary distribution (top), sine waveform (middle), and Gaussian distribution (bottom) using the linear predictor [19].

Fig. 6. Extracted signals with binary distribution (top), sine waveform (middle), and Gaussian distribution (bottom) using the proposed nonlinear predictor.

The initial values of the weights and the demixing matrix were randomly generated for each run. The simulations were conducted without prewhitening. A 3 x 3 ill-conditioned mixing matrix⁵ A [1] was randomly generated (based on Fig. 1); the resulting mixing matrix is given in (24), with condition number κ(A) = 213.5601. If, as the nonlinear function from Section II, we use the sigmoid saturation-type function tanh(·), our ill-conditioned post-nonlinear mixtures (Fig. 4) can be modelled as

  x(k) = tanh(A s(k))                                 (25)

To measure the quantitative performance of the proposed algorithm, we employ the performance index (PI) defined by [2] in (26), computed from the global mixing-demixing matrix G = WA. The smaller the value of PI, the better the quality of extraction.

The measure of qualitative performance was the set of scatter plots presented in Fig. 7, which show that the proposed method has the potential to extract the ill-conditioned post-nonlinearly mixed sources (Fig. 6), as indicated by the output scatter plots being closely matched with those of the original sources (Fig. 7).

⁵The condition number of a matrix is the quantity κ(A) = ‖A‖ ‖A⁻¹‖. It is a measure of the sensitivity of the solution of As = b to perturbations of A or b. If the condition number of A is "large", A is said to be ill-conditioned. If the condition number is unity, A is said to be perfectly conditioned [15]. If A is normal, then κ(A) = |λ_max(A)| / |λ_min(A)|, where λ_max(A) and λ_min(A) are, respectively, the maximal and minimal (by moduli) eigenvalues of A. If ‖·‖ is the l_2 norm, then κ(A) = σ_max(A)/σ_min(A), where σ_max(A) and σ_min(A) are, respectively, the maximal and minimal singular values of A [11].
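Since the exact normalization of (26) could not be recovered here, the sketch below uses the standard Cichocki-Amari performance index over the global matrix G = WA as a stand-in, together with the condition number of footnote 5; the demixing estimate W is illustrative.

    % Sketch: condition number (footnote 5) and a PI in the spirit of (26)
    A  = hilb(3);                        % example ill-conditioned mixing matrix
    W  = inv(A) + 0.01*randn(3);         % imperfect demixing estimate (assumed)
    kappa = norm(A)*norm(inv(A));        % condition number ||A|| ||inv(A)||
    G  = W*A;                            % global mixing-demixing matrix
    n  = size(G, 1); PI = 0;
    for i = 1:n                          % Cichocki-Amari index (assumed form)
        PI = PI + sum(abs(G(i,:)))/max(abs(G(i,:))) - 1 ...
                + sum(abs(G(:,i)))/max(abs(G(:,i))) - 1;
    end
    PI = PI/(2*n*(n-1));                 % one common normalization; smaller is better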
Fig. 7. Scatter plots comparing the independence of the output signals. Column 1: signals 1 and 2; Column 2: signals 2 and 3; Column 3: signals 1 and 3.

Fig. 8. Learning curves of the extraction algorithms with condition number κ = 213.5601: (a) normalized kurtosis [19]; (b) the proposed adaptive method.

Fig. 9. Learning curves of the extraction algorithms with condition number κ = 349.6226: (a) normalized kurtosis [19]; (b) the proposed adaptive method.

Fig. 10. Learning curves of the extraction algorithms with three different nonlinearities, as shown in [13].

TABLE I. PERFORMANCE INDEX OF THE EXTRACTED SIGNALS WITH DIFFERENT CONDITION NUMBERS USING THE NORMALIZED KURTOSIS METHOD [19] AND THE PROPOSED ADAPTIVE METHOD.

The proposed adaptive method also exhibits faster convergence and a better performance index than the recently introduced state-of-the-art method [19], as shown in Fig. 8 and Fig. 9 for condition numbers 213.5601 and 450.4487, respectively. Fig. 10 shows the performance index for three different nonlinearities after the first and second extraction, for a condition number of 473.8132. The monomial nonlinearity (with the scaling condition as addressed in [13]) showed a performance index of less than 17 dB after the first extraction.

Table I shows the performance index of the extracted signals for different condition numbers (1.9247, 38.7087, 190.9155, 213.5601, 363.6029, 349.6226, and 450.4487) using the normalized kurtosis method [19] and the proposed adaptive method. We observe that the proposed adaptive method outperformed the conventional normalized kurtosis method [19] and showed a natural trend, whereas the normalized kurtosis method showed very inconsistent performance.

V. CONCLUSION

We have addressed a special class of BSS algorithms, namely ill-conditioned post-nonlinear BSE, by which we can recover a single source or a subset of sources at a time, instead of recovering all of the sources simultaneously.
The proposed adaptive algorithm does not require any preprocessing (prewhitening), and due to the design of the contrast function, it is particularly suitable for sequential blind source extraction with ill-conditioned post-nonlinear mixing matrices. Simulation results have confirmed the validity of the theoretical results and demonstrated the performance of the algorithm.

APPENDIX

By changing γ, the nonlinearity can be varied between a linear device and a hard limiter. The effects of γ can be studied by scaling y by a constant. A convenient nonlinearity is the hyperbolic tangent function, given by

  φ(y) = tanh(γ y)                                    (27)

where the positive scalar γ is used to modify the shape (slope) of φ(·). In such a case

  ∂φ(y)/∂y = γ (1 - tanh²(γ y))                       (28)

For sub-Gaussian source signals, the cubic nonlinear function φ(y) = y³ has been a favorite choice. For mixtures of sub- and super-Gaussian source signals, according to the estimated kurtosis of the extracted signals, the nonlinear function can be selected from [10].
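To visualize the role of the slope parameter in (27), one can sweep γ as below; the chosen values are illustrative only.

    % Sketch: tanh(gamma*y) from a near-linear device to a hard limiter, eq. (27)
    y = linspace(-3, 3, 200);
    hold on
    for gamma = [0.5 1 5 50]             % illustrative slope values
        plot(y, tanh(gamma*y))           % steeper gamma approaches sign(y)
    end
    xlabel('y'); ylabel('\phi(y)');
    legend('\gamma=0.5', '\gamma=1', '\gamma=5', '\gamma=50')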
REFERENCES

[1] A. Cichocki and D. Erdogmus, "MLSP 2006 data competition," 2006 [Online]. Available: http://mlsp2006.conwiz.dk/
[2] A. Cichocki and S. I. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. New York: Wiley, 2002.
[3] A. Hyvarinen, "Survey on independent component analysis," Neural Comput. Surv., vol. 2, pp. 94-128, 1999.
[4] A. Hyvarinen and E. Oja, "Independent component analysis: A tutorial," Tech. Rep., Helsinki Univ. Technology, Helsinki, Finland, Apr. 1999.
[5] A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: Wiley, 2001.
[6] A. Hyvarinen and P. Pajunen, "Nonlinear independent component analysis: Existence and uniqueness results," Neural Netw., vol. 12, pp. 429-439, 1999.
[7] A. Mansour, A. Barros, and N. Ohnishi, "Blind separation of sources: Methods, assumptions and applications," IEICE Trans. Fundamentals Electron., Commun. Comput. Sci., vol. E83-A, pp. 1498-1512, 2000.
[8] A. Taleb, "Source separation in structured nonlinear models," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2001, vol. 6, pp. 3513-3516.
[9] A. Taleb and C. Jutten, "Source separation in post-nonlinear mixtures," IEEE Trans. Signal Process., vol. 47, no. 10, pp. 2807-2820, Oct. 1999.
[10] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Comput., vol. 7, no. 6, pp. 1129-1159, 1995.
[11] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
[12] G. R. Arce, Nonlinear Signal Processing: A Statistical Approach. New York: Wiley, 2005.
[13] H. Mathis, "Nonlinear functions for blind separation and equalization," Ph.D. dissertation, Swiss Federal Institute of Technology, Zurich, Switzerland, Nov. 2001.
[14] H. Valpola, X. Giannakopoulos, A. Honkela, and J. Karhunen, "Nonlinear independent component analysis using ensemble learning: Experiments and discussion," Tech. Rep., Helsinki Univ. Technology, Neural Networks Research Centre, Helsinki, Finland, 2000.
[15] J. Blackledge, Digital Signal Processing: Mathematical and Computational Methods, Software Development and Applications, 2nd ed. London, U.K.: Horwood, 2006.
[16] J. Chambers and A. Avlonitis, "A robust mixed-norm adaptive filter algorithm," IEEE Signal Process. Lett., vol. 4, no. 2, pp. 46-48, Feb. 1997.
[17] J. Eriksson and V. Koivunen, "Identifiability and separability of linear ICA models revisited," in Proc. Int. Workshop Independent Component Anal. Blind Signal Separation, Apr. 2003, pp. 23-27.
[18] J. F. Cardoso, "Blind signal separation: Statistical principles," Proc. IEEE, vol. 86, no. 10, pp. 2009-2025, Oct. 1998.
[19] W. Liu and D. P. Mandic, "A normalized kurtosis-based algorithm for blind source extraction from noisy measurements," Signal Process., vol. 86, no. 7, pp. 1580-1585, Jul. 2006.
[20] E. Moreau and O. Macchi, "High-order contrasts for self-adaptive source separation," Int. J. Adaptive Control Signal Process., pp. 19-46, 1996.
[21] N. Delfosse and P. Loubaton, "Adaptive blind separation of independent sources: A deflation approach," Signal Process., vol. 49, pp. 59-83, 1995.
[22] P. Comon, "Independent component analysis, a new concept?," Signal Process., vol. 36, no. 3, pp. 287-314, Apr. 1994.
[23] S. A. Cruces-Alvarez, "From blind signal extraction to blind instantaneous signal separation: Criteria, algorithms, and stability," IEEE Trans. Neural Netw., vol. 15, pp. 859-873, Jul. 2004.
[24] S. A. Cruces-Alvarez, A. Cichocki, and S. I. Amari, "On a new blind signal extraction algorithm: Different criteria and stability analysis," IEEE Signal Process. Lett., vol. 9, no. 8, pp. 233-236, Aug. 2002.
[25] S. Amari, "Natural gradient works efficiently in learning," Neural Comput., vol. 10, pp. 251-276, Jan. 1998.
[26] T. W. Lee, B. Koehler, and R. Orglmeister, "Blind source separation of nonlinear mixing models," in Proc. IEEE Workshop Neural Netw. Signal Process. VII, 1997, pp. 406-415.
[27] W. Y. Leong and D. P. Mandic, "Blind extraction of noisy events using nonlinear predictor," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Apr. 2007, pp. 657-660.
[28] W. Y. Leong and D. P. Mandic, "Adaptive blind extraction for post-nonlinearly mixed signals," in Proc. IEEE Int. Workshop Mach. Learning Signal Process., Maynooth, Ireland, Sep. 2006, pp. 91-96.
[29] W. Y. Leong and J. Homer, "EKENS: A learning on nonlinear blindly mixed signals," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Mar. 2005, vol. 4, pp. 81-84.
[30] X. L. Li and X. D. Zhang, "Sequential blind extraction adopting second-order statistics," IEEE Signal Process. Lett., vol. 14, no. 1, pp. 58-61, 2007.
[31] Y. Q. Li and J. Wang, "Sequential blind extraction of instantaneously mixed sources," IEEE Trans. Signal Process., vol. 50, no. 5, pp. 997-1006, May 2002.
[32] Y. Q. Li, J. Wang, and J. M. Zurada, "Blind extraction of singularly mixed source signals," IEEE Trans. Neural Netw., vol. 11, no. 6, pp. 1413-1422, Nov. 2000.
[33] Z. L. Zhang and Z. Yi, "Robust extraction of specific signals with temporal structure," Neurocomputing, vol. 69, pp. 888-893, 2006.

Wai Yie Leong received the B.S. and Ph.D. degrees from The University of Queensland, Australia, in 2002 and 2006, respectively, both in electrical engineering.
In 1999, she was a System Engineer at the Liong Brothers Poultry Farm. From 2002 to 2005, she was appointed as a Research Assistant and Teaching Fellow of the School of Information Technology and Electrical Engineering, The University of Queensland. She is also a Teaching Fellow of St. John's College, Australia. In 2005, she joined the School of Electronics and Electrical Engineering, Imperial College London, London, U.K., as a Postdoctoral Research Fellow. Between her B.S. and Ph.D. studies, she was actively involved in research commercialization. She is now a Research Engineer with A*STAR Corporate, Singapore, and the Head of the Sensing and Conditioning Laboratory. Her research interests include blind source separation, blind extraction, smart sensors, wireless communication systems, smart antennas, and biomedical engineering.
Danilo P. Mandic (SM'97) is a Reader in Signal Processing at Imperial College London, London, U.K. He has been working in the areas of nonlinear adaptive signal processing and nonlinear dynamics. His publication record includes two research monographs, Recurrent Neural Networks for Prediction and Complex Valued Nonlinear Adaptive Filters (both with Wiley), an edited book on Signal Processing for Information Fusion (Springer, 2007), and more than 200 publications in signal and image processing. He has been a Guest Professor at KU Leuven, Belgium, TUAT, Tokyo, Japan, and Westminster University, U.K., and a Frontier Researcher at RIKEN, Japan.
Dr. Mandic has been a Member of the IEEE Technical Committee on Machine Learning for Signal Processing, and an Associate Editor for the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II, the IEEE TRANSACTIONS ON SIGNAL PROCESSING, and the International Journal of Mathematical Modelling and Algorithms. He has produced award-winning papers and products resulting from his collaboration with industry. He is a Member of the London Mathematical Society.