
An EM Algorithm for Ion-Channel Current Estimation

William J. J. Roberts, Senior Member, IEEE, and Yariv Ephraim, Fellow, IEEE

Abstract—Parameter estimation of a continuous-time Markov chain observed through a discrete-time memoryless channel is studied. An expectation-maximization (EM) algorithm for maximum likelihood estimation of the parameter of this hidden Markov process is developed and applied to a simple example of modeling ion-channel currents in living cell membranes. The approach follows that of Asmussen, Nerman, and Olsson, and of Rydén, for EM estimation of an underlying continuous-time Markov chain.

Index Terms—Ion-channel, Markov modulated process.

I. INTRODUCTION

In this paper we study parameter estimation of a continuous-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. This hidden Markov process [8] is naturally encountered in communication theory where, for example, the Markov chain represents a telegraph signal [24]. It is also commonly used in modeling ion-channel currents in living cell membranes [5], [10]. Another application is in modeling internet traffic [23]. We focus here on Gaussian channels, and refer to the process as a Markov-modulated Gaussian process (MMGP) in analogy to Markov-modulated Poisson processes [20]. The parameter of the MMGP comprises the initial state distribution, the generator of the underlying Markov chain, and the parameter of the memoryless channel. For the Gaussian channel, this latter parameter comprises a set of means and variances. We develop an expectation-maximization (EM) algorithm with the goal of achieving a maximum likelihood (ML) estimator of the MMGP parameter. We adopt the approach developed by Asmussen, Nerman, and Olsson [2], and Rydén [20], who exploited the explicit form of the likelihood function of a continuous-time Markov chain. The approach developed here is not limited to the Gaussian channel and is applicable to other channels as well. We test our algorithm on a simple example of ion-channel current estimation. For this application, the most significant parameter is the generator of the continuous-time Markov chain, which characterizes the rate of the exponentially distributed dwell time of the current at each conductance level.

Manuscript received July 21, 2006; revised March 9, 2007. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Antonia Papandreou-Suppappola. W. J. J. Roberts is with Atlantic Coast Technologies, Inc., Silver Spring, MD 20904 USA (e-mail: [email protected]). Y. Ephraim is with the Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA 22030 USA (e-mail: yephraim@gmu.edu). Digital Object Identifier 10.1109/TSP.2007.906743

Estimation of continuous-time Markov chains observed in continuous-time channels has been studied at length, first by Wonham [24], then by Dembo and Zeitouni [6], and later by Elliott, Aggoun, and Moore [7]. These approaches relied on the generalized Bayes or Kallianpur-Striebel formula [15], which uses the powerful technique of change of measures. While our problem could have been developed along the same lines, the approach of Asmussen et al. and Rydén is adequate for our model.

The problem of parameter estimation of an MMGP has been encountered in the above mentioned applications. In [5] and [23], the generator of the continuous-time Markov chain was obtained from an estimate of the transition matrix of the Markov chain formed by sampling the continuous-time chain at time instants corresponding to the observation times. The transition matrix was estimated using the Baum algorithm [3]. In that approach, the generator estimate was simply the (principal) matrix logarithm of the transition matrix normalized by the sampling period. Conditions for existence and uniqueness of a generator for a given transition matrix were given by Israel, Rosenthal, and Wei [13]. It is demonstrated in [13] that not all transition matrices have generators, and when one exists, it may not be unique. When a transition matrix does not have a generator, the discrete-time Markov chain is not embeddable in a continuous-time chain. When a transition matrix has multiple generators, each provides a different transition probability for jumps over a given time interval. For example, when the eigenvalues of the transition matrix are all distinct and positive, the matrix logarithm provides a unique generator. If any of the distinct eigenvalues of the transition matrix are negative, then no generator exists [13, Theorem 5.2]. In addition to these shortcomings, a generator matrix with a certain structure is often desired [10], and it is not generally possible to estimate a transition matrix that results in the required structured generator.

In other approaches to the problem, Fredkin and Rice [10] performed numerical maximization of the MMGP likelihood function using a general purpose optimization procedure. Michalek and Timmer [16], and Qin, Auerbach, and Sachs [17], augmented numerical MMGP likelihood maximization with analytical expressions for the gradient of the likelihood with respect to the generator matrix. In contrast with these approaches, the approach proposed here estimates the generator directly from the data in the ML sense using the EM algorithm. Each EM iteration leads to an explicit closed form estimator for the parameter of the MMGP. Furthermore, an initially imposed structure for the generator matrix, such as some of its elements being equal to zero, is maintained in each EM iteration.

We demonstrate the performance of the EM algorithm for ion-channel current modeling. We use a simple example similar to that given in [5] and [10].
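As a concrete illustration of the embedding issue discussed above, the following sketch (our own illustrative Python, not code from the paper; all names are ours) forms a generator estimate as the scaled principal matrix logarithm of a transition-matrix estimate and flags the two failure modes noted here: a real negative eigenvalue of the transition matrix, which precludes the real principal logarithm, and negative off-diagonal entries in the resulting matrix, which violate the generator structure.

```python
# Illustrative sketch (not the paper's code): generator from the principal
# matrix logarithm of an estimated transition matrix, with validity checks.
import numpy as np
from scipy.linalg import expm, logm

def generator_from_transition(P_hat, delta):
    """Return (Q_hat, valid): Q_hat = logm(P_hat)/delta when the principal
    logarithm is real, and valid = True only if Q_hat has nonnegative
    off-diagonal entries and zero row sums."""
    eig = np.linalg.eigvals(P_hat)
    # A real, nonpositive eigenvalue of P_hat precludes a real principal
    # logarithm (cf. Israel, Rosenthal, and Wei [13]); report failure.
    if np.any((np.abs(eig.imag) < 1e-12) & (eig.real <= 0)):
        return None, False
    Q_hat = logm(P_hat).real / delta
    off = Q_hat - np.diag(np.diag(Q_hat))
    valid = bool(np.all(off >= -1e-10) and
                 np.allclose(Q_hat.sum(axis=1), 0.0, atol=1e-8))
    return Q_hat, valid

if __name__ == "__main__":
    # Hypothetical 3-state generator (not the model parameters of the paper).
    Q = np.array([[-3.0, 2.0, 1.0],
                  [ 1.0, -2.0, 1.0],
                  [ 0.5, 0.5, -1.0]])
    delta = 0.1
    P = expm(Q * delta)                  # exact sampled-chain transition matrix
    print(generator_from_transition(P, delta))        # recovers Q
    P_noisy = np.clip(P + 0.05 * np.random.randn(3, 3), 1e-6, None)
    P_noisy /= P_noisy.sum(axis=1, keepdims=True)      # a crude "estimate"
    print(generator_from_transition(P_noisy, delta))   # may fail the checks
```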


We also demonstrate some of the difficulties associated with estimation of the generator through the transition matrix of the sampled Markov chain.

The plan for the remainder of this paper is as follows. In Section II we specify the MMGP process. In Section III we present the EM algorithm and discuss its efficient implementation using some of our earlier results from [21]. In Section IV, we present numerical results from the ion-channel modeling application. Comments are provided in Section V.

II. DEFINITION OF THE MMGP

An MMGP is a finite-state continuous-time homogeneous Markov chain observed through a memoryless invariant Gaussian channel. It is characterized by the distribution of the Markov chain, and by the conditional distribution of each observation given the state of the Markov chain. Let $X = \{X(t), t \geq 0\}$ denote the continuous-time Markov chain with state space, say, $\{1, \ldots, r\}$. The number of states $r$ constitutes the order of the chain and is assumed known. Let $Q = \{q_{ij}\}$ denote the generator of the Markov chain, which is assumed irreducible, and define $q_i = -q_{ii} = \sum_{j \neq i} q_{ij}$. Let $\pi$ denote the $1 \times r$ vector of the initial state probabilities of the Markov chain. Assume that we observe the MMGP over the time interval $[0, T]$, and that during this period we have uniformly spaced observations at time instants $t_k = k\Delta$, $k = 1, \ldots, N$, for some $\Delta > 0$. Let $Y_k$ denote the $k$th observation and let $y_k$ denote its realization. Let $b_j(y_k)$ denote the Gaussian probability density function (pdf) of $Y_k$ given that the Markov chain resides in state $j$ at time $t_k$. As usual, $\mu_j$ and $\sigma_j^2$ denote, respectively, the mean and variance of the Gaussian pdf.

The MMGP is characterized by the $r \times r$ transition probability matrix $e^{Q\Delta} = \{p_{ij}(\Delta)\}$, where $p_{ij}(\Delta) = p(X(t_k) = j \mid X(t_{k-1}) = i)$. The $(i, j)$ element of the transition density matrix is obtained from differentiation of $p(X(t_k) = j,\, Y_k \leq y_k \mid X(t_{k-1}) = i)$ with respect to $y_k$ and is given by

$$p_{ij}(y_k) = p_{ij}(\Delta)\, b_j(y_k). \qquad (1)$$

Note that we use $p(\cdot)$ to denote both a probability measure and a density as appropriate; the pdf of the observation sequence $y^N = (y_1, \ldots, y_N)$ will be denoted by $p(y^N; \phi)$. Additionally, in (1) we have assumed that the observation $Y_k$ depends only on $X(t_k)$. Let $B(y_k)$ denote an $r \times r$ diagonal matrix whose $j$th diagonal element is $b_j(y_k)$, and let $P(y_k) = \{p_{ij}(y_k)\}$. Rewriting (1) in matrix form we obtain

$$P(y_k) = e^{Q\Delta} B(y_k). \qquad (2)$$

Let $\phi = \left(\pi, Q, \{\mu_j, \sigma_j^2\}_{j=1}^{r}\right)$ denote the parameter of the MMGP. The density of the observations $y^N$ can now be written as

$$p(y^N; \phi) = \pi \left[\prod_{k=1}^{N} e^{Q\Delta} B(y_k)\right] \mathbf{1} \qquad (3)$$

where $\mathbf{1}$ is an $r \times 1$ vector with all elements equal to one. We shall not focus on estimation of $\pi$ since it has negligible long-term effects on the likelihood function [4]. Furthermore, when the process is assumed stationary, estimation of the stationary distribution significantly complicates the algorithm. In this case, constrained ML estimation subject to $\pi Q = 0$ must be implemented [12, p. 261]. For these reasons, we redefine the parameter as $\phi = \left(Q, \{\mu_j, \sigma_j^2\}_{j=1}^{r}\right)$ and focus on its estimation.
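For orientation, the likelihood (3) can be evaluated by direct matrix products; the following minimal sketch (our own Python, assuming a Gaussian channel with state means mu and standard deviations sigma) does exactly that. The direct product underflows for long records, which is one reason the scaled recursions developed in the next section are used in practice.

```python
# Minimal sketch (our own code) of the MMGP likelihood (3):
# p(y; phi) = pi * prod_k [ expm(Q*Delta) B(y_k) ] * 1.
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

def mmgp_likelihood(y, pi, Q, mu, sigma, delta):
    P_delta = expm(Q * delta)            # transition matrix of the sampled chain
    v = np.asarray(pi, dtype=float)      # running row vector pi * prod_k P B(y_k)
    for yk in y:
        b = norm.pdf(yk, loc=mu, scale=sigma)   # diagonal of B(y_k)
        v = (v @ P_delta) * b
    return v.sum()                       # multiply by the all-ones column vector
```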


III. EM ALGORITHM

The EM algorithm attempts to find a maximum likelihood estimate of the parameter $\phi$ from a sequence of realizations $y^N$. Our derivation and its notation is similar to that employed by Rydén [20] for EM estimation of the parameter of a Markov modulated Poisson process. We also use the computational enhancements to Rydén's approach developed in [21].

In the EM approach, the missing data is conveniently chosen to facilitate the estimation problem. Thus, the missing data could be, for example, the discrete-time Markov chain obtained from sampling the continuous-time chain at time instants corresponding to the observation times. This approach could be refined if the continuous-time chain is sampled at a higher rate. The rationale here is that refined sampling could eventually lead to the continuous-time sample path. While this choice of missing data results in a simple E-step, which is similar to that obtained in the Baum algorithm, it does not lead to an explicit M-step for estimating the generator [19]. If the generator is estimated as the matrix logarithm of the transition matrix estimate of the discretized chain, then it is not guaranteed to have nonnegative off-diagonal terms, and a priori structure of the generator cannot be guaranteed, as elaborated in Section I. The proposed approach, which capitalizes on the ideas of Asmussen et al. [2] and Rydén [20], is free of these shortcomings and hence is the method of choice in this paper.

Assume that the Markov chain jumps at the time instants $T_1 < T_2 < \cdots < T_l$ in $(0, T)$. Set $T_0 = 0$ and $T_{l+1} = T$. Let $S_m$ denote the state of $X$ during $[T_{m-1}, T_m)$. Let $N_m$ denote the number of observations that are generated within $[T_{m-1}, T_m)$. Let $x_0^T = \{x(t), 0 \leq t \leq T\}$ denote the trajectory of the chain in $[0, T]$. Let $p(y^N, x_0^T; \phi)$ denote the complete data likelihood which can be written as

$$p(y^N, x_0^T; \phi) = \pi_{s_1} \left[\prod_{m=1}^{l} q_{s_m s_{m+1}}\, e^{-q_{s_m}(T_m - T_{m-1})}\right] e^{-q_{s_{l+1}}(T - T_l)}
\times \prod_{m=1}^{l+1} \prod_{k:\, t_k \in [T_{m-1}, T_m)} b_{s_m}(y_k) \qquad (4)$$

where $\pi_{s_1}$ represents the $s_1$th element of the row vector $\pi$. The first line in (4) represents the density of the trajectory of the Markov chain $x_0^T$ [1], [20]. The second line represents the Gaussian pdf of $y^N$ given $x_0^T$. Let $I(\cdot)$ denote the indicator function. Taking the logarithm of (4) yields

$$\log p(y^N, x_0^T; \phi) = \log \pi_{s_1} + \sum_{i=1}^{r} \sum_{j \neq i} m_{ij} \log q_{ij} - \sum_{i=1}^{r} q_i D_i + \sum_{j=1}^{r} \sum_{k=1}^{N} \chi_k(j) \log b_j(y_k) \qquad (5)$$

where

$$D_i = \int_0^T I(X(t) = i)\, dt \qquad (6)$$

denotes the time spent in state $i$ during $[0, T]$,

$$m_{ij} = \sum_{m=1}^{l} I(S_m = i,\, S_{m+1} = j) \qquad (7)$$

denotes, for $i \neq j$, the number of jumps of $X$ from state $i$ to state $j$ in $[0, T]$,

$$\chi_k(j) = I(X(t_k) = j) \qquad (8)$$

indicates whether or not the Markov chain was in state $j$ when $y_k$ was observed, and

$$n_j = \sum_{k=1}^{N} \chi_k(j) \qquad (9)$$

denotes the number of observations that occurred while $X$ was in state $j$.

A new parameter estimate, say $\hat\phi$, is obtained as the parameter which maximizes the expected value of (5) given $y^N$ and a current parameter estimate $\phi_0$. Performing the constrained maximization leads to the new estimate given by

$$\hat q_{ij} = \frac{\hat m_{ij}}{\hat D_i}, \qquad i \neq j \qquad (10)$$

$$\hat\mu_j = \frac{1}{\hat n_j} \sum_{k=1}^{N} y_k\, \hat\chi_k(j) \qquad (11)$$

$$\hat\sigma_j^2 = \frac{1}{\hat n_j} \sum_{k=1}^{N} (y_k - \hat\mu_j)^2\, \hat\chi_k(j) \qquad (12)$$

where $\hat m_{ij}$, $\hat D_i$, $\hat n_j$, and $\hat\chi_k(j)$ are, respectively, conditional mean estimates under $\phi_0$ of $m_{ij}$, $D_i$, $n_j$, and $\chi_k(j)$ given $y^N$. These intuitive expressions constitute the M-step of the EM algorithm. The estimate of $\pi$, which is not necessarily the stationary distribution, is given by $\hat\pi_j = p_{\phi_0}(X(0) = j \mid y^N)$.

It remains to perform the E-step of the EM algorithm which provides the above conditional expectations. We first focus on $\hat m_{ij}$, the conditional mean of the number of jumps of $X$ from state $i$ to state $j$ given $y^N$. Using a similar approach to that taken in [2, p. 440], it can be shown that

$$\hat m_{ij} = \int_0^T p_{\phi_0}\big(X(t^-) = i,\, X(t) = j \mid y^N\big)\, dt. \qquad (13)$$

This result is similar to [20, Eq. 13] of Rydén for MMPPs. Next, let $0 \leq t \leq T$, and define $\lfloor x \rfloor$ as the largest integer not exceeding $x$ and $\lceil x \rceil$ as the smallest integer not smaller than $x$, for any real valued $x$. Note that in our notation, observation $y_{\lceil t/\Delta \rceil}$ appears at time $t_{\lceil t/\Delta \rceil} \geq t$. Applying Bayes' rule to the conditional expectation of (7) we obtain

$$\hat m_{ij} = \frac{1}{p(y^N; \phi_0)} \int_0^T p\big(y^N,\, X(t^-) = i,\, X(t) = j;\, \phi_0\big)\, dt. \qquad (14)$$

Next, using conditional independence of $y_1^{\lfloor t/\Delta \rfloor}$ and $y_{\lceil t/\Delta \rceil}^{N}$ given $X(t)$ yields

$$p\big(y^N,\, X(t^-) = i,\, X(t) = j;\, \phi_0\big) = \alpha_i(t)\, q_{ij}\, \beta_j(t) \qquad (15)$$

where $\alpha_i(t) = p\big(y_1^{\lfloor t/\Delta \rfloor},\, X(t) = i;\, \phi_0\big)$, $\beta_j(t) = p\big(y_{\lceil t/\Delta \rceil}^{N} \mid X(t) = j;\, \phi_0\big)$, and $y_m^n = (y_m, \ldots, y_n)$. Performing the integration over the disjoint intervals $(t_{k-1}, t_k]$, $k = 1, \ldots, N$, yields

$$\hat m_{ij} = \frac{q_{ij}}{p(y^N; \phi_0)} \sum_{k=1}^{N} \int_{t_{k-1}}^{t_k} \alpha_i(t)\, \beta_j(t)\, dt. \qquad (16)$$

The integrand in (16) comprises the product of forward and backward densities. For the forward density we have

$$\alpha_i(t) = \pi \left[\prod_{n=1}^{k-1} e^{Q\Delta} B(y_n)\right] e^{Q(t - t_{k-1})}\, e_i, \qquad t_{k-1} < t \leq t_k \qquad (17)$$

where $e_i$ denotes an $r \times 1$ vector of which the $i$th element is one and all remaining elements are zero. For the backward density we have

$$\beta_j(t) = e_j^{\prime}\, e^{Q(t_k - t)} B(y_k) \left[\prod_{n=k+1}^{N} e^{Q\Delta} B(y_n)\right] \mathbf{1}, \qquad t_{k-1} < t \leq t_k \qquad (18)$$

where $e_j^{\prime}$ denotes the transpose of $e_j$.
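For reference, once the E-step has produced the conditional expectations of (6)–(9), the M-step (10)–(12) amounts to a few array operations. The following sketch is our own illustrative code with hypothetical variable names, not the authors' implementation; the E-step outputs are assumed given.

```python
# Sketch (our own code) of the M-step (10)-(12), given E-step outputs:
# m_hat[i, j]   : expected number of i->j jumps
# D_hat[i]      : expected time spent in state i
# chi_hat[k, j] : posterior probability that X(t_k) = j
# y[k]          : k-th observation
import numpy as np

def m_step(m_hat, D_hat, chi_hat, y):
    Q_new = m_hat / D_hat[:, None]                  # off-diagonal rates, eq. (10)
    np.fill_diagonal(Q_new, 0.0)
    np.fill_diagonal(Q_new, -Q_new.sum(axis=1))     # rows of a generator sum to zero
    n_hat = chi_hat.sum(axis=0)                     # expected observation counts, eq. (9)
    mu_new = (chi_hat * y[:, None]).sum(axis=0) / n_hat               # eq. (11)
    var_new = (chi_hat * (y[:, None] - mu_new) ** 2).sum(axis=0) / n_hat   # eq. (12)
    return Q_new, mu_new, var_new
```

Note that zeros in the expected jump counts leave the corresponding entries of the updated generator at zero, which is the structure-preservation property discussed in this section.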


The forward and backward densities have large dynamic range and hence must be scaled appropriately [14]. A scaling procedure was developed in [21] and is applied here. The scaling factor at $t_0$ is given by $c_0 = \pi \mathbf{1} = 1$, and at $t_k$, $k = 1, \ldots, N$, by $c_k = p(y_k \mid y_1^{k-1}; \phi_0)$. Define the scaled forward density as

$$\bar\alpha_k = \frac{\pi \prod_{n=1}^{k} e^{Q\Delta} B(y_n)}{\prod_{n=0}^{k} c_n}. \qquad (19)$$

The forward recursion follows immediately as

$$\bar\alpha_k = \frac{1}{c_k}\, \bar\alpha_{k-1}\, e^{Q\Delta} B(y_k), \qquad k = 1, \ldots, N \qquad (20)$$

with $\bar\alpha_0 = \pi$. The $c_k$ term for $k = 1, \ldots, N$ can be readily calculated as follows:

$$c_k = \frac{p(y_1^k; \phi_0)}{p(y_1^{k-1}; \phi_0)} \qquad (21)$$

$$= \frac{\pi \left[\prod_{n=1}^{k} e^{Q\Delta} B(y_n)\right] \mathbf{1}}{\pi \left[\prod_{n=1}^{k-1} e^{Q\Delta} B(y_n)\right] \mathbf{1}} \qquad (22)$$

$$= \frac{\bar\alpha_{k-1}\, e^{Q\Delta} B(y_k)\, \mathbf{1}}{\bar\alpha_{k-1}\, \mathbf{1}}. \qquad (23)$$

Hence, since $\bar\alpha_{k-1} \mathbf{1} = 1$,

$$c_k = \bar\alpha_{k-1}\, e^{Q\Delta} B(y_k)\, \mathbf{1}. \qquad (24)$$

Define the scaled backward density as

$$\bar\beta_k = \frac{\left[\prod_{n=k+1}^{N} e^{Q\Delta} B(y_n)\right] \mathbf{1}}{\prod_{n=k+1}^{N} c_n}. \qquad (25)$$

The scaled backward recursion is given by

$$\bar\beta_k = \frac{1}{c_{k+1}}\, e^{Q\Delta} B(y_{k+1})\, \bar\beta_{k+1}, \qquad k = N-1, \ldots, 0 \qquad (26)$$

with $\bar\beta_N = \mathbf{1}$. Substituting the scaled forward-backward densities (19) and (25) into (16) we have that

$$\hat m_{ij} = q_{ij} \sum_{k=1}^{N} \frac{1}{c_k} \int_{t_{k-1}}^{t_k} \left[\bar\alpha_{k-1}\, e^{Q(t - t_{k-1})}\right]_i \left[e^{Q(t_k - t)} B(y_k)\, \bar\beta_k\right]_j\, dt. \qquad (27)$$

Next, we obtain a compact expression for the matrix $\{\hat m_{ij}\}$. Define the following two matrices given by:

$$\Gamma_k = B(y_k)\, \bar\beta_k\, \bar\alpha_{k-1} \qquad (28)$$

and

$$M = \sum_{k=1}^{N} \frac{1}{c_k} \int_0^{\Delta} e^{Qt}\, \Gamma_k\, e^{Q(\Delta - t)}\, dt. \qquad (29)$$

It can be shown that

$$\{\hat m_{ij}\} = Q \circ M^{\prime} \qquad (30)$$

where $\circ$ denotes term-by-term multiplication of the two matrices, $M^{\prime}$ is the transpose of $M$, and only the off-diagonal entries of (30) are used. Note from (30) and (10) that the new EM estimate of $Q$ maintains any zeros present in the current estimate. The integral in (29) can be efficiently evaluated using an approach developed by Van Loan [22, Theorem 1] and applied in [21]. To apply this approach here, define the $2r \times 2r$ block-triangular matrix

$$A_k = \begin{pmatrix} Q & \Gamma_k \\ 0 & Q \end{pmatrix}. \qquad (31)$$

Then the $k$th integral in (29) is given by the upper right $r \times r$ block of the matrix $e^{A_k \Delta}$. Thus, the evaluation of the integral of matrix exponentials in (29) can be replaced by evaluation of a matrix exponential of higher dimension. The latter can be efficiently performed using the diagonal Padé approximation with repeated squaring, as recommended in [22].

To continue the description of the E-step we now address the evaluation of $\hat D_i$, the conditional mean of the time that $X$ spent in state $i$ given $y^N$. Using (6), this estimate is given by

$$\hat D_i = \int_0^T p_{\phi_0}\big(X(t) = i \mid y^N\big)\, dt. \qquad (32)$$

Using the conditional independence of $y_1^{\lfloor t/\Delta \rfloor}$ and $y_{\lceil t/\Delta \rceil}^{N}$ given $X(t)$, and the other techniques used in the evaluation of $\hat m_{ij}$, it is readily shown that

$$\hat D_i = [M]_{ii}. \qquad (33)$$

To complete the description of the E-step we address the evaluation of $\hat\chi_k(j)$. The remaining estimate $\hat n_j$ is obtained from $\hat\chi_k(j)$ as evident from (9). From (17), (18), (20), and (26) it is apparent that the $j$th elements of $\bar\alpha_k$ and $\bar\beta_k$, denoted by $\bar\alpha_k(j)$ and $\bar\beta_k(j)$, respectively, are given by

$$\bar\alpha_k(j) = \frac{p\big(y_1^k,\, X(t_k) = j;\, \phi_0\big)}{\prod_{n=0}^{k} c_n}, \qquad \bar\beta_k(j) = \frac{p\big(y_{k+1}^N \mid X(t_k) = j;\, \phi_0\big)}{\prod_{n=k+1}^{N} c_n}. \qquad (34)$$

Hence

$$\hat\chi_k(j) = p_{\phi_0}\big(X(t_k) = j \mid y^N\big) = \bar\alpha_k(j)\, \bar\beta_k(j). \qquad (35)$$

Finally, the log-likelihood of the process is given by

$$\log p(y^N; \phi) = \sum_{k=1}^{N} \log c_k. \qquad (36)$$
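The Van Loan construction in (29)–(31) can be illustrated with a short sketch (our own code; Gamma stands for the matrix in (28)) that obtains one of the integrals from a single higher-dimensional matrix exponential and cross-checks it against a trapezoidal quadrature.

```python
# Sketch (our own code) of Van Loan's method [22] for one term of (29):
# int_0^Delta expm(Q t) Gamma expm(Q (Delta - t)) dt equals the upper-right
# r-by-r block of expm([[Q, Gamma], [0, Q]] * Delta).
import numpy as np
from scipy.linalg import expm

def van_loan_integral(Q, Gamma, delta):
    r = Q.shape[0]
    A = np.zeros((2 * r, 2 * r))
    A[:r, :r] = Q
    A[:r, r:] = Gamma
    A[r:, r:] = Q
    return expm(A * delta)[:r, r:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
    Gamma = rng.standard_normal((2, 2))
    delta = 0.05
    # Crude trapezoidal cross-check of the same integral.
    ts = np.linspace(0.0, delta, 2001)
    vals = np.stack([expm(Q * t) @ Gamma @ expm(Q * (delta - t)) for t in ts])
    dt = ts[1] - ts[0]
    quad = (vals[:-1] + vals[1:]).sum(axis=0) * dt / 2.0
    print(np.allclose(van_loan_integral(Q, Gamma, delta), quad, atol=1e-8))
```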


IV. NUMERICAL RESULTS

In this section we demonstrate the performance of the proposed EM algorithm using data that simulates ion-channel currents in living cell membranes. Ion-channel currents are generally modeled as finite-state continuous-time homogeneous Markov chains whose states represent conductance levels. The ion-channel current dwells at a conductance level for an exponentially distributed time. Ion-channel currents are generally recorded using the patch clamp technique which introduces substantial amounts of noise and deterministic interferences. In its simplest form the noise is assumed additive, Gaussian, and white. The recorded signal can thus be modeled as a finite-state continuous-time Markov chain observed through a memoryless Gaussian channel. The ion-channel currents were modeled here using an MMGP similar to that proposed by Fredkin and Rice [10, Fig. 2]. The parameter of the model used here is given by (37).

This parameter differs from that specified in [10] only in the value of one of the means, which was changed from 0 to 0.5 to avoid having two states with equal means and equal variances. States with equal means and variances can result in an unidentifiable model, see [8], [9], [18]. A uniform initial distribution $\pi$ was used to avoid bias toward any particular state. Note that the constant variance of the Gaussian processes represents the variance of the measurement noise in the patch clamp technique. As such, this variance is independent of the state of the continuous-time Markov chain. The proposed EM algorithm could be specialized for the constant variance case by an obvious modification of (12), but this was not done here.

The parameter specified in (37) was used to generate realizations from the MMGP. The MMGP samples were generated at regular intervals of $\Delta$ seconds each, for a total duration of $T$ seconds. We have tested three possible values for $\Delta$; the sampling rate used in [10] corresponds to one of these values. Independent Gaussian noise with zero mean and standard deviation equal to 0.3 was added to each sampled Markov chain realization. The number of samples in each MMGP realization was $N = T/\Delta$. A total of 1000 realizations were generated for each value of $\Delta$ considered. Each triplet in the $3 \times 1000$ MMGP realizations shared the same Markov chain realization. The algorithm was initialized by (38). The true $\Delta$ was known to the algorithm. As in the application of hidden Markov processes to speech, performance of the algorithm here was improved by enforcing a minimum for the estimated variances. Specifically, any estimated variance below 0.04 was set to 0.04. The algorithm was terminated when the relative difference between log-likelihood values of successive iterations fell below a fixed threshold or when the number of iterations reached 500, whichever occurred first. Matrix exponentials were calculated using the Matlab function "expm" which relies upon the Padé approximation with repeated squaring.

For each component of the parameter of the MMGP, say the $i$th component $\phi_i$, the empirical bias and variance of its estimate were calculated. For a given sampling period $\Delta$, let $\hat\phi_i^{(n)}(\Delta)$ denote an estimate of $\phi_i$ as obtained from the $n$th MMGP realization. The empirical mean, bias, and standard deviation were, respectively, obtained from

$$\bar\phi_i(\Delta) = \frac{1}{1000} \sum_{n=1}^{1000} \hat\phi_i^{(n)}(\Delta), \qquad (39)$$

$$b_i(\Delta) = \bar\phi_i(\Delta) - \phi_i, \qquad (40)$$

and

$$s_i(\Delta) = \left[\frac{1}{1000} \sum_{n=1}^{1000} \left(\hat\phi_i^{(n)}(\Delta) - \bar\phi_i(\Delta)\right)^2\right]^{1/2}. \qquad (41)$$

Table I provides the empirical bias and standard deviation in estimating the MMGP parameter for the three tested sampling periods. From this table, the intuitive result that the estimation accuracy of the Gaussian channel parameter improved as the sampling rate increased for fixed $T$ is seen. This is intuitively expected since the noise is white, and increasing the sampling rate provides new uncorrelated noise measurements when the Markov chain is assumed known, as is the case in every EM iteration. The situation is different with regard to the estimation accuracy of the generator $Q$. Here, the estimation accuracy depends primarily on $T$ rather than on the sampling rate, since the Markov chain is continuous time and it is treated as such by the proposed EM algorithm. Note also that the EM algorithm does not change the value of any entry of the generator whose initial value was zero or whose value became zero at any subsequent iteration. The element of the generator associated with the first state was hardest to estimate due to the structure of $Q$, which implies that the chain spends a relatively short time in the first state.

For comparison purposes, an alternative EM approach, see, e.g., [5], [23], which relies on the classical Baum algorithm was studied. Recall that in the proposed EM approach, the missing data was chosen to be the continuous-time Markov chain. In the alternative EM approach, the missing data was taken as the Markov chain obtained from sampling of the continuous-time chain at the time instants of the observations. This results in a discrete-time Markov chain observed through a discrete-time Gaussian channel. This is essentially the hidden Markov process commonly used in other applications, such as speech modeling, for which the Baum algorithm is appropriate. The parameter of interest, however, remains the generator of the continuous-time Markov chain rather than the transition matrix of the sampled chain normally estimated by the Baum algorithm. Since the Baum algorithm does not lead to an explicit generator estimate, it has been a common practice among users of this approach to first estimate the transition matrix of the sampled chain, and then to infer the generator from the principal matrix logarithm of the transition matrix estimate.
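The data generation described above can be sketched as follows. This is our own illustrative code; the generator, state means, sampling period, and duration are placeholders rather than the values specified in (37).

```python
# Sketch (our own code) of the data generation described above: simulate a
# continuous-time Markov chain with generator Q, sample it every delta seconds,
# and add white Gaussian noise. Parameter values here are placeholders, not (37).
import numpy as np

def simulate_mmgp(Q, mu, noise_std, delta, T, rng):
    r = Q.shape[0]
    rates = -np.diag(Q)                       # exponential dwell rates q_i
    jump_probs = Q / rates[:, None]           # embedded jump-chain probabilities
    np.fill_diagonal(jump_probs, 0.0)
    times = np.arange(delta, T + 1e-12, delta)    # observation instants k*delta
    state = rng.integers(r)                   # uniform initial distribution
    t, states, k = 0.0, np.empty(len(times), dtype=int), 0
    while k < len(times):
        dwell = rng.exponential(1.0 / rates[state])   # exponential dwell time
        while k < len(times) and times[k] <= t + dwell:
            states[k] = state
            k += 1
        t += dwell
        state = rng.choice(r, p=jump_probs[state])    # next state of the chain
    y = mu[states] + noise_std * rng.standard_normal(len(times))
    return times, states, y

rng = np.random.default_rng(1)
Q = np.array([[-3.0, 2.0, 1.0], [1.0, -2.0, 1.0], [0.5, 0.5, -1.0]])  # placeholder
_, _, y = simulate_mmgp(Q, mu=np.array([0.0, 0.5, 1.0]), noise_std=0.3,
                        delta=1.28e-3, T=2.0, rng=rng)
```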


TABLE I EMPIRICAL BIAS AND STANDARD DEVIATION IN PARAMETER ESTIMATION USING THE PROPOSED EM APPROACH

TABLE II PERCENTAGE OF CASES WHERE MATRIX LOGARITHM COULD NOT BE APPLIED IN THE BAUM-BASED APPROACH

As noted in Section I, and as will be demonstrated in this section, this approach does not always work, since not all transition matrices correspond to valid generators. Thus a generator estimated in this way may have negative off-diagonal elements, and applying the matrix logarithm may not be possible due to negative eigenvalues of the transition matrix. We shall refer to the above alternative EM approach as the Baum-based approach.

The Baum-based approach and the proposed EM approach were compared using the data and experimental setup described earlier. The initial transition matrix of the Baum-based approach was obtained using the matrix exponential of the generator specified in (38). For the three tested sampling periods, Table II shows the percentage of the MMGP realizations for which application of the matrix logarithm in the Baum-based approach was not possible. This occurred whenever the transition matrices estimated by the Baum algorithm had at least one negative eigenvalue. Tables III and IV provide for both approaches the empirical bias and standard deviation, respectively, in estimating the MMGP parameter. To compare the two approaches on the same data, these empirical estimates were calculated using only those realizations for which the Baum-based approach yielded valid generator estimates. Tables II–IV show that by increasing the sampling rate, the failure rate of the Baum-based approach to produce valid generator estimates was reduced, and the performance of the Baum-based approach was improved.

For the sampling rate corresponding to the $\Delta$ used in [10], the Baum-based approach suffered a failure rate of 4.3%, and its performance was generally poorer compared to the proposed EM approach. Furthermore, for that sampling rate, the generator estimated by the Baum-based approach often contained negative off-diagonal elements. In particular, some off-diagonal elements were frequently estimated as negative numbers with small magnitudes.

V. COMMENTS

We have derived an EM algorithm for estimating the parameter of a continuous-time homogeneous Markov chain observed through a discrete-time invariant memoryless channel. We compared the algorithm with a Baum-based approach in which the generator of the continuous-time chain is obtained from the estimated transition matrix of a sampled Markov chain. We have demonstrated that the Baum-based approach frequently fails to provide a valid generator with nonnegative off-diagonal elements. Furthermore, in some cases (greater than 21% at a sampling period of 1.28 ms) application of the matrix logarithm to the transition matrix estimated by the Baum-based approach was not possible.

If the Baum algorithm is derived using the notation and scaling approach used here, it can be shown that while the Baum and the proposed EM algorithms both utilize the matrix $M$ of (29), they differ in how $M$ is used to estimate the underlying Markov chain. For the proposed EM approach, $M$ is used to estimate the generator matrix through (30), (33), and (10). For the Baum algorithm, a transition matrix estimate is obtained from $M$ and a current transition matrix estimate.


TABLE III EMPIRICAL BIAS OBTAINED BY THE PROPOSED EM APPROACH AND THE BAUM-BASED APPROACH

TABLE IV EMPIRICAL STANDARD DEVIATION OBTAINED BY THE PROPOSED EM APPROACH AND BY THE BAUM-BASED APPROACH

As this is the primary difference between the two algorithms, an implementation of one algorithm can be easily converted to an implementation of the other. Similarly, algorithms for discrete-time Markov chain estimation in correlated noise, see, e.g., [11], can be readily modified to estimate a continuous-time Markov chain.

A number of further research avenues are possible. First, the algorithm derived here is not limited to Gaussian channels. Second, the MMGP can be readily extended to multivariate processes. Third, our derivation could be readily extended to processes with irregular intersample times. In this case one defines the intersample intervals individually, and (29) becomes a sum of integrals over those disjoint intervals. The number of matrix exponentials to be evaluated is, thus, increased. The remaining aspects of the algorithm follow. An interesting example is a random sampling rate, for example, as a result of jitter in A/D conversion.

ACKNOWLEDGMENT

The authors thank the anonymous referees for their useful comments which significantly improved the presentation of this paper. The first author also thanks C. J. Willy for his useful comments.

REFERENCES

[1] A. Albert, "Estimating the infinitesimal generator of a continuous-time, finite state Markov process," Ann. Math. Statist., vol. 33, no. 2, pp. 727–753, Jun. 1962.
[2] S. Asmussen, O. Nerman, and M. Olsson, "Fitting phase-type distributions via the EM algorithm," Scand. J. Statist., vol. 23, no. 4, pp. 419–441, 1996.
[3] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," Ann. Math. Statist., vol. 37, pp. 1554–1563, 1966.


[4] P. Billingsley, Convergence of Probability Measures. New York: Wiley, 1968.
[5] S. H. Chung, V. Krishnamurthy, and J. B. Moore, "Adaptive processing techniques based on hidden Markov models for characterizing very small channel currents buried in noise and deterministic interference," Philos. Trans.: Biolog. Sci., vol. 334, no. 1271, pp. 357–384, Dec. 1991.
[6] A. Dembo and O. Zeitouni, "Parameter estimation of partially observed continuous time stochastic processes via the EM algorithm," Stoch. Process. Appl., vol. 23, no. 1, pp. 91–113, 1986.
[7] R. J. Elliott, L. Aggoun, and J. B. Moore, Hidden Markov Models: Estimation and Control. New York: Springer-Verlag, 1994.
[8] Y. Ephraim and N. Merhav, "Hidden Markov processes," IEEE Trans. Inf. Theory, vol. 48, pp. 1518–1569, Jun. 2002.
[9] D. R. Fredkin and J. A. Rice, "On aggregated Markov processes," J. Appl. Probab., vol. 23, pp. 208–214, 1986.
[10] D. R. Fredkin and J. A. Rice, "Maximum likelihood estimation and identification directly from single-channel recordings," in Proc. R. Soc. London, 1992, vol. 125, pp. 125–132.
[11] D. R. Fredkin and J. A. Rice, "Fast evaluation of the likelihood of an HMM: Ion channel currents with filtering and colored noise," IEEE Trans. Signal Process., vol. 49, no. 3, pp. 625–633, Mar. 2001.
[12] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes. Oxford, U.K.: Oxford Science, 2001.
[13] R. B. Israel, J. S. Rosenthal, and J. Z. Wei, "Finding generators for Markov chains via empirical transition matrices with application to credit ratings," Math. Finance, vol. 11, no. 2, pp. 245–265, Apr. 2001.
[14] S. E. Levinson, L. R. Rabiner, and M. M. Sondhi, "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition," Bell Syst. Tech. J., vol. 62, no. 4, pp. 1035–1074, Apr. 1983.
[15] R. S. Lipster and A. N. Shiryayev, Statistics of Random Processes. New York: Springer-Verlag, 1977, pt. I.
[16] S. Michalek and J. Timmer, "Estimating rate constants in hidden Markov models by the EM algorithm," IEEE Trans. Signal Process., vol. 47, pp. 226–228, Jan. 1999.
[17] F. Qin, A. Auerbach, and F. Sachs, "A direct optimization approach to hidden Markov modeling for single channel kinetics," Biophys. J., vol. 79, pp. 1915–1927, Oct. 2000.
[18] T. Rydén, "Consistent and asymptotically normal parameter estimators for hidden Markov models," Ann. Statist., vol. 22, no. 4, pp. 1884–1895, 1994.
[19] T. Rydén, "Parameter estimation for Markov modulated Poisson processes," Commun. Statist. Stochastic Models, vol. 10, no. 4, pp. 795–829, 1994.


[20] T. Rydén, "An EM algorithm for estimation in Markov-modulated Poisson processes," Computat. Statist. Data Analysis, vol. 21, pp. 431–447, 1996.
[21] W. J. J. Roberts, Y. Ephraim, and E. Dieguez, "On Rydén's EM algorithm for estimating MMPPs," IEEE Signal Process. Lett., vol. 13, no. 6, pp. 373–377, Jun. 2006.
[22] C. F. Van Loan, "Computing integrals involving the matrix exponential," IEEE Trans. Autom. Control, vol. 23, no. 3, pp. 395–404, 1978.
[23] W. Wei, B. Wang, and D. Towsley, "Continuous-time hidden Markov models for network performance," Perform. Eval., vol. 49, no. 1–4, pp. 129–146, Sep. 2002.
[24] W. M. Wonham, "Some applications of stochastic differential equations to optimal nonlinear filtering," SIAM J. Control, ser. A, vol. 2, no. 3, pp. 347–369, 1965.

William J. J. Roberts (S’89–M’90–SM’06) received the Ph.D. degree in information technology from George Mason University, Fairfax, VA, in 1997. From 1990 to 2000, he was with the Defence Science Technology Organization, Salisbury, South Australia. From 1998 to 1999, he held a postdoctoral position at the Tokyo Institute of Technology, Tokyo, Japan. Since 2000, he has been with Atlantic Coast Technologies, Inc. Silver Spring, MD. His interests are in statistical signal processing.

Yariv Ephraim (S’82–M’84–SM’90–F’94) received the D.Sc. degree in electrical engineering in 1984 from the Technion, Israel Institute of Technology, Haifa. During 1984-1985, he was a Rothschild Postdoctoral Fellow with the Information Systems Laboratory, Stanford University, Stanford, CA. During 1985-1993, he was a Member of Technical Staff at the Information Principles Research Laboratory, AT&T Bell Laboratories, Murray Hill, NJ. In 1991 he joined George Mason University, Fairfax, VA, where he is currently Professor of Electrical and Computer Engineering. His research interests are in statistical signal processing.