
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 41, NO. 9, SEPTEMBER 1993

Locally Monotonic Regression

Alfredo Restrepo, Member, IEEE, and Alan C. Bovik, Senior Member, IEEE

Abstract-The concept of local monotonicity appears in the study of the set of root signals of the median filter and provides a measure of the smoothness of a signal. The median filter is a suboptimal smoother under this measure of smoothness, since a filter pass does not necessarily yield a locally monotonic output; even if a locally monotonic output does result, there is no guarantee that it will possess other desirable properties such as optimal similarity to the original signal. Locally monotonic regression is a technique for the optimal smoothing of finite-length discrete real signals under such a criterion. A theoretical framework is developed in which the existence of locally monotonic regressions is proven, and algorithms for their computation are given. Regression is considered as an approximation problem in R^n; the criterion of approximation is derived from a semimetric, and the approximating set is the collection of signals sharing the property of being locally monotonic.

I. INTRODUCTION

THE running median was conceived by Tukey as a tool for the exploration of time series. It is used as a smoother of discrete signals, due to its ability to preserve monotonic segments, edges, and constant neighborhoods [1], [2], while eliminating short-duration pulses; since signal components of these types have overlapping frequency spectra, accomplishing the same task with a time-invariant linear filter is impossible. The median filter has a simple and concise local definition; however, it is not easy to provide a precise global characterization of it. This becomes evident in the characterization of the set of its root signals [1], [3], [4], which is not obvious. For the smoothing of signals, several alternatives to the median filter have been proposed, e.g., rank order smoothers [5], the recursive median filter [6], moving trimmed means [7], FIR-median filters [8], and others. A property of signals that has relevance in the study of the median filter is local monotonicity [1]. Local monotonicity provides a criterion of smoothness since it sets a restriction on how often changes of trend (increasing to decreasing and vice versa) may occur.

Manuscript received July 26, 1991; revised November 11, 1992. The associate editor coordinating the review of this paper and approving it for publication was Prof. Aggelos K. Katsaggelos. A. Restrepo was with the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712-1084. He is now with the Departamento de Ingeniería Eléctrica, Universidad de los Andes, A.A. 4976, Bogotá, Colombia. A. C. Bovik is with the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712-1084. IEEE Log Number 9210114.

In the evolution of this idea, it is natural to ask for optimal smoothers under a criterion of local monotonicity. As pointed out in [9], the design of nonlinear smoothers is in many instances an art rather than a science. Locally monotonic regression is a direct, optimal approach for smoothing finite-length discrete signals under a local monotonicity criterion. The concept of local monotonicity is relatively unexplored and has been mostly limited to the theory of a large class of nonlinear filters that includes median filters [1], [2], rank-order filters [6], and order-statistic filters [10]-[12]. The smoothness constraint of local monotonicity on a discrete signal does not limit the magnitude of the changes the signal makes in going from one coordinate point to the next one, but it does limit the frequency of the oscillations of the signal. When a discrete signal increases its value from one coordinate to the next, it is said to have an increasing transition; analogously for a decreasing transition. In a locally monotonic signal, between an increasing transition and a decreasing transition there is always a constant segment of a specified minimum length [1], [2]. Unlike other properties, such as linearity, local monotonicity is defined at the local level rather than at the global level: a locally monotonic signal is monotonic at the local level, while at the global level it may be monotonic or not. Local monotonicity is a meaningful measure of signal shape in many instances; for example, the scan lines produced by a convex object in a digital image are often locally monotonic. Departures from local monotonicity may indicate the contamination of a signal with noise and, given a nonlocally monotonic signal, it may be desirable to find a signal that is both similar to the given signal and locally monotonic. The similarity between two signals may be measured using a semimetric for R^n. The use of semimetrics instead of metrics is not wasteful; the approximation of signals under a semimetric criterion provides maximum likelihood estimators of signals embedded in very impulsive noise [13], [14]. We consider a space of finite dimension where the Heine-Borel theorem [15] holds and the existence of regressions is easily proven. Given a nonlocally monotonic signal v, a signal w in the set of locally monotonic signals that is closest to v is said to be a locally monotonic regression of v. In this paper, the existence of optimal approximations is shown; algorithms for computing the approximations are devised.


Fig. 1. Comparison of 3-point median filtering and lomo-3 regression. The original signal is indicated by white squares. (a) Three-point median filtered signal (black diamonds); (b) lomo-3 regression (black diamonds).

Even for signals of relatively short duration, the algorithms are computationally expensive; here we define and explore a new smoothing technique; faster algorithms are the subject of further research [16]. To illustrate these ideas, consider the signal segment [224, 192, 254, 278, 249, 312, 259, 223, 257], shown in white squares in Fig. 1, taken from [17] (data on bank suspensions corresponding to the years 1921-1929). When this signal is filtered with a (padding) median filter of window size 3, the signal [224, 224, 254, 254, 278, 259, 259, 257, 257] results (black diamonds in Fig. 1(a)); this signal is not smooth due to the peak at coordinate 5. A locally monotonic regression [208, 208, 254, 263.5, 263.5, 285.5, 285.5, 240, 240] of the original signal is also shown (black diamonds in Fig. 1(b)). The theoretical framework developed here applies to regressions defined with respect to any nonempty closed subset of R^n. The set of locally monotonic signals is relatively large in R^n. For example, the set of (affine) linear signals is a proper subset of the set of locally monotonic signals; a larger approximating set provides more freedom in the choice of a similar signal: a locally monotonic regression is at least as close to the signal being regressed as a linear regression is. The approximation of a signal from a set of signals having a given characteristic is an optimization problem.
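The window-3 padding median filter of this example is simple to reproduce; the following is a minimal sketch (the function name is ours, not the paper's):

```python
from statistics import median

def padded_median3(v):
    """3-point median filter that pads the signal by repeating its
    first and last elements, as in Fig. 1(a)."""
    w = [v[0]] + list(v) + [v[-1]]
    return [median(w[i:i + 3]) for i in range(len(v))]

bank = [224, 192, 254, 278, 249, 312, 259, 223, 257]
print(padded_median3(bank))
# [224, 224, 254, 254, 278, 259, 259, 257, 257], matching the text
```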

The study of algorithms that compute regressions falls within the field of computational geometry [18], where properties of subsets of R^n are studied with the help of computers; the design of algorithms that perform regression must be treated on a case-by-case basis.

The remainder of the paper is organized as follows. In Section II, a collection of semimetrics is introduced; they are used as the distance measures under which regression is performed. Conditions for the existence of regressions, as well as bounds on the cardinality of the set of regressions, are developed. Section III develops the concept of local monotonicity, gives a characterization of the boundary of the set of locally monotonic signals, and deals with the concepts of constant regression and of constant semiregression, which are used to compute locally monotonic regressions. In Section IV, algorithms that compute locally monotonic regressions are developed; after presenting examples, the paper is concluded in Section V with suggestions for further research.

II. SIGNAL REGRESSION UNDER A FAMILY OF SEMIMETRICS

In this section, a particular collection of semimetrics is presented, the concept of regression (or projection) is examined, and the existence and multiplicity of regressions are addressed.


An integer interval /a, b/, where a and b are integer numbers, is defined as the set {c ∈ Z: a ≤ c ≤ b} of integer numbers that are greater than or equal to a and smaller than or equal to b. An n-point signal (or a discrete signal of length n) is a real function x having as domain a nonempty integer interval /a, b/, where b − a = n − 1. Its graph is a subset of /a, b/ × R^1 and is usually drawn as a coordinate-domain plot (called its time series representation); alternatively, the signal may be thought of as a point in R^n (Fig. 2). The origin [0, ..., 0] of R^n is denoted as θ. Given a subset S of R^n, Cl(S), Bd(S), and Int(S), respectively, denote the closure, the topological boundary, and the interior of S, in the standard topology for R^n [15].

Fig. 2. A finite-length discrete signal v may be represented (a) as a point in R^n, or (b) as a time series, in its coordinate domain.

A. A Family of Semimetrics

A semimetric for R^n is a positive definite, symmetric function d: R^n × R^n → [0, ∞]. These properties may be stated as follows:

∀ x, y ∈ R^n: d(x, y) = 0 ⟺ x = y  (positive definite)
∀ x, y ∈ R^n: d(x, y) = d(y, x)  (symmetric).

Thus, many functions are semimetrics for R^n. For measuring the similarity between signals it is convenient that the semimetric be continuous, positive homogeneous, and translation invariant:

∀ x, y ∈ R^n, ∀ γ ∈ R: d(γx, γy) = |γ| d(x, y)  (positive homogeneous)
∀ x, y, z ∈ R^n: d(x + z, y + z) = d(x, y)  (translation invariant).

A metric is a semimetric that has the triangle-inequality property:

∀ x, y, z ∈ R^n: d(x, z) ≤ d(x, y) + d(y, z).

The collection of the p-semimetrics is a well-known collection of translation invariant, positive homogeneous, continuous semimetrics indexed by the parameter p ∈ (0, ∞]; given two signals x = [x_1, ..., x_n] and y = [y_1, ..., y_n], define

$$d_p(x, y) = \left[\sum_{i=1}^{n} |x_i - y_i|^p\right]^{1/p} \ \text{for } p \in (0, \infty), \qquad d_\infty(x, y) = \max\{|x_i - y_i| \colon 1 \le i \le n\}.$$
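For reference, the whole family can be evaluated with a few lines of code; a minimal sketch (naming ours):

```python
def d_p(x, y, p):
    """p-semimetric between two equal-length signals:
    (sum |x_i - y_i|^p)^(1/p) for p in (0, inf), and the product
    metric max |x_i - y_i| for p = inf."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float("inf"):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

x, y = [0, 7, 5, 7], [0, 5, 5, 5]
print(d_p(x, y, 2))             # Euclidean metric d_2
print(d_p(x, y, 1))             # square metric d_1
print(d_p(x, y, 0.5))           # a semimetric that is not a metric
print(d_p(x, y, float("inf")))  # product metric d_inf
```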

As a function of p, d_p(x, y) is a nonincreasing [19] continuous function and, as p increases, for each x and y in R^n, d_p(x, y) tends to d_∞(x, y). For p ∈ [1, ∞], d_p is a metric; for p ∈ (0, 1), d_p is a semimetric but is not a metric. The collection includes the Euclidean metric d_2, the square metric d_1, and the product metric d_∞.

For p ∈ (0, ∞] and x ∈ R^n, the set B_p(x, ρ) = {s ∈ R^n: d_p(x, s) < ρ} is the open p-ball of radius ρ centered at x. The p-distance between a point x and a set S is given by D_p(x, S) = inf{d_p(x, s): s ∈ S}.

B. p-Regressions

Signals are usually processed on the basis of a closed formula that determines a transformation R^n → R^n, e.g., circular convolution with a given signal. An alternate approach is used here: assume that a desirable property for a signal to have has been specified. Given a signal lacking the property, the problem is to find a signal, from among the set of signals having the property, that is closest to the given signal. Such a property is defined in terms of the coordinate domain of the signals and as such may be called a shape restriction. Let Q be the truth-value function that is true on signals meeting the constraint and false otherwise, and let A = {s ∈ R^n: Q(s)} be the set of signals with the required shape. Suppose that A is a nonempty proper subset of R^n and let x ∈ R^n. The set

$$R_A(x) = \{u \in A \colon d_p(x, u) = D_p(x, A)\}$$

is called the set of p-regressions (or p-projections) of x with respect to A. 2-projections on subspaces of R^n are projections in the usual sense of the word; that is, the difference between a point and its 2-projection on a subspace is orthogonal (given the standard inner product on R^n) to the subspace. The use of semimetrics different from the Euclidean metric generalizes the concept of projection in a metric way and provides several criteria to measure the similarity between signals. It turns out that it is useful to have these many criteria; for example, the maximum likelihood estimates of lomo signals contaminated with certain very impulsive noises are regressions of the noisy signal under a semimetric that is not a metric; similarly, if the noise is uniform, the maximum likelihood estimates are regressions under the product metric [14], [20].

C. Existence and Uniqueness of Regressions

For each p ∈ (0, ∞] and for each x ∈ R^n, since the p-semimetrics are continuous, the restriction d_p(x, ·) of d_p to {x} × R^n is a continuous function. Given a nonempty, proper, and closed subset A of R^n, and points a ∈ A and x ∈ (R^n − A), let σ = d_p(x, a). The closed ball B = Cl[B_p(x, σ)] is nonempty and bounded and, from the definitions, it follows that it contains the set of p-regressions of x with respect to A. Since R^n has the Heine-Borel property [15], B is compact and, since d_p(x, ·) is continuous, the image under d_p(x, ·) of B ∩ A is closed and contains its infimum D_p(x, A). Since the points in B ∩ A that are mapped to D_p(x, A) are the regressions of x, this proves the existence¹ of p-projections on nonempty, closed, proper subsets of R^n. Moreover, if A is a proper closed subset of R^n, x is a point in (R^n − A), and ρ = D_p(x, A), the set of p-regressions R_A(x) of x with respect to A is given by Bd[B_p(x, ρ)] ∩ A [20].

¹The authors thank Prof. I. W. Sandberg for showing them this elegant proof of the existence of regressions.


A related topic is that of the cardinality of the set of regressions, that is, the multiplicity of the solution set to the approximation problem. It depends on the type of convexity of both the p-balls and of the approximating set A under consideration. A subset S of R^n is said to be convex if the line segment between each pair of points of S is a subset of S; a subset S is said to be strictly convex if for each pair of points x and y in Bd(S), the open line segment {αx + (1 − α)y: α ∈ (0, 1)} is a subset of Int(S). For p ∈ (1, ∞), p-balls are strictly convex; for p = 1 and for p = ∞, p-balls are convex but not strictly convex; and for p ∈ (0, 1), p-balls are not convex. The fact that, for p < 1, p-balls are not convex and d_p is not a metric is not a coincidence: in a large and general collection of semimetrics, those semimetrics that determine convex balls are metrics and those that determine nonconvex balls are not metrics [20]. Fig. 3 depicts p-balls in R^2 for p = 0.5, 1, 2, and ∞.

Fig. 3. p-balls of the same radius and center in R^2, for p = 1/2, 1, 2, and ∞.

The following lemma says that if the set A is convex and B_p(θ, 1) (the convexity of a p-ball is independent of its radius or center) is strictly convex then, for each x in R^n, R_A(x) is a singleton set, that is, regressions are unique. If B_p(θ, 1) is only convex, then regressions may or may not be unique. The proofs of all lemmas are given in Appendix A.

Lemma 1: If A is a nonempty convex closed subset of R^n, p ∈ (1, ∞), and x is in R^n − A, then the boundary of the p-ball centered at x with radius D_p(x, A) intersects A at exactly one point.

In this section, the problem of signal regression with respect to a semimetric d_p has been defined within a general theoretical framework. From a mathematical standpoint, it generalizes the concept of projection of points on subsets of R^n; from a signal processing perspective, it defines the concept of shaping a signal, given a shape restriction. The problem of how to compute (if any) the elements of R_A(x) depends on the particular choices of A and p and must be treated on a case-by-case basis. When A is the set of locally monotonic signals, locally monotonic regression results. When A is the set of constant signals, constant regression results; when A is the set of linear signals, which is the plane spanned by the signals [1, 1, ..., 1] and [1, 2, ..., n], linear regression results, etc.

III. LOCAL MONOTONICITY AND CONSTANT REGRESSION

In this section, the concepts of local monotonicity and of constant regression are explored. Monotonicity is a property of time series that has been successfully exploited in the field of statistical estimation [21]. Local monotonicity is a property of one-dimensional signals that provides a measure of the smoothness of a signal; it constrains the roughness of a signal in a particular way. It does so not by limiting the support of its Fourier transform, nor by limiting the number of its level crossings, but rather, by imposing a local constraint on how often a change of trend (increasing to decreasing or vice versa) may occur. In a sense, it limits the frequency of the oscillations that a signal may have, without restricting the magnitude of the changes the signal makes. Constant regression, or signal approximation with constant signals, is an important tool for the computation of locally monotonic regressions, and is used extensively in Section IV.

If v is an n-point signal and γ is an integer less than or equal to n, then a segment of v of length γ is a signal [v_{i+1}, ..., v_{i+γ}] whose components are consecutive components of v; it is the restriction of v to /i+1, i+γ/. A signal v: /1, n/ → R is monotonic if either v_1 ≤ v_2 ≤ ··· ≤ v_n or v_1 ≥ v_2 ≥ ··· ≥ v_n, and strictly monotonic if either v_1 < v_2 < ··· < v_n or v_1 > v_2 > ··· > v_n. If v_1 = v_2 = ··· = v_n, v is constant. Signals of length 1 are defined as monotonic. The concatenation v¹|v² of v¹ = [v_1, ..., v_r] and v² = [v_{r+1}, ..., v_{r+s}] is the signal [v_1, ..., v_r, v_{r+1}, ..., v_{r+s}], of length r + s. A constant segment [v_a, ..., v_b] whose components take the value x is denoted (/a, b/, x), where the interval /a, b/ is the width of the signal and the real number x is the level of the signal. A signal can be uniquely expressed as the concatenation of constant segments, where the level of each segment is different from the levels of its (at most two) neighbor segments. This is called the (canonical) segmentation of a signal into constant segments.
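The canonical segmentation is computable in a single pass; a minimal sketch in the (/a, b/, level) notation above (function name ours):

```python
from itertools import groupby

def segmentation(v):
    """Canonical segmentation of v into maximal constant segments,
    returned as ((a, b), level) pairs with 1-based coordinates."""
    segments, a = [], 1
    for level, run in groupby(v):
        b = a + len(list(run)) - 1
        segments.append(((a, b), level))
        a = b + 1
    return segments

print(segmentation([3, 3, 3, 4, 5, 5, 5, 6, 7]))
# [((1, 3), 3), ((4, 4), 4), ((5, 7), 5), ((8, 8), 6), ((9, 9), 7)]
```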


A. Local Monotonicity

A signal is locally monotonic of degree α, or lomo-α (or lomo, if α is understood), if each of its segments of length α is monotonic. Except when stated otherwise, from this point on it is assumed that the length n and the degree of local monotonicity α are given, with 3 ≤ α ≤ n. The set of signals of length n that are locally monotonic of degree α is denoted as A, and the collection of monotonic signals of length n is denoted as M.


If components x_i and x_{i+1} of a signal x are such that x_i < x_{i+1} (x_i > x_{i+1}), it is said that x has an increasing (decreasing) transition at coordinate i. If a signal is lomo-α and has an increasing transition at j and a decreasing transition at k, then the signal has a constant segment of length at least α − 1, with coordinates between j and k [1]. Each signal in (A − M) has a constant segment of length at least α − 1. If β and γ are natural numbers with 3 ≤ β ≤ γ, a signal that is lomo-γ is lomo-β as well; thus, the lomotonicity of a signal is defined as the highest degree of local monotonicity that it possesses. (The minimal degree of local monotonicity any signal has is one; also, any signal of length at least two is lomo-2.) Regression with respect to the set A is called locally monotonic regression; from the results in Section II, locally monotonic regressions exist provided that A is closed.

Lemma 2: A is closed.

Let Γ be the set of signals having at least one constant segment of length at least two. Lomo regressions of nonlomo signals lie on the boundary of A; Lemma 3 says that Bd(A) = A ∩ Γ and that Int(A) = A ∩ Γ′, where Γ′ is the complement of Γ.

Lemma 3: Bd(A) = A ∩ Γ and Int(A) = A ∩ Γ′.

From Lemma 3 we conclude that the lomo regressions of nonlomo signals have constant segments of length at least two. This suggests a way in which lomo regressions may be computed.
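These definitions translate directly into code; a minimal sketch (function names ours):

```python
def monotonic(seg):
    """True if seg is nondecreasing or nonincreasing."""
    return (all(a <= b for a, b in zip(seg, seg[1:])) or
            all(a >= b for a, b in zip(seg, seg[1:])))

def is_lomo(v, alpha):
    """True if every length-alpha segment of v is monotonic."""
    return all(monotonic(v[i:i + alpha]) for i in range(len(v) - alpha + 1))

def lomotonicity(v):
    """Highest degree of local monotonicity of v."""
    return max(a for a in range(1, len(v) + 1) if is_lomo(v, a))

print(is_lomo([0, 5, 5, 5, 6, 6, 6, 11], 5))  # True: monotonic, hence lomo-5
print(lomotonicity([224, 224, 254, 254, 278, 259, 259, 257, 257]))
# 2: the peak at coordinate 5 breaks local monotonicity of degree 3
```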

B. Constant Regression

Constant regression is regression with respect to the set of constant signals Φ, which is the line spanned by θ and [1, 1, ..., 1]. As shown in Section IV, constant regressions are used when computing lomo regressions to replace segments of the signal being regressed. In what follows, we use the order statistics v_(1), ..., v_(n) of a signal v with components v_1, ..., v_n [22], where v_(i) is the ith order statistic of v. If n is odd, v_((n+1)/2) is the median of v. If n is even, the closed interval [v_(n/2), v_((n/2)+1)], bounded by the two central ranked values of v, is the set of medians of v. The midrange of v is [v_(1) + v_(n)]/2; the range of v is v_(n) − v_(1).

For p ∈ (1, ∞), the strict convexity of p-balls and the convexity of Φ guarantee the uniqueness of constant regressions. ∞-balls are not strictly convex; nevertheless, constant ∞-regressions are unique. By definition, a constant p-regression of a signal v = [v_1, ..., v_n] is a constant signal of length n where each of its components has a value x and its distance to v is minimal. For p = 2, the value of x is the average of the components of v. For p = 1, x is the median of v if n is odd, and any of the medians of v if n is even. For p = ∞, x is the midrange of v [20]. For p ∈ (1, ∞) and p ≠ 2, there are no known closed-form expressions for computing constant regressions; they can be numerically approximated by minimizing the error function

$$\varepsilon(x) = \sum_{i=1}^{n} |v_i - x|^p.$$

For p ∈ (0, 1), given a signal v = [v_1, ..., v_n], for each point in {v_i: i ∈ /1, n/}, the function ε(x) has a local minimum in the variable x; each of the constant signals (/1, n/, v_j), j ∈ /1, n/, that locally minimizes the error is a constant semiregression; the constant regressions of v are its constant semiregressions at the smallest distance. Tables I and II summarize the multiplicity and computability of constant regressions, while Fig. 4 shows constant regressions of the signal [6, 15, 1, 3, 2, 5] for p = 1/2, 1, 2, and ∞. The values of the components of the regressions shown are, in respective order, 3, 4, 5.33, and 8.

TABLE I
MULTIPLICITY OF CONSTANT REGRESSIONS

p             Convexity of p-ball       Multiplicity of Constant Regressions
p ∈ (0, 1)    Not Convex                Finite (Possibly Multiple)
p = 1         Convex, Not Strictly      Unique or Uncountable
p ∈ (1, ∞)    Strictly Convex           Unique
p = ∞         Convex, Not Strictly      Unique

TABLE II
COMPUTATION OF CONSTANT REGRESSIONS

p                   Constant Regressions of [v_1, ..., v_n]
p ∈ (0, 1)          Among {[v_i, v_i, ..., v_i]: i ∈ /1, n/}
p = 1               Median(s) of [v_1, v_2, ..., v_n]
p = 2               Average of [v_1, ..., v_n]
p ∈ (1, ∞) − {2}    No Known Closed-Form Expression
p = ∞               Midrange of [v_1, ..., v_n]
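Table II translates into a small routine; a minimal sketch (names ours) that reproduces the levels of Fig. 4:

```python
def constant_regression_level(v, p):
    """Level of a constant p-regression of v (one of them, if several)."""
    if p == float("inf"):                 # midrange
        return (min(v) + max(v)) / 2
    if p == 2:                            # average
        return sum(v) / len(v)
    if p == 1:                            # a median (central ranked value)
        s, n = sorted(v), len(v)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    if 0 < p < 1:                         # best constant semiregression
        err = lambda x: sum(abs(vi - x) ** p for vi in v)
        return min(v, key=err)            # local minima sit at sample values
    raise ValueError("no known closed form for p in (1, inf) - {2}")

v = [6, 15, 1, 3, 2, 5]
for p in (0.5, 1, 2, float("inf")):
    print(p, constant_regression_level(v, p))
# 0.5 -> 3, 1 -> 4.0, 2 -> 5.33..., inf -> 8.0, as in Fig. 4
```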

IV. COMPUTING LOCALLY MONOTONIC REGRESSIONS

In this section, algorithms for computing lomo p-regressions are given. The set of lomo signals is a disjoint union of a large number of convex cones; computing lomo regressions is a complex problem. Two algorithms are described: the blotching algorithm and the tube algorithm.

A. The Complexity of the Problem

The sign skeleton s of an n-point signal v is the (n−1)-point trivalued signal that, for each i in /1, n−1/, has ith component s_i with a value of −1, 0, or 1, respectively, if (v_{i+1} − v_i) is negative, null, or positive. A sign skeleton contains sufficient information to determine the lomotonicity of the signal it is derived from. The set of signals having a given sign skeleton is a convex cone (a cone is a subset of R^n closed under multiplication by positive real numbers).

Fig. 4. A signal (white squares) and four constant regressions under the product metric (black squares), Euclidean metric (white diamonds), square metric (black squares), and 0.5-semimetric (black diamonds).

The collection of all such cones partitions R^n into 3^{n−1} convex cones; let C = {C_i: i ∈ /1, 3^{n−1}/} denote this partition. The set Φ of constant signals is the cone in C of signals having null sign skeleton; Φ is the only cone in C that is a subspace. Clearly, a subset C′ of C provides a partition of the set A of lomo signals: A is a finite union of convex cones; it is not a subspace, nor a convex set, nor a cone. The problem of finding 2-projections on algebraic subspaces or convex subsets of R^n is solved [23]. Since the set of projections of a signal onto each of the cones in C′ contains the projections of the signal on A, this number of cones provides a measure of the complexity of the problem of computing projections on A. For example, the set of nondecreasing signals and the set of nonincreasing signals of length n are each the union of 2^{n−1} cones in C, and the set of monotonic signals is the union of 2^n − 1 cones (constant signals are both nondecreasing and nonincreasing). The set of locally monotonic signals of degree 3 is the union of

$$\left(\tfrac{3}{2} + \sqrt{2}\right)\lambda_1^{n-2} + \left(\tfrac{3}{2} - \sqrt{2}\right)\lambda_3^{n-2}$$

cones of C, where λ_1 = 1 + √2 and λ_3 = 1 − √2 (see Appendix B for the derivation of this formula). Accordingly, for n = 3, 4, 5, 6, 7, and 8, this number is 7, 17, 41, 99, 239, and 577, respectively. For degrees of local monotonicity larger than 3 the complexity is somewhat smaller, but it is clear that these numbers grow very fast with n.
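The count can be verified numerically from the transition matrix m derived in Appendix B; a minimal sketch:

```python
def lomo3_cone_count(n):
    """Sum of the entries of m^(n-2), where m encodes the allowed
    sign-skeleton transitions of a lomo-3 signal (Appendix B)."""
    m = [[1, 1, 0],   # +1 may be followed by +1 or 0 only
         [1, 1, 1],   # 0 may be followed by anything
         [0, 1, 1]]   # -1 may be followed by -1 or 0 only
    power = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
    for _ in range(n - 2):
        power = [[sum(power[i][k] * m[k][j] for k in range(3))
                  for j in range(3)] for i in range(3)]
    return sum(map(sum, power))

print([lomo3_cone_count(n) for n in range(3, 9)])
# [7, 17, 41, 99, 239, 577]
```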


B. Blotched Signals

The boundary of A consists of lomo signals that have constant segments of length at least 2; each lomo 2-regression of a signal may be obtained by replacing some segments of the signal with their corresponding constant regressions [24]. This is also true for other values of p: locally monotonic regressions are obtained by replacing certain segments with constant regressions if 1 ≤ p < ∞ [25], and with constant semiregressions if p < 1. For p = ∞, some, but not necessarily all, lomo regressions are obtained in this way. The procedure of replacing a collection of nonoverlapping segments of a signal with their corresponding constant regressions or semiregressions is called blotching, and the resulting signal is a blotched version of the original signal. The term "blotch" has been used in the literature with a similar meaning to describe a sometimes undesirable effect of the median filter [26] when applied to images. In [3] the term block is used with the meaning of longest constant segment.

Consider the collection of the 2^{n−1} subsets of /1, n−1/, partially ordered by set inclusion. Each such subset K_i determines the subspace S_i = {v: (∀ j ∈ K_i) v_j = v_{j+1}} of R^n of the signals whose sign skeletons have null components (at least) at the coordinates in K_i; their collection is denoted S = {S_i: i ∈ /1, 2^{n−1}/}. Moreover, to each subset K_i there corresponds a set H_i of disjoint subintervals of /1, n/ whose union expresses K_i ∪ K_i⁺ as a minimal union of integer intervals, where K_i⁺ = {j: (∃ k ∈ K_i) j = k + 1}. For example, to K = {1, 2, 5, 6} there corresponds H = {/1, 3/, /5, 7/}.

For p ∈ [1, ∞], given a set H_i, a blotching transformation on a signal produces the signals that result from replacing the segments with coordinates indicated by H_i with their constant regressions. Thus, for p ∈ [1, ∞) the blotched versions of a signal are its projections on S_i (see the proof of Lemma 4 in Appendix A). For p ∈ (1, ∞] such blotched versions are unique. For p = 1, the number of blotched versions may be infinite, if the length of a segment to be replaced is even. For p = ∞, all blotched versions are projections on S_i but not all projections are blotched versions; the reason has to do with the particular way in which the product metric is defined (see the comment after the proof of Lemma 4 in Appendix A). For p < 1, given a set H_i, a blotching transformation produces signals that result from replacing the indicated segments with either constant regressions or constant semiregressions. Thus, all projections on S_i are blotched versions but not all blotched versions are projections. Summarizing, there are 1-1 correspondences between the sets K_i, the sets H_i, and the spaces S_i; H_i denotes the blotching transformation and S_i its projecting set. For example, under the product metric, the transformation H = {/1, 3/, /5, 7/} applied to [1, 2, 5, 4, 3, 9, 1, 6, 7] implies the replacement of the segments [1, 2, 5] and [3, 9, 1] with constant regressions, producing [3, 3, 3, 4, 5, 5, 5, 6, 7] (Fig. 5).
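The K_i → H_i correspondence and a product-metric blotch can be sketched as follows (names ours); the example reproduces the computation above:

```python
def intervals_from_K(K):
    """Minimal integer intervals whose union is K ∪ {k + 1: k in K}."""
    coords = sorted(set(K) | {k + 1 for k in K})
    H, a = [], coords[0]
    for c, nxt in zip(coords, coords[1:] + [None]):
        if nxt != c + 1:
            H.append((a, c))
            a = nxt
    return H

def blotch_midrange(v, H):
    """Replace each segment /a, b/ of v with its midrange (p = inf)."""
    w = list(v)
    for a, b in H:
        level = (min(v[a - 1:b]) + max(v[a - 1:b])) / 2
        w[a - 1:b] = [level] * (b - a + 1)
    return w

print(intervals_from_K({1, 2, 5, 6}))   # [(1, 3), (5, 7)]
print(blotch_midrange([1, 2, 5, 4, 3, 9, 1, 6, 7], [(1, 3), (5, 7)]))
# [3, 3, 3, 4, 5, 5, 5, 6, 7], as in the text
```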


Fig. 5. The 9-point signal [1, 2, 5, 4, 3, 9, 1, 6, 7] (white squares) is blotched with {/1, 3/, /5, 7/}, producing the blotched signal [3, 3, 3, 4, 5, 5, 5, 6, 7] (black diamonds).


Fig. 6. A signal (white squares) and its lomo-5 (1/2)-regression (black squares), obtained by replacing the segments with coordinates in /2, 4/ and in /5, 7/ with constant semiregressions.

For length-n signals, there are 2^{n−1} blotching transformations (counting the "do-nothing" transformation given by the sets K = ∅, H = {∅} and the space S = R^n, and the "replace-the-whole-signal" transformation given by the sets K = /1, n−1/, H = {/1, n/} and the space S = Φ). The collection of blotched versions of v produced by H_i is denoted H_i(v). Lemmas 4 and 5 show that, for p ∈ (0, ∞), all lomo regressions are blotched versions of the signal being regressed; for p = ∞, some, but not necessarily all, regressions are blotched versions of the signal.

Lemma 4: Let p ∈ [1, ∞) and w be a lomo p-regression of v. Then w is a blotched version of v.

For p < 1, as the following lemma shows, the constant segments in a lomo regression may not be constant regressions, but they are constant semiregressions. The example after the lemma shows that constant semiregressions must be considered when p < 1.

Lemma 5: Let p ∈ (0, 1), let w be a p-projection of a signal v on A, and let w = w¹|w²|···|w^m be the segmentation into constant signals of w; let v = v¹|v²|···|v^m be the segmentation of v corresponding (i.e., having the same coordinates) to that of w. Then, for each i ∈ /1, m/, w^i is a constant semiregression of v^i.

Example: Consider the 8-point signal [0, 7, 5, 7, 4, 6, 4, 11] shown in Fig. 6 and its lomo-5 (1/2)-regression (which is monotonic) [0, 5, 5, 5, 6, 6, 6, 11], obtained via the blotching transformation {/2, 4/, /5, 7/} replacing the segments [7, 5, 7] and [4, 6, 4] with constant semiregressions. That the result is a regression follows since it is a blotched version at the shortest distance: (5.6569)².

C. The Blotching Algorithm

As a result of the discussion above, we have the following algorithm.

Algorithm 1 (Blotching Algorithm)
input: a signal v of length n > 2, a desired degree α ≤ n of local monotonicity, and a value of p in (0, ∞]
output: a set of lomo-α regressions of the signal (for p < ∞ it gives the entire set of lomo regressions)
1) for each i in /1, 2^{n−1}/, compute all blotched versions H_i(v) of v
2) for the blotched versions that are lomo-α, compute their distance to v
3) choose those lomo-α versions at the smallest distance from v.
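For the Euclidean metric, where each blotching transformation produces a unique blotched version (each replaced segment takes its average as level), Algorithm 1 admits a direct brute-force realization; a minimal sketch, reusing is_lomo and intervals_from_K from the sketches above (names ours; the search is exponential in n, as discussed next):

```python
from itertools import combinations

def blotch_mean(v, H):
    """Replace each segment /a, b/ of v with its average (p = 2)."""
    w = list(v)
    for a, b in H:
        w[a - 1:b] = [sum(v[a - 1:b]) / (b - a + 1)] * (b - a + 1)
    return w

def lomo_regression_p2(v, alpha):
    """Algorithm 1 for p = 2: try every blotching transformation
    (every subset K of /1, n-1/) and keep a lomo-alpha blotched
    version at the smallest Euclidean distance."""
    n = len(v)
    best, best_err = None, float("inf")
    for r in range(n):
        for K in combinations(range(1, n), r):
            w = blotch_mean(v, intervals_from_K(set(K)) if K else [])
            if is_lomo(w, alpha):
                err = sum((a - b) ** 2 for a, b in zip(v, w))
                if err < best_err:
                    best, best_err = w, err
    return best

bank = [224, 192, 254, 278, 249, 312, 259, 223, 257]
print(lomo_regression_p2(bank, 3))
# [208, 208, 254, 263.5, 263.5, 285.5, 285.5, 240, 240]: the Fig. 1(b)
# regression (which appears to be computed under the Euclidean metric)
```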


For p = 1, the collection of blotched signals may be infinite; to have a practical algorithm, the tube algorithm is used together with the blotching algorithm. Since there are 2^{n−1} blotching operations, the complexity of the algorithm is exponential. For p ≥ 1, not all blotching transformations must be performed: some produce signals at a larger distance than others; Lemma 6 shows that the error does not increase if a shorter segment of the signal is regressed.

Lemma 6: Let v be a segment of w, and let v′ and w′ be their corresponding constant p-regressions. Then, for p ∈ (0, ∞], d_p(v, v′) ≤ d_p(w, w′).

Thus, for p ≥ 1, blotching transformations may be tried in a particular order such that not all blotched versions need be computed (for p < 1, blotching transformations may replace a segment with a constant semiregression that is not a regression, and Lemma 6 does not guarantee an improvement of the algorithm).

The tube algorithm proceeds as follows:
1) find the forced transitions of the signals in T (see Fig. 11)
2) find a representative collection L of signals having such transitions
3) compute the lomotonicity of the signals in L
4) choose a signal with largest lomotonicity.

A procedure for finding the collection U of necessary and sufficient transitions, as required in Step 1 of the tube algorithm, is given. Let T = Z_1 × ··· × Z_m be the given tube; for each i in /1, m/, let

x = max(Z_i)
y = min(Z_i)
p = min{j: x ∈ Z_k for each k in /j, i/}
q = max{j: x ∈ Z_k for each k in /i, j/}.

It is not hard to show that the collection of forced transitions determined by each pair of consecutive segments is necessary and sufficient. When three or more consecutive forced transitions are of the same sign, they can be localized further, without harming the necessity or sufficiency of the collection: if [+1, /k_i, l_i/], [+1, /k_{i+1}, l_{i+1}/], ..., [+1, /k_j, l_j/] (respectively, [−1, /k_i, l_i/], [−1, /k_{i+1}, l_{i+1}/], ..., [−1, /k_j, l_j/]) are consecutive increasing (respectively, decreasing) forced transitions, they are replaced by more localized forced transitions; the first and last transitions are replaced, [+1, /k_i, l_i/] by [+1, /l_i, l_i/] and [+1, /k_j, l_j/] by [+1, /k_j, k_j/], and the intermediate transitions [+1, /k_r, l_r/], r ∈ /i+1, j−1/, are each replaced by [+1, /h, h/] where, for each r, h is a number in /k_r, l_r/ (see Fig. 12, where the transitions in Fig. 12(a) become localized in Fig. 12(b)); analogously if the transitions are all decreasing. Call the resulting collection of transitions U.

Fig. 10. A subcollection of segments from the collection in Fig. 9 is found.

In Step 2 of the tube algorithm, given the resulting collections O of constant segments and U of forced transitions, a collection L of signals having those transitions only is obtained as follows.

Algorithm 4
input: a collection of forced transitions U and the collection of constant segments O it is obtained from
output: a collection L of signals having the indicated transitions
1) choose one coordinate from each forced transition
2) form a collection J of constant segments with ending coordinates given by those from Step 1 and with levels given by the corresponding levels of the constant segments in the collection O
3) concatenate the segments in J; this gives one of the required signals
4) repeat Steps 1)-3) until all combinations of coordinates from forced transitions have been chosen; this gives the collection L.

The number of signals in L is the product of the widths of the forced transitions, so the complexity of the tube algorithm is combinatorial; a bound on this number is found in Appendix B.

Fig. 11. The segments in Fig. 10 determine the forced transitions [−1, /3, 4/] and [+1, /5, 6/].

Fig. 12. The forced transitions shown in (a) are localized, as indicated in (b).


E. Algorithms for the Square and Product Metrics

When using the blotching algorithm for p = 1, each blotching transformation dictates the replacement of a collection of segments from the original signal with constant regressions; if all such segments have odd lengths, each regression is unique and the blotching transformation produces one signal only. Otherwise, one or more of the segments to be replaced have even lengths, and a tube results; the factors of the tube are singleton sets and closed intervals [a, b], where a and b are the central ranked values of the components of each segment of even length being replaced. After the tube algorithm finds a signal with largest lomotonicity, the blotching algorithm proceeds. For example (under the square metric), the blotching transformation {/1, 2/, /4, 5/} on the signal [4, 5, 3, 3, 2] produces the tube [4, 5] × [4, 5] × [3, 3] × [2, 3] × [2, 3], containing the signal [4, 4, 3, 3, 3].

The blotching algorithm can be used to compute lomo regressions under the product metric, by blotching with the midrange. Also, a lomo ∞-regression of a signal is a lomo signal on the boundary of an ∞-ball (which is a tube) centered at the given signal; a tube that is centered at the given signal is grown until it contains a signal with a lomotonicity larger than or equal to the specified degree of local monotonicity. The tube does not have to be grown continuously: the radius of the tube (ball) is increased stepwise, each time increasing it to the minimum value that increases the width of at least one segment in the collections of segments; only a finite number of tubes is considered and, with a radius no greater than that of a ball that contains the constant regression of the whole signal, a lomo ∞-regression is found. For the product metric, the algorithm based on the tube algorithm runs much faster than the blotching algorithm, the reasons being that the product of the widths of the forced transitions is less than the number of blotching transformations necessary to find a lomo regression and that there is no need to compute approximation errors (Appendix B).

Summarizing, the tube algorithm may be used to compute lomo regressions for p = ∞; for p = 1 a combination of both algorithms must be used; the blotching algorithm may be used alone for each value of p different from 1.
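The tube produced by a p = 1 blotching transformation can be formed directly from order statistics; a minimal sketch (names ours) reproducing the example above:

```python
def median_tube(v, H):
    """Factors of the tube produced by a p = 1 blotching transformation:
    inside a replaced segment, the closed interval bounded by its central
    ranked value(s); elsewhere, a singleton at the sample value."""
    factors = [(x, x) for x in v]
    for a, b in H:
        s = sorted(v[a - 1:b])
        k = len(s)
        lo, hi = (s[k // 2], s[k // 2]) if k % 2 else (s[k // 2 - 1], s[k // 2])
        for i in range(a - 1, b):
            factors[i] = (lo, hi)
    return factors

print(median_tube([4, 5, 3, 3, 2], [(1, 2), (4, 5)]))
# [(4, 5), (4, 5), (3, 3), (2, 3), (2, 3)], the tube in the text
```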


Fig. 13. Original signal (white squares). (a) Three-point median filtered version (black diamonds). (b)-(d) Lomo-3 regressions (black squares) under the (b) Euclidean metric; (c) square metric; (d) product metric.

Fig. 14. A signal v and a lomo-5 regression w, under the Euclidean metric.

Fig. 15. A signal v and a lomo-3 regression w, under the Euclidean metric.


F. Examples

First we compare the performance of locally monotonic regression with that of the median filter. Since we consider finite-length signals only, we have a choice regarding the behavior of the median filter at the extremes of the signal. If a padding median filter (one that appends elements equal in value to the first and last elements of the signal at its ends, e.g., Fig. 1) is used, a signal of the same length is obtained, but the first and last elements are overemphasized. If no elements are appended, a median filter of window size 2k + 1 that filters a signal of length m produces a signal of length m − 2k. We use a nonpadding median filter so that all elements of the signal to be filtered are considered equally important.

The (rough) signal [1, −1, 2, −2, 3, −3, 4, −4, 5] of length 9, depicted by white squares in Fig. 13, is filtered with a (nonpadding) median filter of window size 3; the filtered signal (of length 7) is shown in Fig. 13(a) as black diamonds. Note the ineffectiveness of the filter at reducing the roughness of the signal. The same signal was lomo regressed under the Euclidean, square, and product metrics. The lomo-3 regressions are shown in black squares in Figs. 13(b)-(d), the smoothing under the criterion of local monotonicity being very effective. Certainly the median filter is a useful device, particularly for the smoothing of images, because of its (relative) computational simplicity. However, local monotonicity is not a characteristic that the median filter necessarily preserves or attains.

In Fig. 14, both a rough signal v of length 16 and a lomo-5 regression w under the Euclidean metric are shown.
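The nonpadding filtering of the rough test signal is reproduced by the following minimal sketch (names ours):

```python
from statistics import median

def nonpadding_median(v, k):
    """Median filter of window size 2k + 1 with no end padding:
    a length-m input yields a length m - 2k output."""
    return [median(v[i - k:i + k + 1]) for i in range(k, len(v) - k)]

rough = [1, -1, 2, -2, 3, -3, 4, -4, 5]
print(nonpadding_median(rough, 1))
# [1, -1, 2, -2, 3, -3, 4]: the output oscillates just like the input
```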

Fig. 16. An interval (indicated by thick vertical line segments) of dimension 9, and 3 signals in it.

In spite of the fact that the regression cannot be very similar in appearance to the original signal v, it does show an underlying trend present in v. Fig. 15 shows a less rough signal v and a lomo-3 regression w under the Euclidean metric.

V. CONCLUSION

A theoretical framework for a new type of finite-length discrete signal processing was presented. Numerous criteria of smoothness have been proposed (e.g., see [27]); many are defined in the frequency domain and give rise to linear smoothers. Under the smoothness criterion of local monotonicity, locally monotonic regression provides a nonlinear tool for the optimal smoothing of discrete signals. Locally monotonic regression provides a further understanding of the concepts of local monotonicity and of the smoothing of discrete signals. Locally monotonic regressions show an underlying pattern that may not be easy to grasp in the original signal. Many physical processes give rise to locally monotonic signals, or signals that may be approximated as locally monotonic. For example, the sampled height as a function of time of an airplane trajectory may be locally monotonic; if noisy versions of such signals are observed, lomo regression provides an approach for estimation. For example, in [13], the definition of regression is such that algorithms that compute p-regressions are maximum likelihood estimators [14], [20] of signals embedded in white additive noise, for a wide family of noise densities.

The algorithms presented currently require large amounts of computational resources to smooth long signals; the problem remains complex. The algorithms do provide a way of smoothing signals, and the algorithm that computes lomo ∞-regressions using the tube algorithm is the least expensive, computationally. If the requirements on the optimality of the approximations are relaxed, it is possible to design algorithms that compute suboptimal lomo approximations fast and inexpensively [16]; they are the subject of current research. The application of the technique of projection for the processing of signals using shape descriptors different from local monotonicity provides additional tools for the shaping of signals, and is currently being explored [28]. In particular, local convexity/concavity can be used as a smoothness criterion, with the advantage that a sinusoidal signal is locally convex/concave of some degree; this is not the case under the criterion of smoothness of local monotonicity: a continuous sinusoidal signal, regardless of its frequency, is not locally monotonic.


APPENDIX A

Proof of Lemma 1: Let p ∈ (1, ∞); then the ball B_p(θ, 1) is strictly convex. Let A be a nonempty, proper, convex, closed subset of R^n. Let x be a point not in A and ρ = D_p(x, A); then Bd[B_p(x, ρ)] ∩ A, which is the set of regressions of x with respect to A, is nonempty. If the cardinality of R_A(x) is larger than one then, in the relative topology for the line spanned by two of its elements, the interior of the line segment between them is in A and in the interior of B_p(x, ρ). Points in the interior of B_p(x, ρ) are at a distance less than ρ from x; therefore, there is a ball of radius smaller than ρ and centered at x that intersects A, contradicting the definition of ρ.

Proof of Lemma 2: Let α, the degree of local monotonicity, be given. The set A of locally monotonic signals is closed because its complement is open: let v ∈ (R^n − A); since v is not lomo, it has components v_i and v_j with v_i > v_{i+1}, v_j < v_{j+1}, where |i − j| ≤ α − 2. Let ρ = min{|v_i − v_{i+1}|, |v_j − v_{j+1}|}/2. Let Z be the open ball B_∞(v, ρ) (Z is an n-dimensional interval whose edges have equal length 2ρ; it is the Cartesian product of n open intervals: Z_i = (v_i − ρ, v_i + ρ)). From the definition of ρ, it follows that each signal in Z has a sign skeleton with the same ith and jth components; then Z is an open set containing v and points of R^n − A only. This shows that R^n − A is open (see Fig. 16, where all signals in the interval have the fifth and sixth components of their sign skeleton equal to −1 and +1).

Proof of Lemma 3: A ∩ Γ is a subset of Bd(A): if a signal has two consecutive components with the same value, then by slightly perturbing one of them, a non-lomo-3 signal arbitrarily close to it can be obtained. On the other hand, using arguments similar to the proof of the previous lemma, it can be shown that the set A ∩ Γ′ of strictly monotonic signals is open: thus, A ∩ Γ′ is a subset of the interior of A. Since A contains its boundary, it follows that Bd(A) = A ∩ Γ and Int(A) = A ∩ Γ′. Thus the boundary of A equals the set of locally monotonic signals that have at least two consecutive equal components. Also, it can be shown that the interior of A is equal to the interior of M (the set of monotonic signals), consisting of strictly monotonic signals only.

Proof of Lemma 4: Let p, v, and w be as in the hypothesis. Suppose that w¹|w²|···|w^m is the segmentation of w into constant signals. Then the pth power of the p-distance between v and w is

$$[d_p(v, w)]^p = [d_p(v^1, w^1)]^p + \cdots + [d_p(v^m, w^m)]^p$$

where, for each i ∈ /1, m/, v^i is the signal with the same coordinates as w^i. Suppose that for some i ∈ /1, m/, w^i is not a constant regression of the corresponding segment v^i. Then, since the error [d_p(v^i, w^i)]^p is a convex function of the components of w^i, by appropriately perturbing any of these components, the distance between v and w may be decreased while the sign skeleton of w remains the same, contradicting the assumption that w is a locally monotonic regression of v.

For p = ∞, the condition of the lemma is sufficient but not necessary: there exist lomo regressions that are blotched versions of the regressed signal, but there also may exist regressions that are not blotched versions. Consider the distance d_∞(v, w) = max{d_∞(v¹, w¹), ..., d_∞(v^m, w^m)} between a nonlomo signal v and a lomo regression w of v; as long as the segment(s) w^j with the largest distance d_∞(v^j, w^j) are replaced with midranges, the remaining constant segments of w need not be constant regressions: the maximum is not changed. This suggests flexibility in the design of an algorithm that computes lomo regressions under the product metric; the tube algorithm yields regressions that are not always blotched versions of the signal being regressed.

Proof of Lemma 5: Again let p, v, and w be as in the hypothesis; if for some j ∈ /1, m/, w^j is not a constant semiregression of v^j, then by perturbing w^j, d_p(v^j, w^j) can be made closer to a local minimum, so that a new signal that is at a smaller p-distance from v and has the same sign skeleton as w is obtained, contradicting the minimality of d_p(v, w).

Proof of Lemma 6: The lemma is proved for the case where the length of w is the length of v plus one (if v = w, the lemma is clearly true); the general case then follows. Let v = [v_1, ..., v_n] be a signal with constant regression v′ = [v′_1, ..., v′_n] and let w = [v_1, ..., v_n, v_{n+1}] be the signal obtained by appending one additional component to v (without loss of generality) at the end, having constant regression w′ = [w′_1, ..., w′_{n+1}]; let w″ = [w′_1, ..., w′_n].

Let p ∈ (0, ∞) and suppose that d_p(w, w′) < d_p(v, v′). Either v′ ≠ w″ or v′ = w″. If v′ ≠ w″, then w″ is a constant signal and is at a smaller distance from v than v′ is:

$$[d_p(v, w'')]^p = [d_p(w, w')]^p - |v_{n+1} - w'_{n+1}|^p \le [d_p(w, w')]^p < [d_p(v, v')]^p$$

and then v′ is not a constant regression of v, which is a contradiction. If v′ = w″, then

$$[d_p(w, w')]^p = [d_p(v, v')]^p + |v_{n+1} - w'_{n+1}|^p \ge [d_p(v, v')]^p$$

which is also a contradiction.

Next, consider the case p = ∞. In general, for each constant signal c, the distance d_∞(c, x) between c and a signal x is |c − m| + r/2, where m is the midrange of x, r is its range, and c is the value of the components of c; if c is the constant regression of x under the product metric, then c = m and the distance is equal to half the range of the signal being regressed. w_{n+1} is either in the interval [v_(1), v_(n)] bounded by the smallest and largest components of v, or it is not; in the first case the distance does not change, and in the second it increases, which shows the lemma.


APPENDIX B

1. Complexity of Lomo Regression

Let c(n) be the number of elements in the partition C′ of A into convex cones, for a signal of length n. We propose this number as a measure of the complexity of the problem of finding the set of lomo-α regressions of a signal, and derive an expression for c(n) for the case α = 3. A positive component of the sign skeleton of a lomo-3 signal may be followed by a positive or null component only; similarly, a negative component may be followed by a negative or null component only. A null component may be followed by any type of component. The number of sign skeletons that a lomo-3 signal may have is equal to the sum of the elements of a power of a certain 3 × 3 matrix m, obtained as follows. Let g: /−1, 1/ → /1, 3/ be the function g(r) = 2 − r that maps the first and last values of a sign skeleton to matrix coordinates; the element of m^{n−2} at row g(s_1) and column g(s_{n−1}) is the number of sign skeletons with first component s_1 and last component s_{n−1} that a lomo-3 signal may have. For n = 3, the maximal value of the elements of m is 1 (e.g., the skeleton [0, −1] corresponds to element 2, 3 of m). A signal of length 3 has a sign skeleton with two components and its skeleton is one of nine (= 3²) possible skeletons; among them, only seven correspond to lomo-3 signals:

$$m = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix}.$$

Likewise, it may be checked that the sum of the elements of

$$m^2 = \begin{bmatrix} 2 & 2 & 1 \\ 2 & 3 & 2 \\ 1 & 2 & 2 \end{bmatrix}$$

gives the number of sign skeletons that a lomo-3 signal of length 4 may have. Thus c(3) = 7 and c(4) = 17. In each case, the element (i, j) is the number of skeletons with first and last components g^{−1}(i) and g^{−1}(j), respectively, that a lomo-3 signal may have. In general, for α = 3, c(n) is the sum of the components of m^{n−2}. This can be proven using the following induction step. Let n ≥ 4 and suppose that each element m_{ij} in m^{n−3} gives the number of sign skeletons with first and last components g^{−1}(i) and g^{−1}(j) that a lomo-3 signal of length n − 1 may have. Let v = [v_1, ..., v_n] be a lomo-3 signal of length n; the segments [v_1, ..., v_{n−2}, v_{n−1}] and [v_{n−2}, v_{n−1}, v_n] must be lomo-3 as well; the skeleton possibilities for the first (n−1)-segment are given by m^{n−3}, and the skeleton possibilities for the last 3-segment are given by m.


Their product m^{n−2} gives the skeleton possibilities for v: the product of the ith row of m^{n−3} and the jth column of m gives the number of sign skeletons that a lomo-3 signal of length n, with first skeleton component s_1 = g^{−1}(i) and last skeleton component s_{n−1} = g^{−1}(j), may have.

Since m is symmetric, it may be expressed as m = p^T d p, where d is a diagonal matrix with the eigenvalues of m on the diagonal and p is a unitary matrix: p^{−1} = p^T. The eigenvalues of m are λ_1 = 1 + √2, λ_2 = 1, and λ_3 = 1 − √2. Since m^k = p^T d^k p,

$$c(n) = \frac{3}{2}\left(\lambda_1^{n-2} + \lambda_3^{n-2}\right) + \sqrt{2}\left(\lambda_1^{n-2} - \lambda_3^{n-2}\right).$$

For n = 3, 4, 5, 6, 7, and 8, c(n) is given by 7, 17, 41, 99, 239, and 577, respectively. The complexity depends on n and grows exponentially; for n large, c(n) ≈ 2.9 λ_1^{n−2}. For α > 3 the complexity is somewhat smaller.

2. Complexities of Blotching and Tube Algorithms

The complexity of the blotching algorithm is effectively expressed by the number of blotching transformations performed on a signal of length n; this number is 2^{n−1}. In addition (in Step 2 of the blotching algorithm), for each lomo blotched signal, the distance to the original must be computed. We bound the complexity of the tube algorithm with the maximal number of signals in the set L that may be obtained, given a tube of n factors. A signal of length n may have at most n − 1 transitions. The number of signals in L is the product of the widths of the forced transitions. The sum of the widths of the forced transitions is at most n − 1. Thus, the bound is the maximum value of the product Π_i m_i given that Σ_i m_i = n − 1, where each m_i is a natural number. This maximum value is easily shown to be 3^a · 2^b, where a = c − b, b = p(r), c and r are the quotient and remainder of (n + 1)/3, respectively, and p: /0, 2/ → /0, 2/ is the function p(r) = 2 − r. The relatively small exponents a, b indicate that the complexity of the tube algorithm is much lower than that of the blotching algorithm (compare 3^{n/3} to 2^n).
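The closed-form bound can be cross-checked against a direct maximization; a minimal sketch (names ours):

```python
def max_product(total):
    """Maximum product of natural numbers summing to `total`
    (simple dynamic program)."""
    best = [1] * (total + 1)
    for t in range(2, total + 1):
        best[t] = max([t] + [m * best[t - m] for m in range(1, t)])
    return best[total]

def bound_formula(n):
    """3^a * 2^b with b = 2 - r, a = c - b, and c, r the quotient and
    remainder of (n + 1) / 3 (Appendix B)."""
    c, r = divmod(n + 1, 3)
    b = 2 - r
    return 3 ** (c - b) * 2 ** b

for n in range(4, 12):
    assert max_product(n - 1) == bound_formula(n)
print("bound verified for n = 4, ..., 11")
```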

REFERENCES

[1] S. G. Tyan, "Median filtering: Deterministic properties," in Two-Dimensional Digital Signal Processing II: Transforms and Median Filters, T. S. Huang, Ed. Berlin: Springer-Verlag, 1981, pp. 197-217.
[2] N. C. Gallagher and G. L. Wise, "A theoretical analysis of the properties of median filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-29, pp. 1136-1141, 1981.
[3] J. Brandt, "Invariant signals for median filters," Util. Math., vol. 31, pp. 93-105, 1987.
[4] D. Eberly, H. G. Longbotham, and J. Aragon, "Complete classification of roots of one-dimensional median and rank-order filters," IEEE Trans. Signal Processing, vol. 39, pp. 197-200, 1991.
[5] A. R. Butz, "A class of rank order smoothers," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 157-165, 1986.
[6] T. A. Nodes and N. C. Gallagher, "Median filters: Some modifications and their properties," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 739-746, Oct. 1982.
[7] J. B. Bednar and T. L. Watt, "Alpha-trimmed means and their relationship to median filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 145-153, 1984.
[8] P. Heinonen and Y. Neuvo, "FIR median hybrid filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 832-838, 1987.
[9] C. H. Rohwer and L. M. Toerien, "Locally monotone robust approximation of sequences," J. Comput. Appl. Math., vol. 36, pp. 399-408, 1991.
[10] A. C. Bovik, T. S. Huang, and D. C. Munson, "A generalization of median filtering using linear combinations of order statistics," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 1342-1350, 1983.
[11] H. G. Longbotham and A. C. Bovik, "Theory of order statistic filters and their relationship to linear FIR filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 275-281, 1989.
[12] L. Naaman and A. C. Bovik, "Least squares order statistic filters for signal restoration," IEEE Trans. Circuits Syst., vol. 38, pp. 244-257, 1991.
[13] A. Restrepo and A. C. Bovik, "On the statistical optimality of locally monotonic regression," Signal Processing, submitted for publication.
[14] A. Restrepo and A. C. Bovik, "Statistical optimality of locally monotonic regression," presented at the SPIE/SPSE Conf. Nonlinear Image Processing, Santa Clara, CA, 1990.
[15] T. M. Apostol, Mathematical Analysis. Reading, MA: Addison-Wesley, 1957.
[16] A. Restrepo and A. C. Bovik, "Windowed locally monotonic regression," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Toronto, May 1991.
[17] J. W. Tukey, "Nonlinear (nonsuperposable) methods for smoothing data," previously unpublished manuscript (1974), in The Collected Works of John W. Tukey, vol. II, Time Series: 1965-1984, D. R. Brillinger, Ed. Monterey, CA: Wadsworth, 1984.
[18] F. P. Preparata and M. I. Shamos, Computational Geometry: An Introduction. New York: Springer-Verlag, 1985.
[19] J. L. W. V. Jensen, "Sur les fonctions convexes et les inégalités entre les valeurs moyennes," Acta Math., vol. 30, pp. 175-193, 1906.
[20] A. Restrepo, "Locally monotonic regression and related techniques for signal smoothing and shaping," Ph.D. dissertation, Univ. Texas at Austin, 1990.
[21] R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk, Statistical Inference Under Order Restrictions. New York: Wiley, 1972.
[22] H. A. David, Order Statistics. New York: Wiley, 1981.
[23] D. G. Luenberger, Optimization by Vector Space Methods. New York: Wiley, 1969.
[24] A. Restrepo and A. C. Bovik, "Locally monotonic regression," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Glasgow, Scotland, 1989, pp. 1318-1321.
[25] A. Restrepo, I. W. Sandberg, and A. C. Bovik, "Non-Euclidean locally monotonic regression," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Albuquerque, NM, 1990, pp. 1201-1204.
[26] A. C. Bovik, "Streaking in median filtered images," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 493-503, 1987.
[27] I. J. Schoenberg, "Some analytical aspects of the problem of smoothing," in Studies and Essays Presented to R. Courant on his 60th Birthday. New York: Interscience, 1948.
[28] A. Restrepo, "Nonlinear regression for signal processing," in Proc. SPIE/SPSE Conf. Nonlinear Image Processing II, San Jose, CA, Feb. 1991.

Alfredo Restrepo (S'84-M'90) was born in Bogotá, Colombia, on November 28, 1959. He received the Ingeniero Electrónico degree from the Pontificia Universidad Javeriana at Bogotá in 1983, and the M.S. and Ph.D. degrees from the University of Texas at Austin in 1986 and 1990, respectively. He worked as a Laboratory Engineer at Texins de Colombia and a Lecturer at the Universidad Javeriana. Later he was a Teaching Assistant and a Research Assistant in the Laboratory for Vision Systems at the University of Texas. Currently, he is a Research Professor at the Universidad de los Andes in Bogotá. His research interests are nonlinear signal processing, statistical signal processing, and computer vision.

Alan C. Bovik (S'80-M'80-SM'89) was born in Kirkwood, MO, on June 25, 1958. He received the B.S. degree in computer engineering in 1980, and the M.S. and Ph.D. degrees in electrical and computer engineering in 1982 and 1984, respectively, all from the University of Illinois, Urbana-Champaign. He is currently the Hartwig Endowed Fellow and Professor in the Department of Electrical and Computer Engineering, the Department of Computer Sciences, and the Biomedical Engineering Program at the University of Texas at Austin, where he is also the Director of the Laboratory for Vision Systems. During the spring of 1992, he held a visiting position in the Division of Applied Sciences, Harvard University, Cambridge, MA. His current research interests include image processing, computer vision, three-dimensional microscopy, and computational aspects of biological visual perception. He has published over 160 technical articles in these areas. Dr. Bovik received the University of Texas Engineering Foundation Halliburton Faculty Excellence Award, was an Honorable Mention winner of the International Pattern Recognition Society Award for Outstanding Contribution, and was a National Finalist for the 1990 Eta Kappa Nu Outstanding Young Electrical Engineer Award. He has been involved in numerous professional society activities. He has been an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING since 1989 and an Associate Editor for the international journal Pattern Recognition since 1988. He has been a member of the Steering Committee of the IEEE TRANSACTIONS ON IMAGE PROCESSING since 1991; General Chairman of the First IEEE International Conference on Image Processing, to be held in Austin, TX, in November 1994; Local Arrangements Chairman of the IEEE Computer Society Workshop on the Interpretation of 3-D Scenes, October 1989; Program Chairman, SPIE/SPSE Symposium on Electronic Imaging, February 1990; and Conference Chairman, SPIE Conference on Biomedical Image Processing, 1990 and 1991. He is a registered Professional Engineer in the State of Texas and is a frequent consultant to local industrial and academic institutions.
