Vector Cascade Algorithms and Refinable Function Vectors in Sobolev Spaces∗

Bin Han
Department of Mathematical and Statistical Sciences
University of Alberta
Edmonton, Alberta, Canada T6G 2G1
E-mail:
[email protected]

Abstract

In this paper we shall study vector cascade algorithms and refinable function vectors with a general isotropic dilation matrix in Sobolev spaces. By investigating several properties of the initial function vectors in a vector cascade algorithm, we are able to take a relatively unified approach to study several questions such as convergence, rate of convergence and error estimates for a perturbed mask of a vector cascade algorithm in a Sobolev space W_p^k(R^s) (1 ≤ p ≤ ∞, k ∈ N ∪ {0}). We shall characterize the convergence of a vector cascade algorithm in a Sobolev space in various ways. As a consequence, a simple characterization for refinable Hermite interpolants and a sharp error estimate for a perturbed mask of a vector cascade algorithm in a Sobolev space will be presented. The approach in this paper enables us to answer some unsolved questions in the literature on vector cascade algorithms and to comprehensively generalize and improve results on scalar cascade algorithms and scalar refinable functions to the vector case.
Key words: vector cascade algorithm, vector subdivision scheme, refinable function vector, Hermite interpolant, initial function vector, error estimate, sum rules, smoothness.

2000 AMS subject classification: 42C40, 41A25, 41A05, 41A63.
∗
Research supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC Canada) under Grant G121210654 and by Alberta Innovation and Science REE under Grant G227120136. Web: http://www.ualberta.ca/~bhan, Fax: 1-780-492-6826. Date: July 18, 2002.
1
Introduction
Refinable function vectors and vector subdivision schemes, as two of the most important and extensively studied fundamental objects in the literature of wavelet analysis, are useful in many applications such as signal processing and computer aided geometric design ([3, 10, 13, 14, 24, 25, 26, 28, 39, 40, 49]). It is the purpose of this paper to study refinable function vectors and cascade algorithms in a relatively unified approach, so as to obtain a better picture and understanding of some of their properties. An s × s integer matrix M is called a dilation matrix if all its eigenvalues are greater than one in modulus. In this paper, we are concerned with the following vector refinement equation

$$\phi = |\det M| \sum_{\beta \in \mathbb{Z}^s} a(\beta)\,\phi(M\cdot{} - \beta), \tag{1.1}$$
where φ = (φ_1, . . . , φ_r)^T is called an M-refinable function vector, an r × 1 column vector of compactly supported functions or distributions, and a is called a (matrix) mask with multiplicity r, a finitely supported complex-valued sequence of r × r matrices on Z^s. Let N_0 denote the set of all nonnegative integers. For µ = (µ_1, . . . , µ_s) ∈ N_0^s, we denote |µ| := µ_1 + · · · + µ_s, µ! := µ_1! · · · µ_s! and ξ^µ := ξ_1^{µ_1} · · · ξ_s^{µ_s} for ξ = (ξ_1, . . . , ξ_s) ∈ R^s. The partial derivative of a differentiable function f with respect to the j-th coordinate is denoted by D_j f, j = 1, . . . , s, and for µ = (µ_1, . . . , µ_s) ∈ N_0^s, D^µ is the differential operator D_1^{µ_1} · · · D_s^{µ_s}. We denote by W_p^k(R^s) the Sobolev space that consists of all functions f such that D^µ f ∈ L_p(R^s) for all µ ∈ N_0^s with |µ| ≤ k, equipped with the norm

$$\|f\|_{W_p^k(\mathbb{R}^s)} := \sum_{|\mu|\leq k} \|D^\mu f\|_{L_p(\mathbb{R}^s)}.$$
For any Banach space (B, ‖·‖_B), we denote by (B^{m×n}, ‖·‖_{B^{m×n}}) the Banach space of all m × n matrices (b_{j,k})_{1≤j≤m,1≤k≤n} whose entries are elements of B, equipped with the norm

$$\|(b_{j,k})_{1\leq j\leq m,\,1\leq k\leq n}\|_{B^{m\times n}} := \big\|(\|b_{j,k}\|_B)_{1\leq j\leq m,\,1\leq k\leq n}\big\|_{\mathbb{R}^{m\times n}},$$

where ‖·‖_{R^{m×n}} denotes some norm on R^{m×n}. Note that all norms on R^{m×n} are equivalent. In particular, we write R^s := R^{s×1} for short. Start with some appropriate initial function vector φ_0 ∈ (W_p^k(R^s))^{r×1}. In order to solve the vector refinement equation (1.1), we employ the iteration scheme Q_{a,M}^n φ_0 (n ∈ N_0), where Q_{a,M} is the cascade operator on (L_p(R^s))^{r×1} (1 ≤ p ≤ ∞) given by

$$Q_{a,M} f := |\det M| \sum_{\beta\in\mathbb{Z}^s} a(\beta)\, f(M\cdot{} - \beta), \qquad f \in (L_p(\mathbb{R}^s))^{r\times 1}. \tag{1.2}$$
This iteration scheme is called a (vector) cascade algorithm (see [3, 10]) associated with mask a and dilation matrix M. If φ is a fixed point of Q_{a,M} (that is, Q_{a,M}φ = φ), then φ must satisfy (1.1). When the multiplicity r = 1, a vector cascade algorithm and a refinable function vector are called a scalar cascade algorithm and a scalar refinable function, respectively.
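As a concrete illustration (not from the paper), the following sketch iterates the cascade operator (1.2) in one dimension with M = 2 and r = 1. All names here are ours; we use the standard hat-function mask, whose refinable function is the piecewise-linear hat, so the hat is a fixed point of the cascade operator:

```python
import numpy as np

def cascade_step(mask, M, f):
    """One application of the cascade operator (1.2) in one dimension:
    (Q_{a,M} f)(x) = |det M| * sum_beta a(beta) f(M x - beta)."""
    def Qf(x):
        return abs(M) * sum(c * f(M * x - beta) for beta, c in mask.items())
    return Qf

# Hat-function mask for M = 2: a(-1) = 1/4, a(0) = 1/2, a(1) = 1/4.
mask = {-1: 0.25, 0: 0.5, 1: 0.25}
hat = lambda x: max(1.0 - abs(x), 0.0)

# The hat function solves the refinement equation, so Q_{a,2} hat = hat.
f = hat
for _ in range(3):
    f = cascade_step(mask, 2, f)
xs = np.linspace(-2.0, 2.0, 81)
err = max(abs(f(x) - hat(x)) for x in xs)
assert err < 1e-12  # the cascade algorithm leaves its fixed point unchanged
```

Starting instead from, say, the box function, the same iteration converges to the hat, which is the convergence question studied below.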
Convergence of vector cascade algorithms and various properties of refinable function vectors have been extensively studied in the literature; to mention only a few references here, see [1]–[50] and the numerous references therein. For example, a comprehensive study of stationary cascade algorithms was given in [3]. Convergence of cascade algorithms has been studied in [4, 16, 18, 22, 24, 28, 34, 36, 39, 42, 43, 50]. This paper is largely motivated by the work of Chen, Jia and Riemenschneider [4] on convergence of vector cascade algorithms and by the work in [25] on refinable Hermite interpolants and their applications in computer aided geometric design. Though vector cascade algorithms and vector subdivision schemes have been relatively well studied in the literature, there are still several unanswered questions in this area, and we feel that a relatively unified and self-contained approach is helpful for a better picture and understanding of these and related topics. For a compactly supported r × 1 function vector f on R^s, we say that the shifts of f are stable (see [35]) if span{f̂(ξ + 2πβ) : β ∈ Z^s} = C^{r×1} for all ξ ∈ R^s, where the Fourier transform ĝ of g ∈ L_1(R^s) is defined by

$$\widehat{g}(\xi) = \int_{\mathbb{R}^s} g(t)\,e^{-it\cdot\xi}\,dt, \qquad \xi\in\mathbb{R}^s,$$

and can be naturally extended to tempered distributions. In the following, let us mention some questions that motivate this work.

Q1: As in [4], let Y_k denote the set of all appropriate initial function vectors in a cascade algorithm. It was asked in Chen, Jia and Riemenschneider [4]: "It would be interesting to know whether there always exists some F = (f_1, . . . , f_r)^T in Y_k such that the shifts of f_1, . . . , f_r are stable."

Q2: Suppose that the answer to Q1 is affirmative and the cascade algorithm with such an initial function vector F converges in a Sobolev space. Is it true that the cascade algorithm with every initial function vector in Y_k will converge in the same Sobolev space?
Q3: As an interesting family of refinable function vectors, refinable Hermite interpolants are of interest in computer aided geometric design (see [13, 20, 25, 40, 49]). How can a refinable Hermite interpolant be characterized in terms of its mask?

Q4: In many situations, truncation and perturbation of a mask are needed in applications. How will the perturbation of a matrix mask affect its vector cascade algorithm and its refinable function vector?

The structure of the paper is as follows. In Section 2, we shall introduce some auxiliary results which are of interest in their own right. Then we shall demonstrate that, based on a simple observation, vector cascade algorithms and refinable function vectors can be essentially investigated using techniques from the scalar case. At the end of Section 2, we shall study the structures of two very important subspaces in wavelet analysis. In Section 3, we shall investigate necessary conditions on the initial function vectors in a cascade algorithm. The difficulty in Q1 partially lies in the fact that the set Y_k, as described in [4], has a rather complicated structure. Our investigation leads to a very simple way of describing the set Y_k of all possible initial function vectors and consequently allows us to answer Q1 affirmatively (See Proposition 3.4). In Section 3, we shall also investigate the mutual relations among the initial function vectors in a cascade algorithm. It turns out that such mutual relations are very useful in investigating many problems related to cascade algorithms.
In Section 4, we shall characterize the convergence of a vector cascade algorithm in a Sobolev space in terms of its mask in various ways. In particular, we shall give a positive answer to Q2 (See Theorem 4.3). It turns out that there is a very important quantity ν_p(a; M), defined in (4.3) in Section 4, which connects the convergence of cascade algorithms with the smoothness of refinable function vectors. More precisely, when M is isotropic and the shifts of a refinable function vector φ with mask a and dilation matrix M are stable, the quantity ν_p(a; M) characterizes the L_p smoothness of φ and in fact gives the critical L_p smoothness exponent of φ. On the other hand, we shall show in Section 4 that a vector cascade algorithm associated with mask a and dilation matrix M converges in the Sobolev space W_p^k(R^s) for every initial function vector in Y_k if and only if ν_p(a; M) > k. In the rest of Section 4, we shall also investigate the rate of convergence of a vector cascade algorithm. In Section 5, we shall completely characterize a refinable Hermite interpolant in terms of its mask, which settles Q3 (See Corollary 5.2). We show that a refinable function vector φ with mask a and dilation matrix M is a Hermite interpolant of order r if and only if its mask a is a Hermite interpolatory mask of order r and ν_∞(a; M) > r. In Section 6, we shall study how the perturbation of a mask affects its vector cascade algorithm and its refinable function vector. We settle Q4 by obtaining a sharp error estimate for a vector cascade algorithm and a refinable function vector with a perturbed mask in Section 6 (See Theorem 6.3). The results in Section 6 are not trivial generalizations of the corresponding results in the scalar case, since when r > 1 the set Y_k of initial function vectors indeed depends on the perturbed mask and therefore is not invariant under perturbation.
Since the quantity νp (a; M ) is very important, in Section 7, we shall discuss how to compute the particular quantity ν2 (a; M ) by an efficient numerical algorithm (See Theorem 7.1). We shall also discuss how to compute νp (a; M ) by factorizing the symbol of a univariate matrix mask a. In this paper, we not only give alternative proofs for and improve some known results in the literature, but also obtain some new results on vector cascade algorithms. Our approach in this paper is relatively unified and may yield relatively simple proofs. The approach in this paper will be helpful for other problems related to vector cascade algorithms and refinable function vectors.
2
Auxiliary Results and the Structure of Two Subspaces
In this section, we shall introduce some auxiliary results. Then we shall investigate the structure of two subspaces which play an important role in analyzing various properties of vector cascade algorithms and refinable function vectors. For k ∈ N ∪ {0}, let O_k be the ordered set {µ ∈ N_0^s : |µ| = k} under the lexicographic order; that is, ν = (ν_1, . . . , ν_s) is less than µ = (µ_1, . . . , µ_s) in the lexicographic order if, for some 1 ≤ i ≤ s, ν_j = µ_j for j = 1, . . . , i − 1 and ν_i < µ_i. By #O_k we denote the cardinality of the set O_k. For an s × s matrix N, S(N, O_k) is defined to be the (#O_k) × (#O_k) matrix determined by

$$\frac{(Nx)^\mu}{\mu!} = \sum_{\nu\in O_k} S(N,O_k)_{\mu,\nu}\,\frac{x^\nu}{\nu!}, \qquad \mu\in O_k. \tag{2.1}$$
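The defining relation (2.1) can be checked symbolically. The sketch below (our code, using one fixed ordering of O_k with the index e_1 first; the composition identity is independent of the ordering as long as it is used consistently) computes S(N, O_k) by expanding (Nx)^µ/µ! in the basis x^ν/ν!:

```python
import sympy as sp

def S(N, k):
    """Compute S(N, O_k) from (2.1): the coefficient of x^nu / nu!
    in the expansion of (N x)^mu / mu!, for multi-indices |mu| = k."""
    s = N.shape[0]
    x = sp.symbols(f'x0:{s}')
    Nx = N * sp.Matrix(x)
    def multi(k, s):  # multi-indices of length s with |mu| = k
        if s == 1:
            return [(k,)]
        return [(i,) + rest for i in range(k, -1, -1) for rest in multi(k - i, s - 1)]
    Ok = multi(k, s)
    mono = lambda mu: sp.prod(xi**m for xi, m in zip(x, mu))
    fact = lambda mu: sp.prod(sp.factorial(m) for m in mu)
    rows = []
    for mu in Ok:
        p = sp.expand(sp.prod(Nx[i]**m for i, m in enumerate(mu)) / fact(mu))
        poly = sp.Poly(p, *x)
        # (2.1): coefficient of x^nu equals S(N,O_k)_{mu,nu} / nu!
        rows.append([poly.coeff_monomial(mono(nu)) * fact(nu) for nu in Ok])
    return sp.Matrix(rows)

A = sp.Matrix([[1, 1], [1, -1]])
B = sp.Matrix([[2, 0], [1, 3]])
assert S(A, 1) == A                     # for k = 1, S(N, O_1) is N itself
assert S(A, 2) * S(B, 2) == S(A * B, 2) # S(A,O_k) S(B,O_k) = S(AB,O_k)
```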
It is obvious that S(A, O_k)S(B, O_k) = S(AB, O_k). The (right) Kronecker product of two matrices A = (a_{i,j})_{1≤i≤I,1≤j≤J} and B = (b_{ℓ,n})_{1≤ℓ≤L,1≤n≤N}, written A ⊗ B, is defined to be the matrix

$$A \otimes B := \begin{bmatrix} a_{1,1}B & a_{1,2}B & \cdots & a_{1,J}B \\ a_{2,1}B & a_{2,2}B & \cdots & a_{2,J}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{I,1}B & a_{I,2}B & \cdots & a_{I,J}B \end{bmatrix}.$$

Clearly, A ⊗ B is an (IL) × (JN) matrix and the (k, m)-entry of A ⊗ B is a_{i,j} b_{ℓ,n}, where i, j, ℓ, n are uniquely determined by k = (i − 1)L + ℓ and m = (j − 1)N + n with 1 ≤ i ≤ I, 1 ≤ j ≤ J, 1 ≤ ℓ ≤ L and 1 ≤ n ≤ N. So for simplicity, we write [A ⊗ B]_{i,j;ℓ,n} to denote the ((i − 1)L + ℓ, (j − 1)N + n)-entry of the matrix A ⊗ B. It is well known that (A + B) ⊗ C = (A ⊗ C) + (B ⊗ C), C ⊗ (A + B) = (C ⊗ A) + (C ⊗ B) and (A ⊗ B)(C ⊗ E) = (AC) ⊗ (BE). The following result generalizes [22, Proposition 2.6] and is convenient for dealing with derivatives in Sobolev spaces.

Proposition 2.1 Let D := [D_1, . . . , D_s] be the row vector of differentiation operators. Denote by ⊗^k D := D ⊗ · · · ⊗ D (k copies of D) the 1 × s^k row vector of kth order differentiation operators, where ⊗ denotes the (right) Kronecker product. Let N be an s × s real-valued matrix. For any matrix f of functions in C^k(R^s) and any matrices B and C of complex numbers such that the multiplication BfC is well defined,

$$[\otimes^k D] \otimes [Bf(N\cdot)C](\cdot) = B\big([\otimes^k D]\otimes f\big)(N\cdot)\big([\otimes^k N]\otimes C\big), \tag{2.2}$$

or equivalently,

$$D^k \otimes [Bf(N\cdot)C](\cdot) = B\,(D^k\otimes f)(N\cdot)\,(S(N,O_k)\otimes C), \tag{2.3}$$
where D^k := (D^µ)_{µ∈O_k} is a 1 × (#O_k) row vector of kth order differentiation operators and S(N, O_k) is defined in (2.1).

Proof: Let F = BfC and assume that F is an m × n matrix. For 1 ≤ i ≤ s, 1 ≤ j ≤ m and 1 ≤ ℓ ≤ n, we have

$$\big[D\otimes[F(N\cdot)]\big]_{1,i;j,\ell} = D_i[F_{j,\ell}(N\cdot)] = \sum_{t=1}^{s}[D_tF_{j,\ell}](N\cdot)\,N_{t,i} = [(DN)_{1,i}F_{j,\ell}](N\cdot) = [(DN)\otimes F]_{1,i;j,\ell}(N\cdot).$$

By induction, we have

$$[\otimes^k D]\otimes[Bf(N\cdot)C](\cdot) = [\otimes^k(DN)]\otimes[BfC](N\cdot) = B\big[\otimes^k(DN)\otimes(fC)\big](N\cdot) = B\big([\otimes^k D]\otimes f\big)(N\cdot)\big([\otimes^k N]\otimes C\big).$$
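The two Kronecker-product identities used repeatedly above and in the proof of Lemma 2.2 below, the mixed-product rule and the vec identity, are easy to verify numerically (a sketch in our notation; vec stacks columns, i.e. column-major order):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)); E = rng.standard_normal((2, 5))

# Mixed-product rule: (A (x) B)(C (x) E) = (AC) (x) (BE).
assert np.allclose(np.kron(A, B) @ np.kron(C, E), np.kron(A @ C, B @ E))

# vec identity: vec(C X E) = (E^T (x) C) vec(X), with column-major vec.
X = rng.standard_normal((4, 2))
vec = lambda M: M.reshape(-1, order='F')
assert np.allclose(vec(C @ X @ E), np.kron(E.T, C) @ vec(X))
```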
In order to prove (2.3), we define a (#O_k) × s^k matrix H by

$$H_{\mu,j} := \begin{cases}1, & \text{if } [\otimes^k D]_{1,j} = D^\mu,\\ 0, & \text{otherwise},\end{cases} \qquad \mu\in O_k,\ j = 1,\dots,s^k,$$

and an s^k × (#O_k) matrix G by

$$G_{j,\mu} := \begin{cases}1, & \text{if } j = \min\{i : [\otimes^k D]_{1,i} = D^\mu\},\\ 0, & \text{otherwise},\end{cases} \qquad j = 1,\dots,s^k,\ \mu\in O_k.$$

It is easy to verify that

$$HG = I_{\#O_k}, \qquad D^k = (\otimes^k D)G, \qquad \otimes^k D = D^k H \qquad\text{and}\qquad S(N,O_k) = H(\otimes^k N)G.$$

Suppose that C is an ℓ × n matrix. It follows from (2.2) that

$$\begin{aligned}
D^k\otimes[Bf(N\cdot)C] &= ([\otimes^k D]G)\otimes(Bf(N\cdot)C) = \big([\otimes^k D]\otimes[Bf(N\cdot)]\big)(G\otimes C)\\
&= B\big([\otimes^k D]\otimes f\big)(N\cdot)\big([\otimes^k N]\otimes I_\ell\big)(G\otimes C)\\
&= B\big((D^kH)\otimes f\big)(N\cdot)\big(([\otimes^k N]G)\otimes C\big)\\
&= B(D^k\otimes f)(N\cdot)(H\otimes I_\ell)\big(([\otimes^k N]G)\otimes C\big)\\
&= B(D^k\otimes f)(N\cdot)\big((H[\otimes^k N]G)\otimes C\big) = B(D^k\otimes f)(N\cdot)(S(N,O_k)\otimes C).
\end{aligned}$$

This completes the proof.

The following result will be needed later and is of interest in its own right.

Lemma 2.2 Let M be an s × s matrix. Let A, B, C, E, F be given matrices of 2π-periodic trigonometric polynomials such that A, B, C, E are square matrices, and let X be an unknown m × n matrix of 2π-periodic trigonometric polynomials such that A(ξ)X(M^Tξ)B(ξ) − C(ξ)X(ξ)E(ξ) − F(ξ) is well defined. Suppose that X(0) satisfies A(0)X(0)B(0) = C(0)X(0)E(0) + F(0). If (⊗^j M) ⊗ B(0)^T ⊗ A(0) − I_{s^j} ⊗ E(0)^T ⊗ C(0) (or equivalently, S(M, O_j) ⊗ B(0)^T ⊗ A(0) − I_{#O_j} ⊗ E(0)^T ⊗ C(0)) is invertible for every j = 1, . . . , k, then the system of linear equations

$$D^\mu[A(\cdot)X(M^T\cdot)B(\cdot)](0) = D^\mu[C(\cdot)X(\cdot)E(\cdot)](0) + D^\mu F(0), \qquad 0 < |\mu|\leq k,$$

has a unique solution for {D^µ X(0) : 0 < |µ| ≤ k}.

Proof: It is well known that vec(CXE) = (E^T ⊗ C)vec(X), where for X = (X_{i,j})_{1≤i≤m,1≤j≤n} we denote

$$\mathrm{vec}(X) := (X_{1,1},\dots,X_{m,1},X_{1,2},\dots,X_{m,2},\dots,X_{1,n},\dots,X_{m,n})^T.$$

Rewrite the equations as $D^\mu[(B(\cdot)^T\otimes A(\cdot))\,\mathrm{vec}(X(M^T\cdot))](0) = D^\mu[(E(\cdot)^T\otimes C(\cdot))\,\mathrm{vec}(X(\cdot))](0) + D^\mu[\mathrm{vec}(F)](0)$. So, it suffices to prove the claim with B = E = I. Now the system of linear equations becomes

$$[\otimes^j D]\otimes[A(0)X(M^T\cdot) - C(0)X(\cdot)](0) = [\otimes^j D]\otimes\big[(C(\cdot)-C(0))X(\cdot) + F(\cdot) + (A(0)-A(\cdot))X(M^T\cdot)\big](0) =: G_j$$
for j = 1, . . . , k. By the Leibniz differentiation formula, we observe that G_j only involves D^µ X(0) with |µ| < j. So, by Proposition 2.1, we have

$$A(0)\big([\otimes^j D]\otimes X\big)(0)\big([\otimes^j M^T]\otimes I_n\big) - C(0)\big([\otimes^j D]\otimes X\big)(0)\,I_{s^j n} = G_j, \qquad j = 1,\dots,k.$$

That is,

$$\big([\otimes^j M]\otimes I_n\otimes A(0) - I_{s^j n}\otimes C(0)\big)\,\mathrm{vec}\big(([\otimes^j D]\otimes X)(0)\big) = \mathrm{vec}(G_j), \qquad j = 1,\dots,k.$$

Since the matrix (⊗^j M) ⊗ I_n ⊗ A(0) − I_{s^j n} ⊗ C(0) is invertible for every j = 1, . . . , k, we have

$$\mathrm{vec}\big(([\otimes^j D]\otimes X)(0)\big) = \big([\otimes^j M]\otimes I_n\otimes A(0) - I_{s^j n}\otimes C(0)\big)^{-1}\mathrm{vec}(G_j).$$

The proof is completed by induction on j = 1, . . . , k.

A dilation matrix M is isotropic if M is similar to a diagonal matrix diag(σ_1, . . . , σ_s) such that |σ_1| = · · · = |σ_s| = |det M|^{1/s}. It was proved in [22] that an s × s matrix M is isotropic if and only if there exists a norm ‖·‖_M on C^{s×1} such that

$$\|Mx\|_M = |\det M|^{1/s}\,\|x\|_M \qquad \forall\,x\in\mathbb{C}^{s\times 1}. \tag{2.4}$$

When M is an isotropic matrix, we denote by ‖·‖_M a norm on C^{s×1} such that (2.4) holds. For a matrix or an operator A, we denote by ρ(A) := lim_{n→∞} ‖A^n‖^{1/n} the spectral radius of A. When A is an s × s isotropic matrix, we have ρ(A) = |det A|^{1/s}. The Fourier series, or symbol, of a sequence a on Z^s is defined to be

$$\widehat{a}(\xi) := \sum_{\beta\in\mathbb{Z}^s} a(\beta)\,e^{-i\beta\cdot\xi}, \qquad \xi\in\mathbb{R}^s. \tag{2.5}$$
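A standard isotropic example (ours, for illustration) is the two-dimensional quincunx matrix, whose eigenvalues all have modulus |det M|^{1/s}; the sketch also checks ρ(M) = |det M|^{1/s} via the Gelfand formula:

```python
import numpy as np

# Quincunx matrix M = [[1, 1], [1, -1]]: eigenvalues are +-sqrt(2),
# all of modulus |det M|^{1/s} = 2^{1/2}, so M is isotropic.
M = np.array([[1.0, 1.0], [1.0, -1.0]])
s = M.shape[0]
eigs = np.linalg.eigvals(M)
assert np.allclose(np.abs(eigs), abs(np.linalg.det(M)) ** (1.0 / s))

# Spectral radius via rho(A) = lim ||A^n||^{1/n}.
rho = np.linalg.norm(np.linalg.matrix_power(M, 64), 2) ** (1.0 / 64)
assert abs(rho - np.sqrt(2.0)) < 1e-6
```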
Throughout this paper, we denote by a_n (n ∈ N_0) the sequence defined by

$$\widehat{a_n}(\xi) := \prod_{j=1}^{n}\widehat{a}((M^T)^{n-j}\xi) = \widehat{a}((M^T)^{n-1}\xi)\cdots\widehat{a}(M^T\xi)\,\widehat{a}(\xi). \tag{2.6}$$
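In the time domain, (2.6) says that a_n is obtained by convolving upsampled copies of the mask. A one-dimensional numerical check with the hat-function mask and M = 2 (our code and notation; for n = 2, the symbol identity is â_2(ξ) = â(2ξ)â(ξ)):

```python
import numpy as np

a = np.array([0.25, 0.5, 0.25])              # hat-function mask on {-1, 0, 1}
up = np.zeros(2 * len(a) - 1); up[::2] = a   # zero-insertion: symbol xi -> 2 xi
a2 = np.convolve(up, a)                      # a_2, supported on {-3, ..., 3}

# Check hat{a_2}(xi) = hat{a}(2 xi) * hat{a}(xi) on a grid of frequencies.
sym = lambda c, off, xi: sum(cj * np.exp(-1j * (j + off) * xi) for j, cj in enumerate(c))
xi = np.linspace(-np.pi, np.pi, 17)
lhs = sym(a2, -3, xi)
rhs = sym(a, -1, 2 * xi) * sym(a, -1, xi)
assert np.allclose(lhs, rhs)
```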
The sequence a_n is closely related to a vector subdivision scheme used in computer aided geometric design and plays an important role in investigating a vector cascade algorithm.

Lemma 2.3 Let M be an s × s isotropic dilation matrix. Suppose that f_n (n ∈ N) are function vectors in (W_p^k(R^s))^{r×1} such that the sequence f_n converges to f_∞ in the Sobolev space (W_p^k(R^s))^{r×1} and all f_n vanish outside a fixed compact set of R^s. Then for all ξ ≠ 0 and |µ| ≤ k,

$$\lim_{n\to\infty}\rho(M)^{kn}\widehat{f_n}((M^T)^n\xi) = \lim_{n\to\infty}D^\mu[\widehat{f_n}((M^T)^n\cdot)](\xi) = 0.$$

Proof: Since all f_n are supported in a fixed compact set, by the Hölder inequality it follows from the assumption lim_{n→∞} ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}} = 0 that lim_{n→∞} ‖f_n − f_∞‖_{(W_1^k(R^s))^{r×1}} = 0. Let N := M^T and let ξ be a fixed nonzero point in R^s. Since $\widehat{D^\mu f_n}(\xi) = (i\xi)^\mu\widehat{f_n}(\xi)$ and $\|\widehat{D^\mu(f_n - f_\infty)}(N^n\xi)\| \le \|D^\mu(f_n - f_\infty)\|_{(L_1(\mathbb{R}^s))^{r\times 1}} \le \|f_n - f_\infty\|_{(W_1^k(\mathbb{R}^s))^{r\times 1}}$ for |µ| ≤ k, we have

$$\|(iN^n\xi)^\mu\widehat{f_n}(N^n\xi)\| = \|\widehat{D^\mu f_n}(N^n\xi)\| \le \|\widehat{D^\mu f_\infty}(N^n\xi)\| + \|f_n - f_\infty\|_{(W_1^k(\mathbb{R}^s))^{r\times 1}}.$$

By the Riemann-Lebesgue lemma, we conclude that

$$\lim_{n\to\infty}\|(iN^n\xi)^\mu\widehat{f_n}(N^n\xi)\| \le \lim_{n\to\infty}\|\widehat{D^\mu f_\infty}(N^n\xi)\| + \lim_{n\to\infty}\|f_n - f_\infty\|_{(W_1^k(\mathbb{R}^s))^{r\times 1}} = 0 \qquad \forall\,|\mu|\le k.$$
n→∞
The claim follows directly from the above identity, Proposition 2.1 and the assumption that M is an isotropic dilation matrix. We denote by `0 (Zs ) the linear space of all finitely supported sequences on Zs . Similarly, ¡P ¢ p 1/p `p (Zs ) denotes the linear space of all sequences v on Zs such that kvk`p (Zs ) := |v(β)| < s β∈Z ∞. When K is a compact subset of Zs , `(K) denotes the linear space of all v ∈ `0 (Zs ) such that v vanishes outside K. By δ we denote the Dirac sequence on Zs such that δ(0) = 1 and δ(β) = 0 for all β ∈ Zs \{0}. Let a be a matrix mask with multiplicity r. We say that a satisfies the sum rules of order k + 1 with respect to the lattice M Zs ([1, 2, 20, 29, 32, 38]) if there exists a sequence y ∈ (`0 (Zs ))1×r such that yb(0) 6= 0, Dµ [b y (M T ·)b a(·)](0) = Dµ yb(0)
∀ |µ| 6 k, µ ∈ Ns0
(2.7)
and Dµ [b y (M T ·)b a(·)](2πβ) = 0
∀ |µ| 6 k, β ∈ (M T )−1 Zs \Zs .
(2.8)
Using the Leibniz differentiation formula, the definition of sum rules can be given in the time domain ([2, 20, 32, 38]). For example, the definition of sum rules in [20, (1.6)] is equivalent to (2.7) and (2.8) by replacing yµ in [20, (1.6)] by (−iD)µ yb(0)/µ!. The following result is quite useful in studying vector cascade algorithms and refinable function vectors. Proposition 2.4 Let y ∈ (`0 (Zs ))1×r such that yb(0) 6= 0. Then there exists Uy ∈ (`0 (Zs ))r×r by (ξ) is a nonzero constant (that is, U by (ξ)−1 is a matrix of 2π-periodic trigonometric such that det U by (ξ) satisfies polynomials) and b ye(ξ) = [b ye1 (ξ), . . . , b yer (ξ)] := yb(ξ)U b ye1 (0) = 1,
D µb yej (0) = 0
∀ j = 2, . . . , r
and |µ| 6 k.
b T ξ) = b b Let a be a finitely supported mask with multiplicity r and let φ satisfy φ(M a(ξ)φ(ξ). Define b by (M T ξ)−1b by (ξ) e a(ξ) := U a(ξ)U
and
be b by (ξ)−1 φ(ξ). φ(ξ) := U
be T be Then φ(M ξ) = b e a(ξ)φ(ξ). The equation in (2.7) holds if and only if Dµ [b ye(M T ·)b e a(·)](0) = Dµb ye(0)
∀ |µ| 6 k.
Consequently, when (2.7) holds, b e a(ξ) must take the following form: · ¸ b a1,1 (ξ) b a1,2 (ξ) with b a1,1 (0) = 1, Dµb a1,2 (0) = 0 b a2,1 (ξ) b a2,2 (ξ)
∀ |µ| 6 k,
(2.9)
where a_{1,1}, a_{1,2}, a_{2,1} and a_{2,2} are sequences whose symbols are 1 × 1, 1 × (r − 1), (r − 1) × 1 and (r − 1) × (r − 1) matrices of 2π-periodic trigonometric polynomials, respectively. Moreover, the following statements are equivalent:

a) The mask a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y;

b) The mask ã satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence whose symbol is $[\widehat{\tilde y_1}(\xi), 0, \dots, 0]$;

c) $\widehat{\tilde a}(\xi)$ takes the form of (2.9) and

$$D^\mu\widehat{a_{1,1}}(2\pi\beta) = 0 \quad\text{and}\quad D^\mu\widehat{a_{1,2}}(2\pi\beta) = 0 \qquad \forall\,|\mu|\le k,\ \beta\in(M^T)^{-1}\mathbb{Z}^s\backslash\mathbb{Z}^s. \tag{2.10}$$
∀ |µ| 6 k, j = 2, . . . , r,
or equivalently, Dµb cj (0) = Dµ [b yj (·)/b y1 (·)](0) for all |µ| 6 k and j = 2, . . . , r. Define a sequence s r×r Uy ∈ (`0 (Z )) by · ¸ 1 1 −b c (ξ) by (ξ) = with b c(ξ) = [b c2 (ξ), . . . , b cr (ξ)]. U yb1 (0) 0 Ir−1 by (ξ) is desired. Other statements can be easily proved by a It is easy to verify that b ye(ξ) = yb(ξ)U direct computation and the Leibniz differentiation formula. The convolution of two sequences is defined to be X [v1 ∗ v2 ](α) := v1 (β)v2 (α − β), v1 ∈ (`0 (Zs ))`×m , v2 ∈ (`0 (Zs ))m×n . β∈Zs
Define a semi-convolution of a function and a sequence as follows: X v ∗ f := v(β)f (· − β), v ∈ (`0 (Zs ))`×m , f ∈ (Lp (Rs ))m×n ,
(2.11)
β∈Zs
or f ∗ v :=
P β∈Zs
f (· − β)v(β) for f ∈ (Lp (Rs ))`×m and v ∈ (`0 (Zs ))m×n . It is easy to verify that
v1 ∗ (v2 ∗ f ) = (v1 ∗ v2 ) ∗ f,
v1 ∈ (`0 (Zs ))`×m , v2 ∈ (`0 (Zs ))m×n , f ∈ (Lp (Rs ))n×k .
Given y ∈ (`0 (Zs ))1×r , we now define two interesting subspaces associated with y which play an important role in wavelet analysis. Let D := [D1 , . . . , Ds ] be the row vector of differentiation operators. Define © ª Vk,y := v ∈ (`0 (Zs ))r×1 : Dµ [b y (·)b v (·)](0) = 0 ∀ |µ| 6 k (2.12) and
© ª Pk,y := [p(· − iDT )b y ](0) ∈ (Πk )1×r : p ∈ Πk , 2
(2.13)
where i denotes the imaginary unit such that i = −1, Πk denotes the linear space of all polynomials with total degree no greater than k, and X X (−iD)µ (Dµ p)(·) [p(· − iDT )b y ](0) := yb(0) = p(· − β)y(β) = p ∗ y, p ∈ Πk . µ! s s µ∈N β∈Z 0
9
So Pk,y = {p ∗ y : p ∈ Πk }. For p ∈ (Πk )m×n , we shall use p to denote both the polynomial matrix p(·) and the polynomial sequence (p(β))β∈Zs since they can be easily distinguished in the context. Using convolution, we see that Vk,y = {v ∈ (`0 (Zs ))r×1 : p ∗ (y ∗ v)(0) = 0
∀ p ∈ Πk }.
(2.14)
Proposition 2.5 Let y ∈ (`0 (Zs ))1×r such that yb(0) 6= 0. Let Vk,y and Pk,y be defined in (2.12) and (2.13), respectively. Then 1) v ∈ Vk,y ⇒ v(· − β) ∈ Vk,y for all β ∈ Zs ; that is, Vk,y is shift invariant; 2) p ∈ Pk,y ⇒ Dµ p ∈ Pk,y and p(· − β) ∈ Pk,y for all µ ∈ Ns0 and β ∈ Zs ; © ª P 3) Vk,y = v ∈ (`0 (Zs ))r×1 : ∀ p ∈ Pk,y ; β∈Zs p(β)v(−β) = p ∗ v(0) = 0 ª © P p(β)v(−β) = p ∗ v(0) = 0 ∀ v ∈ V ; 4) Pk,y = p ∈ (Πk )1×r : s k,y β∈Z 5) Let Uy be given in Proposition 2.4. Then Vk,y := span{v(· − β) : v ∈ Bk,y , β ∈ Zs }, that is, Bk,y generates the shift invariant space Vk,y , where Bk,y is defined to be µ δ(ξ)U d by (ξ)e1 , |µ| = k + 1} ∪ {v : vb(ξ) = U by (ξ)ej , j = 2, . . . , r}, Bk,y := {v : vb(ξ) = ∇ (2.15) −iξ µ −iξ µ µ δ(ξ , . . . , ξ ) := (1 − e 1 d where ∇ ) 1 · · · (1 − e s ) s for µ = (µ1 , . . . , µs ) and ej denotes 1 s r the jth coordinate unit vector in R ;
6) The mask a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y if and only if Sa,M Pk,y ⊆ Pk,y (In fact, Sa,M p − p(M −1 ·) ∈ Pdeg(p)−1,y for all p ∈ Pk,y ), where the subdivision operator Sa,M is defined to be X Sa,M v(α) := |detM | v(β)a(α − M β), v ∈ (`0 (Zs ))1×r ; (2.16) β∈Zs
6) The mask a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y if and only if S_{a,M}P_{k,y} ⊆ P_{k,y} (in fact, S_{a,M}p − p(M^{−1}·) ∈ P_{deg(p)−1,y} for all p ∈ P_{k,y}), where the subdivision operator S_{a,M} is defined to be

$$S_{a,M}v(\alpha) := |\det M|\sum_{\beta\in\mathbb{Z}^s}v(\beta)\,a(\alpha - M\beta), \qquad v\in(\ell_0(\mathbb{Z}^s))^{1\times r}; \tag{2.16}$$
Proof: By the definition of Vk,y and Pk,y , 1) and 2) hold. 3) follows directly from (2.14) and (p∗y)∗v = p∗(y∗v). 4) can be easily verified by considering the special case yb(ξ) = [b y1 (ξ), 0, . . . , 0]. by (ξ). By Proposition 2.4, we have Vk,ey = Vk,[ey ,0,...,0] = Vk × `0 (Zs ) × · · · × Take b ye(ξ) = yb(ξ)U 1 `0 (Zs ) with (r − 1) copies of `0 (Zs ), where Vk is defined to be n o n o X s s µ Vk := v ∈ `0 (Z ) : p(β)v(−β) = 0 ∀ p ∈ Πk = v ∈ `0 (Z ) : D vb(0) = 0 ∀ |µ| 6 k . β∈Zs
(2.18) 10
It is known (see [29]) that Vk = span{∇µ δ(· − β) : β ∈ Zs , |µ| = k + 1} which can be proved by using long division (see [17, 18]). Consequently, we observe that {∇µ δe1 : |µ| = k + 1} ∪ {δej : j = 2, . . . , r} generates Vk,ey . Now it is easy to see that Bk,y generates Vk,y . In order to prove 6), by Proposition 2.4, it suffices to prove it for the special case that yb(ξ) = [b y1 (ξ), 0, . . . , 0] and b a(ξ) takes the form of (2.9). Let b ∈ `0 (Zs ). It is an easy exercise to show that X p(β)b(· − M β) ∈ Πk ∀ p ∈ Πk ⇐⇒ Dµbb(2πβ) = 0 ∀ |µ| 6 k, β ∈ (M T )−1 Zs \Zs . (2.19) β∈Zs
When (2.19) holds, one has X
p(β)b(· − M β) =
β∈Zs
X (−iM −1 DT )µ b 1 Dµ p(M −1 ·) b(0) |detM | µ∈Ns µ!
∀ p ∈ Πk .
(2.20)
0
In particular, one has X p(β)b(· − M β) = 0 ∀ p ∈ Πk
⇐⇒
Dµbb(2πβ) = 0 ∀ |µ| 6 k, β ∈ (M T )−1 Zs . (2.21)
β∈Zs
Now by Proposition 2.4, it is straightforward to see that 6) is true since Pk,y = {[p, 0, . . . , 0] : p ∈ Πk } and for all p ∈ Πk , hX i X Sa,M [p, 0, . . . , 0] = |detM | p(β)a1,1 (· − M β), p(β)a1,2 (· − M β) = [Sa1,1 ,M p, 0, . . . , 0]. β∈Zs
β∈Zs
−1 By (2.20) and a d ·) ∈ Πdeg(p)−1 for all p ∈ Πk . 1,1 (0) = 1, we have Sa1,1 ,M p − p(M
By a simple computation, for p ∈ (Πk )1×r and v ∈ (`0 (Zs ))r×1 , we have X X p(α)Ta,M v(−α) = |detM | p(α)a(−M α − β)v(β) α∈Zs
α,β∈Zs
= |detM | =
X
X X
β∈Zs
p(α)a(β − M α)v(−β)
α∈Zs
Sa,M p(β)v(−β).
β∈Zs
Now 7) follows directly from 6) and the above identity.
3
Initial Function Vectors in a Cascade Algorithm
In this section, we shall study the initial function vectors in a cascade algorithm. Results in this section will be useful in investigating vector cascade algorithms and refinable function vectors. In the following, we study some necessary conditions for initial function vectors in a cascade algorithm. By expanding a trigonometric polynomial by its Taylor series, we see that the condition in (2.7) is equivalent to saying that yb(M T ξ)b a(ξ) = yb(ξ) + o(|ξ|k ), 11
ξ → 0.
All the results and proofs involving y in this paper depend only on the values Dµ yb(0), |µ| 6 k. So, when Dµb ye(0) = Dµ yb(0) for all |µ| 6 k, we can replace y by ye. Throughout this paper, we assume that (2.7) holds. The assumption in (2.7) is justified by the following result which generalizes [4, Lemma 2.1]. Proposition 3.1 Let M be an s × s isotropic dilation matrix. Let φ be a nonzero compactly b T ξ) = b b supported M -refinable function vector satisfying φ(M a(ξ)φ(ξ). Let f be an r × 1 column k s b vector of compactly supported functions in Wp (R ) such that span{f (2πβ) : β ∈ Zs } = Cr×1 . If the sequence Qna,M f (n ∈ N) converges to a nonzero function vector f∞ in the Sobolev space (Wpk (Rs ))r×1 , where the cascade operator Qa,M is defined in (1.2), then 1 is a simple eigenvalue of b a(0) and all other eigenvalues of b a(0) are less that ρ(M )−k in modulus.
(3.1)
b = 1 and (2.7) holds. Moreover, up Consequently, there exists y ∈ (`0 (Zs ))1×r such that yb(0)φ(0) µ to a scalar multiplication, the values D yb(0), |µ| 6 k satisfying (2.7) are uniquely determined by the mask a. Proof: Let fn := Qna,M f . Then fbn ((M T )n ξ) = b an (ξ)fb(ξ) and limn→∞ kfn − f∞ k(Wpk (Rs ))r×1 = 0, where an is defined in (2.6). Note that b an (0) = [b a(0)]n . It follows from Lemma 2.3 that lim ρ(M )kn [b a(0)]n fb(2πβ) = lim ρ(M )kn fbn ((M T )n 2πβ) = 0 n→∞
n→∞
∀ β ∈ Zs \{0}.
(3.2)
We claim that fb(0) 6= 0. Otherwise, combining limn→∞ ρ(M )kn [b a(0)]n fb(0) = 0, (3.2) and the assumption span{fb(2πβ) : β ∈ Zs } = Cr×1 , we deduce that ρ(b a(0)) < ρ(M )−k 6 1. It follows that fb∞ (ξ) = limn→∞ b an ((M T )−n ξ)fb∞ ((M T )−n ξ) = 0 which is a contradiction to our assumption f∞ 6= 0. Now it is easy to verify that (3.1) holds. b 6= 0. Note that (3.1) implies that [⊗j M ]⊗b By (3.1) and φ 6= 0, we must have φ(0) a(0)T −Isj +r is invertible for every j = 1, . . . , k. By Lemma 2.2 and the fact that 1 is a simple eigenvalue of b a(0), there is a unique solution {Dµ yb(0) : |µ| 6 k} to the system of linear equations in (2.7) b = 1. with yb(0) satisfying the additional condition yb(0)φ(0) For initial function vectors in a vector cascade algorithm, we have the following result. Proposition 3.2 Let φ be a nonzero compactly supported M -refinable function vector satisfying b T ξ) = b b b φ(M a(ξ)φ(ξ). Assume that there exists y ∈ (`0 (Zs ))1×r such that yb(0)φ(0) = 1 and s r×1 k (2.7) holds. For any compactly supported function vector f ∈ (Wp (R )) , if the sequence n \ f (0) = 1, then Qna,M f (n ∈ N) converges in the Sobolev space (Wpk (Rs ))r×1 and limn→∞ yb(0)Q a,M
yb(0)fb(0) = 1
and
Dµ [b y (·)fb(·)](2πβ) = 0
∀ |µ| 6 k, β ∈ Zs \{0}.
(3.3)
In particular, if φ ∈ (Wpk (Rs ))r×1 and (3.1) holds, then b Dµ [b y (·)φ(·)](0) = δ(µ)
b and Dµ [b y (·)φ(·)](2πβ) =0 12
∀ |µ| 6 k, β ∈ Zs \{0}.
(3.4)
Proof: Let N = M T . Define fn by n \ fbn (ξ) := yb(ξ)Q b(ξ)b an (N −n ξ)fb(N −n ξ). a,M f (ξ) = y
By (2.7) and the Leibniz differentiation formula, for β ∈ Zs and |µ| 6 k, we have Dµ [fbn (N n ·)](2πβ) = Dµ [b y (N n ·)b an (·)fb(·)](2πβ) = Dµ [b y (N n ·)b a(N n−1 ·) · · · b a(N ·)b a(·)fb(·)](2πβ) = Dµ [b y (N n−1 ·)b a(N n−2 ·) · · · b a(N ·)b a(·)fb(·)](2πβ) . = .. = Dµ [b y (N ·)b a(·)fb(·)](2πβ) = Dµ [b y (·)fb(·)](2πβ). Since the sequence Qna,M f converges in (Wpk (Rs ))r×1 , we deduce that the sequence fn converges in Wpk (Rs ). By Lemma 2.3, we conclude that Dµ [b y (·)fb(·)](2πβ) = lim Dµ [fbn (N n ·)](2πβ) = 0 n→∞
∀ |µ| 6 k, β ∈ Zs \{0}.
n \ So, (3.3) holds since yb(0)fb(0) = yb(0)Q b(0)b a(0) = yb(0). a,M f (0) = 1 by y
b When φ ∈ (Wpk (Rs ))r×1 , take f = φ. Then Qna,M φ = φ. It follows that Dµ [b y (·)φ(·)](2πβ) =0 s T b b for all |µ| 6 k and β ∈ Z \{0}. Since φ(M ξ) = b a(ξ)φ(ξ), by (2.7), we have b T ·)](0) = Dµ [b b b Dµ [b y (M T ·)φ(M y (M T ·)b a(·)φ(·)](0) = Dµ [b y (·)φ(·)](0)
∀ |µ| 6 k.
Since M is a dilation matrix, by Lemma 2.2, the above system of linear equations has a unique b solution for {Dµ [b y (·)φ(·)](0) : 0 < |µ| 6 k}. Obviously, the above system of linear equations µ b holds with D [b y (·)φ(·)](0) = δ(µ), |µ| 6 k which completes the proof. For a compactly supported function vector f ∈ (Wpk (Rs ))r×1 , we say that f satisfies the moment conditions of order k + 1 with respect to y if (3.3) holds. It is well known that (3.4) is equivalent to X [p(β − iDT )b y ](0)φ(· − β) = (p ∗ y) ∗ φ = p ∀ p ∈ Πk . β∈Zs
Similarly, (3.3) is equivalent to X p− [p(β − iDT )b y ](0)f (· − β) = p − (p ∗ y) ∗ f ∈ Πdeg(p)−1
∀ p ∈ Πk ,
β∈Zs
where deg(p) denotes the total degree of p. The following lemma will be needed later. Lemma 3.3 Let {cµ : |µ| 6 k, µ ∈ Ns0 } be arbitrarily given complex numbers such that c0 = 0. For any ε > 0, there exists c ∈ `0 (Zs ) such that kb c(·)kL∞ < ε and Dµb c(0) = cµ for all |µ| 6 k. 13
Proof: We prove the claim by induction. When k = 0, the claim holds by setting c = 0. Suppose that the claim holds for k = j − 1 with j > 1. By induction hypothesis, there exists a ∈ `0 (Zs ) such that kb a(·)kL∞ < ε/2 and Dµb a(0) = cµ for all |µ| 6 j − 1. It is easy to see that there exists s µb b ∈ `0 (Z ) such that D b(0) = 0 for all |µ| 6 j − 1 and Dµbb(0) = cµ − Dµb a(0) for all |µ| = j. −jb For a large enough integer n, we see that kn b(n·)kL∞ < ε/2. Set b c(ξ) = b a(ξ) + n−jbb(nξ). Then c ∈ `0 (Zs ) is desired since it is easy to verify that kb c(·)kL∞ < ε and Dµb c(0) = cµ for all |µ| 6 j. So, the claim holds for k = j. The proof is completed by induction. For an r × 1 vector f of compactly supported distributions on Rs , we say that the shifts of f are linearly independent if span{fb(ξ + 2πβ) : β ∈ Zs } = Cr×1 for all ξ ∈ Cr×1 . Therefore, if the shifts of f are linearly independent, then the shifts of f are stable. Let f be a compactly supported function vector in (Lp (Rs ))r×1 . It is well known (see [35]) that the shifts of f are stable if and only if there exist two positive constants C1 and C2 such that ° °X ° ° 6 C2 kvk(`p (Zs ))1×r ∀ v ∈ (`p (Zs ))1×r . (3.5) v(β)f (· − β)° C1 kvk(`p (Zs ))1×r 6 ° Lp (Rs )
β∈Zs
Note that when f ∈ (L_p(R^s))^{r×1} and f is compactly supported, it can be easily proved that the upper bound in (3.5) holds for some positive constant C₂. Before proceeding further, let us answer the question in [4] (see Q1 in Section 1 for more detail) with the following stronger result.

Proposition 3.4 Let y ∈ (ℓ_0(Z^s))^{1×r} with ŷ(0) ≠ 0. Let b_µ (0 < |µ| ≤ k) be any complex numbers. Then there is an r × 1 compactly supported function vector f in (C^k(R^s))^{r×1} such that

1) f satisfies the moment conditions of order k + 1 in (3.3) with respect to y;

2) the shifts of f are stable;

3) ŷ(0)f̂(0) = 1 and D^µ[ŷ(·)f̂(·)](0) = b_µ for all 0 < |µ| ≤ k.

Proof: By Proposition 2.4, it suffices to prove the claim for ŷ(ξ) = [ŷ₁(ξ), 0, …, 0] with ŷ₁(0) = 1. It is well known ([10]) that there is a univariate compactly supported orthogonal (r+1)-refinable function φ ∈ C^k(R) and there exist compactly supported C^k wavelet functions ψ₁, …, ψ_r such that {φ(· − β), ψ_j(· − β) : j = 1, …, r; β ∈ Z} is an orthogonal system. By Proposition 3.2, φ̂(0) = 1 and D^µ φ̂(2πβ) = 0 for all 0 ≤ µ ≤ k and β ∈ Z\{0}. Now we take the tensor product in R^s. So we have an (r+1)I_s-refinable function Φ and wavelet functions Ψ₁, …, Ψ_{(r+1)^s−1} such that their shifts are orthogonal. It is clear that Φ̂(0) = 1 and D^µ Φ̂(2πβ) = 0 for all |µ| ≤ k and β ∈ Z^s\{0}. Let c₀ = 1 and recursively define

    c_µ := b_µ − Σ_{0≤ν<µ} (µ!/(ν!(µ−ν)!)) D^{µ−ν}[ŷ₁(·)Φ̂(·)](0) c_ν,
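To make the moment conditions concrete, here is a minimal scalar sketch (my own illustration, not taken from the paper; assumptions: s = 1, r = 1, y = δ). The centered hat function satisfies the Strang–Fix conditions of order 2, so its integer shifts reproduce Π₁ through the identity (p ∗ y) ∗ f = p discussed above.

```python
# Scalar sketch of the moment conditions (assumption: s = 1, r = 1, y = delta).
# The centered hat function has Fourier transform equal to 1 at 0 and with
# vanishing value and first derivative at 2*pi*beta, beta != 0, so its shifts
# reproduce all polynomials of degree at most 1.
def hat(x):
    """Centered piecewise-linear B-spline, support [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def semi_discrete(p, x, radius=60):
    """Evaluate sum_{beta} p(beta) * hat(x - beta), i.e. (p * y) * f with y = delta."""
    return sum(p(beta) * hat(x - beta) for beta in range(-radius, radius + 1))

for x in (0.0, 0.3, -1.7, 2.25):
    assert abs(semi_discrete(lambda t: 1.0, x) - 1.0) < 1e-12   # reproduces constants
    assert abs(semi_discrete(lambda t: t, x) - x) < 1e-12       # reproduces degree 1
```

For an initial function vector failing these conditions, the cascade iterates below generally fail to converge in the corresponding Sobolev norm, which is why Proposition 3.4 constructs initial vectors satisfying them.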
Let Γ_M be a complete set of representatives of the distinct cosets of Z^s/MZ^s. To relate the quantity ρ_k(a; M, p, y) to the ℓ_p-norm joint spectral radius, we introduce the linear operators A_ε (ε ∈ Γ_M) on (ℓ_0(Z^s))^{r×1} as follows:

    A_ε v(α) := Σ_{β∈Z^s} a(Mα − β + ε) v(β),    v ∈ (ℓ_0(Z^s))^{r×1}, α ∈ Z^s.    (4.5)

It was proved in [24, Lemma 2.3] that if a is finitely supported, then for any finitely supported sequence v on Z^s, there exists a finite-dimensional subspace V(v) of (ℓ_0(Z^s))^{r×1} such that V(v) contains v and V(v) is the smallest subspace of (ℓ_0(Z^s))^{r×1} which is invariant under the operators A_ε, ε ∈ Γ_M. We call such V(v) the minimal {A_ε : ε ∈ Γ_M} invariant subspace generated by v. Let A := {A_ε|_W : ε ∈ Γ_M}, where W is the minimal {A_ε : ε ∈ Γ_M} invariant subspace generated by B_{k,y}, where B_{k,y} is defined in (2.15). By [24, Lemma 2.2 and Lemma 2.4], there exists a positive constant C such that

    C^{−1} ‖A^n‖_p ≤ max{‖a_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}} : v ∈ B_{k,y}} ≤ C ‖A^n‖_p    ∀ n ∈ N.    (4.6)
Consequently, ρ_k(a; M, p, y) = ρ_p(A). Moreover, since (see [24])

    |detM|^{n(1/p−1/q)} ‖A^n‖_q ≤ ‖A^n‖_p ≤ ‖A^n‖_q    ∀ 1 ≤ q ≤ p ≤ ∞, n ∈ N,    (4.7)

it follows that |detM|^{1/q−1/p} ρ_k(a; M, p, y) ≤ ρ_k(a; M, q, y) ≤ ρ_k(a; M, p, y) for 1 ≤ p ≤ q ≤ ∞ and k ∈ N. In other words, we have

    ν_p(a; M) ≥ ν_q(a; M) ≥ ν_p(a; M) + (1/q − 1/p) log_{ρ(M)} |detM|,    1 ≤ p ≤ q ≤ ∞.    (4.8)
Now we have the following result, which generalizes [22, Proposition 2.7].

Proposition 4.1 Let M be an s × s dilation matrix. Let a be a finitely supported mask on Z^s with multiplicity r. Let v₁, …, v_J ∈ (ℓ_0(Z^s))^{r×1}. Then for any ρ > 0 and 1 ≤ p ≤ ∞,

    lim_{n→∞} ρ^n ‖a_n ∗ v_j‖_{(ℓ_p(Z^s))^{r×1}} = 0    ∀ j = 1, …, J    (4.9)

if and only if there exist 0 < ρ₀ < 1 and a positive constant C such that

    ‖a_n ∗ v_j‖_{(ℓ_p(Z^s))^{r×1}} ≤ C ρ^{−n} ρ₀^n    ∀ n ∈ N, j = 1, …, J.    (4.10)
Moreover, assume that (2.7) holds for some k ∈ N₀ and a sequence y ∈ (ℓ_0(Z^s))^{1×r} with ŷ(0) ≠ 0. If span{v̂_j(2πβ) : j = 1, …, J} = C^{r×1} for all β ∈ (M^T)^{−1}Z^s\Z^s and (4.9) holds with ρ = |detM|^{1−1/p} ρ(M)^k, then the mask a must satisfy the sum rules of order at least k + 1 in (2.8) with the sequence y. In particular, if ρ_j(a; M, p, ỹ) < |detM|^{1/p−1} ρ(M)^{−k} for some 1 ≤ p ≤ ∞, j ∈ N₀ and ỹ ∈ (ℓ_0(Z^s))^{1×r} with ŷ̃(0) ≠ 0, then a must satisfy the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y, and one must have j ≥ k and V_{k,ỹ} = V_{k,y}.

Proof: Let A_ε be defined as in (4.5). Let A := {A_ε|_W : ε ∈ Γ_M}, where W is the minimal {A_ε : ε ∈ Γ_M} invariant subspace generated by {v_j : j = 1, …, J}. By [24, Lemma 2.2 and Lemma 2.4], there exists a positive constant C such that

    C^{−1} ‖A^n‖_p ≤ max{‖a_n ∗ v_j‖_{(ℓ_p(Z^s))^{r×1}} : j = 1, …, J} ≤ C ‖A^n‖_p    ∀ n ∈ N.    (4.11)
When (4.9) holds, it follows from (4.11) that lim_{n→∞} ρ^n ‖A^n‖_p = 0. It follows from (4.4) that ρ_p(A) = inf_{n∈N} ‖A^n‖_p^{1/n} < ρ^{−1}. Consequently, there exist 0 < ρ₀ < 1 and C₁ > 0 such that ρ_p(A) < ρ^{−1}ρ₀ < ρ^{−1} and ‖A^n‖_p ≤ C₁ ρ^{−n} ρ₀^n for all n ∈ N. It follows from (4.11) that (4.10) holds. Obviously, (4.10) implies (4.9).

When span{v̂_j(2πβ) : j = 1, …, J} = C^{r×1} for all β ∈ (M^T)^{−1}Z^s\Z^s and (4.9) holds with ρ = |detM|^{1−1/p} ρ(M)^k, we show by induction that a must satisfy the sum rules of order k + 1. Since (4.9) holds with ρ = |detM|^{1−1/p} ρ(M)^k, by (4.7) and Hölder's inequality, we have

    lim_{n→∞} ρ(M)^{nk} ‖a_n ∗ v_j‖_{(ℓ_1(Z^s))^{r×1}} = 0    ∀ j = 1, …, J.

So there exist 0 < ρ₀ < 1 and C > 0 such that

    ‖a_n ∗ v_j‖_{(ℓ_1(Z^s))^{r×1}} ≤ C ρ(M)^{−nk} ρ₀^n    ∀ n ∈ N, j = 1, …, J.    (4.12)
Obviously, a satisfies the sum rules of order 0, since (2.8) is trivially true. Let N := M^T. Since a, y and v_j, j = 1, …, J are finitely supported sequences, we may assume that all of them are supported on a set {α ∈ Z^s : ‖α‖ ≤ C₁} for some constant C₁. Now it is easy to see that the degree of the trigonometric polynomial ŷ(N^n ξ)(a_n ∗ v_j)^∧(ξ) = ŷ(N^n ξ) â_n(ξ) v̂_j(ξ) is no larger than C₁ + C₁ Σ_{j=0}^{n} ‖N^j‖ ≤ C₂ ‖N^n‖ for all n ∈ N, where ‖·‖ denotes the operator norm of a matrix and C₂ := C₁ + C₁ Σ_{j=0}^{∞} ‖N^{−j}‖ < ∞.

Suppose that a satisfies the sum rules of order L in (2.7) and (2.8) with y for some 0 ≤ L < k + 1. By (2.7), for all β ∈ N^{−1}Z^s\Z^s and |µ| = L, we have

    D^µ[ŷ(N^n·) â_n(·) v̂_j(·)](2πβ) = D^µ[ŷ(N^n·) â(N^{n−1}·) ⋯ â(·) v̂_j(·)](2πβ)
        = D^µ[ŷ(N^{n−1}·) â(N^{n−2}·) ⋯ â(·) v̂_j(·)](2πβ)
        = D^µ[ŷ(N·) â(·) v̂_j(·)](2πβ)
        = D^µ[ŷ(N·) â(·)](2πβ) v̂_j(2πβ) + Σ_{0≤ν<µ} (µ!/(ν!(µ−ν)!)) D^ν[ŷ(N·) â(·)](2πβ) D^{µ−ν} v̂_j(2πβ).

Combining this identity with (4.12) and the above degree estimate, one deduces that D^µ[ŷ(N·)â(·)](2πβ) v̂_j(2πβ) = 0 for all j = 1, …, J. Since span{v̂_j(2πβ) : j = 1, …, J} = C^{r×1}, it follows that D^µ[ŷ(N·)â(·)](2πβ) = 0; that is, a satisfies the sum rules of order L + 1 with y. This completes the induction, so a satisfies the sum rules of order k + 1 with the sequence y.

Now we prove that j ≥ k and V_{k,ỹ} = V_{k,y}. By Proposition 2.4, without loss of generality, we can assume that ŷ(ξ) = [ŷ₁(ξ), 0, …, 0] with ŷ₁(0) = 1. So V_{k,y} = V_k × ℓ_0(Z^s) × ⋯ × ℓ_0(Z^s) with r − 1 copies of ℓ_0(Z^s). It is trivial to see that {[v, 0, …, 0]^T : v ∈ V_j} ⊆ V_{j,ỹ} ⊆ V_{k,y}, which implies V_j ⊆ V_k. Hence, by the definition of V_k in (2.18), we must have j ≥ k. Denote [ŷ̃₁(ξ), …, ŷ̃_r(ξ)] = ŷ̃(ξ). In the following, we show that ŷ̃₁(0) ≠ 0 and

    D^µ ŷ̃_ℓ(0) = 0    ∀ |µ| ≤ k, ℓ = 2, …, r.    (4.14)
Suppose that ŷ̃_ℓ(0) ≠ 0 for some 2 ≤ ℓ ≤ r, say ŷ̃₂(0) ≠ 0. There exists v₂ ∈ ℓ_0(Z^s) such that

    D^µ v̂₂(0) = −D^µ[ŷ̃₁(·)/ŷ̃₂(·)](0)    ∀ |µ| ≤ j,

that is,

    D^µ[ŷ̃₁(·) + v̂₂(·) ŷ̃₂(·)](0) = 0    ∀ |µ| ≤ j.

So [δ, v₂, 0, …, 0]^T ∈ V_{j,ỹ} ⊆ V_{k,y}, which is a contradiction since δ ∉ V_k. Therefore, we conclude that ŷ̃_ℓ(0) = 0 for all ℓ = 2, …, r. Since ŷ̃(0) ≠ 0, we must have ŷ̃₁(0) ≠ 0.
Since ŷ̃₁(0) ≠ 0, there exists v₁ ∈ ℓ_0(Z^s) such that

    D^µ v̂₁(0) = −D^µ[ŷ̃₂(·)/ŷ̃₁(·)](0)    ∀ |µ| ≤ j,

that is,

    D^µ[v̂₁(·) ŷ̃₁(·) + ŷ̃₂(·)](0) = 0    ∀ |µ| ≤ j.    (4.15)

So [v₁, δ, 0, …, 0]^T ∈ V_{j,ỹ} ⊆ V_{k,y}, which implies v₁ ∈ V_k; that is, D^µ v̂₁(0) = 0 for all |µ| ≤ k. Since j ≥ k, it follows from (4.15) that D^µ ŷ̃₂(0) = −D^µ[v̂₁(·)ŷ̃₁(·)](0) = 0 for all |µ| ≤ k. Similarly, we can prove that D^µ ŷ̃_ℓ(0) = 0 for all |µ| ≤ k and ℓ = 2, …, r. So (4.14) holds. Now it follows directly from (4.14) that V_{k,ỹ} = V_{k,y}.

By an argument similar to that of [19, Theorem 3.1], one can prove that if a satisfies the sum rules of order k + 1 with some sequence y ∈ (ℓ_0(Z^s))^{1×r}, then

    ρ_j(a; M, p, y) = max{ρ_k(a; M, p, y), |detM|^{1/p−1} ρ(M^{−1})^{j+1}}    ∀ 1 ≤ p ≤ ∞, 0 ≤ j ≤ k.
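The quantities ρ_k(a; M, p, y) can be approximated from finitely many products, since ‖a_n ∗ v‖^{1/n} converges to them. The following sketch (my own illustration, not from the paper; assumptions: s = 1, M = 2, p = ∞, cubic B-spline mask, and the hypothesized normalization ν_∞(a; 2) = −1 − log₂ ρ(a; 2, ∞), which is consistent with the threshold ρ_k < |detM|^{1/p−1} ρ(M)^{−k} appearing above) estimates the smoothness exponent; for this mask the true value is ν_∞ = 3.

```python
from math import comb, log2

def conv(u, w):
    out = {}
    for i, ui in u.items():
        for j, wj in w.items():
            out[i + j] = out.get(i + j, 0.0) + ui * wj
    return out

def upsample(u):
    return {2 * i: c for i, c in u.items()}

a = {k: comb(4, k) / 16.0 for k in range(5)}   # cubic B-spline mask, ahat(0) = 1

grad = {0: 1.0, 1: -1.0}
g4 = grad
for _ in range(3):
    g4 = conv(g4, grad)                        # fourth-order difference of delta

an = dict(a)
for n in range(2, 9):
    an = conv(upsample(an), a)                 # a_8 for dilation M = 2

# a_n * nabla^4(delta) has symbol (1 - z^{2^n})^4 / 16^n for this mask,
# so its sup-norm is exactly 6 * 16^{-n}.
peak = max(abs(c) for c in conv(an, g4).values())
assert abs(peak - 6.0 * 16.0 ** (-8)) < 1e-15

rho_est = peak ** (1.0 / 8)    # finite-n proxy; tends to 1/16 as n grows
nu_est = -log2(rho_est) - 1.0  # hypothesized normalization; tends to 3
assert 2.6 < nu_est < 3.0
```

The estimate approaches ν_∞ = 3 from below as n grows, reflecting that ‖a_n ∗ v‖^{1/n} is only asymptotically equal to the joint spectral radius.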
In order to investigate vector cascade algorithms in Sobolev spaces, we need the following result, which is essentially known in approximation theory (see Jia [31] and cf. [6]).

Lemma 4.2 Let M be an s × s isotropic dilation matrix. Let g be a compactly supported function in W_p^k(R^s) (when p = ∞, replace W_p^k(R^s) by C^k(R^s)) such that ĝ(0) ≠ 0 and D^µ ĝ(2πβ) = 0 for all |µ| ≤ k and β ∈ Z^s\{0}. Then for any compactly supported function f ∈ W_p^k(R^s),

    inf_{v∈ℓ_0(Z^s)} ‖f − Σ_{β∈Z^s} v(β) g(M^n · −β)‖_{L_p(R^s)} ≤ C ρ(M^{−n})^k Σ_{µ∈N_0^s, |µ|=k} ω_p(D^µ f, ρ(M^{−n}))    ∀ n ∈ N,
where C > 0 is independent of f and n, and ω_p(f, h) := sup_{‖t‖≤h} ‖f − f(· − t)‖_{L_p(R^s)}, h ≥ 0, denotes the modulus of continuity.

Now we have the main result in this section, which characterizes the convergence of a vector cascade algorithm in a Sobolev space.

Theorem 4.3 Let M be an s × s isotropic dilation matrix and Γ_M be a complete set of representatives of the distinct cosets of Z^s/MZ^s. Let a be a finitely supported matrix mask on Z^s with multiplicity r. Let φ be a nonzero r × 1 column vector of compactly supported distributions satisfying φ̂(M^T ξ) = â(ξ)φ̂(ξ). Assume that there is a sequence y ∈ (ℓ_0(Z^s))^{1×r} such that ŷ(0)φ̂(0) = 1 and (2.7) holds for a nonnegative integer k. Then the following statements are equivalent:

1) For every compactly supported function vector f ∈ (W_p^k(R^s))^{r×1} such that f satisfies the moment conditions of order k + 1 with respect to y, the cascade algorithm with mask a, dilation matrix M and initial function vector f converges in (W_p^k(R^s))^{r×1}; that is, Q_{a,M}^n f (n ∈ N) is a Cauchy sequence in the Sobolev space (W_p^k(R^s))^{r×1};

2) For some compactly supported function vector f ∈ (W_p^k(R^s))^{r×1} (when p = ∞, f is required to be in (C^k(R^s))^{r×1}) such that f satisfies the moment conditions of order k + 1 with respect to y and the shifts of f are stable (the existence of such an initial function vector f is guaranteed by Proposition 3.4), the cascade algorithm with mask a, dilation matrix M and initial function vector f converges in (W_p^k(R^s))^{r×1};

3) lim_{n→∞} |detM|^{(1−1/p)n} ρ(M)^{nk} ‖a_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}} = 0 for all v ∈ B_{k,y}, where a_n is defined in (2.6) and B_{k,y} is defined in (2.15);

4) lim_{n→∞} |detM|^{(1−1/p)n} ρ(M)^{nk} ‖a_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}} = 0 for all v ∈ V_{k,y}, where V_{k,y} is defined in (2.12);

5) ρ_k(a; M, p, y) < |detM|^{1/p−1} ρ(M)^{−k}, where ρ_k(a; M, p, y) is defined in (4.1);

6) ρ_k(a; M, p, y) < |detM|^{1/p−1} ρ(M)^{−k} and the mask a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y;

7) ρ(a; M, p) < |detM|^{1/p−1} ρ(M)^{−k}, where ρ(a; M, p) is defined in (4.2);

8) ν_p(a; M) > k, where ν_p(a; M) is defined in (4.3);

9) ρ_p({A_ε|_W : ε ∈ Γ_M}) < |detM|^{1/p−1} ρ(M)^{−k}, where the operators A_ε are defined in (4.5) and W is the minimal {A_ε : ε ∈ Γ_M} invariant subspace generated by B_{k,y};

10) ρ_p({A_ε|_{V_{k,y}∩(ℓ(K))^{r×1}} : ε ∈ Γ_M}) < |detM|^{1/p−1} ρ(M)^{−k}, where K := Z^s ∩ Σ_{j=1}^{∞} M^{−j} K₀ and K₀ := ({0} ∪ {α ∈ Z^s : a(α) ≠ 0}) − Γ_M + {α ∈ Z^s : |α| ≤ 1}.

Moreover, any of the above statements implies that (3.1) holds; that is, 1 is a simple eigenvalue of â(0) and all other eigenvalues of â(0) are less than |detM|^{−k/s} in modulus. Consequently, the sequence Q_{a,M}^n f converges to φ in (W_p^k(R^s))^{r×1}.

Proof: Obviously, 1) ⇒ 2). Suppose 2) holds. By Proposition 2.4, without loss of generality, we assume ŷ(ξ) = [ŷ₁(ξ), 0, …, 0]. Let f_n := Q_{a,M}^n f. By the assumption in 2), lim_{n→∞} ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}} = 0 for some f_∞ ∈ (W_p^k(R^s))^{r×1}. When p = ∞, we must have f_∞ ∈ (C^k(R^s))^{r×1} since f ∈ (C^k(R^s))^{r×1}. Let m := |detM|. By induction,

    f_n = m^n Σ_{β∈Z^s} a_n(β) f(M^n · −β)    ∀ n ∈ N₀.    (4.16)
Therefore, for µ = (µ₁, …, µ_s) ∈ N_0^s, we have

    ∇^{µ,n} f_n = m^n Σ_{β∈Z^s} [∇^µ a_n](β) f(M^n · −β)  with  ∇^{µ,n} := ∇^{µ₁}_{M^{−n}e₁} ⋯ ∇^{µ_s}_{M^{−n}e_s}    ∀ n ∈ N₀.    (4.17)

Since the shifts of f are stable, from (4.17) there exists a positive constant C depending only on f such that

    m^{n−n/p} ‖∇^µ a_n‖_{(ℓ_p(Z^s))^{r×r}} ≤ C ‖∇^{µ,n} f_n‖_{(L_p(R^s))^{r×1}} ≤ C ‖∇^{µ,n} f_∞‖_{(L_p(R^s))^{r×1}} + C ‖∇^{µ,n}(f_n − f_∞)‖_{(L_p(R^s))^{r×1}}.

Note that all the functions f_n and f_∞ are supported on [−L, L]^s for some integer L independent of n. Since M is isotropic, there is a constant C₁ independent of n such that

    ‖∇^{µ,n}(f_n − f_∞)‖_{(L_p(R^s))^{r×1}} ≤ C₁ m^{−nk/s} ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}}    ∀ |µ| = k + 1.
Since f_∞ ∈ (W_p^k(R^s))^{r×1} when 1 ≤ p < ∞ and f_∞ ∈ (C^k(R^s))^{r×1} when p = ∞, we have lim_{n→∞} m^{nk/s} ‖∇^{µ,n} f_∞‖_{(L_p(R^s))^{r×1}} = 0 for all |µ| = k + 1. Therefore, by the assumption lim_{n→∞} ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}} = 0, we deduce that

    lim_{n→∞} m^{n(k/s+1−1/p)} ‖∇^µ a_n‖_{(ℓ_p(Z^s))^{r×r}} = 0    ∀ |µ| = k + 1, µ ∈ N_0^s.

Since [a_n ∗ ∇^µ(δe₁)](β) is the first column in the matrix [∇^µ a_n](β), in particular we have

    lim_{n→∞} m^{n(k/s+1−1/p)} ‖a_n ∗ ∇^µ(δe₁)‖_{(ℓ_p(Z^s))^{r×1}} = 0    ∀ |µ| = k + 1, µ ∈ N_0^s.    (4.18)
Denote by g := e₁^T f the first component of f. Since we assume that ŷ(ξ) = [ŷ₁(ξ), 0, …, 0] and f satisfies the moment conditions of order k + 1 with respect to y, we have ĝ(0) ≠ 0 and

    D^µ ĝ(2πβ) = 0    ∀ |µ| ≤ k, β ∈ Z^s\{0}.
Since f_n ∈ (W_p^k(R^s))^{r×1}, by Lemma 4.2 there exists v_n ∈ (ℓ_0(Z^s))^{r×1} such that

    ‖f_n − m^n Σ_{β∈Z^s} v_n(β) g(M^n · −β)‖_{(L_p(R^s))^{r×1}} ≤ C m^{−nk/s} Σ_{|µ|=k} ω_p(D^µ f_n, ρ(M^{−n})),    (4.19)
where C is a constant independent of f_n and n. By (4.16), we have

    g_n := f_n − m^n Σ_{β∈Z^s} v_n(β) g(M^n · −β) = m^n Σ_{β∈Z^s} (a_n − [v_n, 0, …, 0])(β) f(M^n · −β).

Since the shifts of f are stable, there exists a positive constant C₁ such that

    m^{n(k/s+1−1/p)} ‖a_n − [v_n, 0, …, 0]‖_{(ℓ_p(Z^s))^{r×r}} ≤ C₁ m^{nk/s} ‖g_n‖_{(L_p(R^s))^{r×1}} ≤ C C₁ Σ_{|µ|=k} ω_p(D^µ f_n, ρ(M^{−n})).

Note that

    ω_p(D^µ f_n, ρ(M^{−n})) ≤ ω_p(D^µ f_∞, ρ(M^{−n})) + ω_p(D^µ(f_n − f_∞), ρ(M^{−n})) ≤ ω_p(D^µ f_∞, ρ(M^{−n})) + 2 ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}}.

Since D^µ f_∞ ∈ (L_p(R^s))^{r×1} (when p = ∞, D^µ f_∞ ∈ (C(R^s))^{r×1}), it follows from the above inequality that lim_{n→∞} ω_p(D^µ f_n, ρ(M^{−n})) = 0. Consequently,

    lim_{n→∞} m^{n(k/s+1−1/p)} ‖a_n − [v_n, 0, …, 0]‖_{(ℓ_p(Z^s))^{r×r}} = 0.
Since [a_n ∗ (δe_j)](β) is the j-th column in the matrix (a_n − [v_n, 0, …, 0])(β) for j = 2, …, r, in particular we have

    lim_{n→∞} m^{n(k/s+1−1/p)} ‖a_n ∗ (δe_j)‖_{(ℓ_p(Z^s))^{r×1}} = 0    ∀ j = 2, …, r.    (4.20)

Since {∇^µ(δe₁) : |µ| = k + 1} ∪ {δe_j : j = 2, …, r} = B_{k,y} generates V_{k,y} and ρ(M) = |detM|^{1/s} for an isotropic matrix M, it follows that 3) holds.
3) ⇒ 4) ⇒ 5) are trivial. By Proposition 4.1, 5) implies that a satisfies the sum rules of order k + 1 with the sequence y, so 5) ⇒ 6). By the definitions of ρ(a; M, p) and ν_p(a; M) in (4.2) and (4.3), it is obvious that 6) ⇒ 7) and 7) ⇔ 8). The equivalence relations between 6), 9) and 10) are standard results on the ℓ_p-norm joint spectral radius.

In the following, we show that 6) ⇒ 1). Since a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y, by (Q_{a,M}f)^∧(ξ) = â((M^T)^{−1}ξ) f̂((M^T)^{−1}ξ), it is easy to verify that Q_{a,M}f also satisfies the moment conditions of order k + 1 with respect to y. By the assumption in 6), there exist two constants 0 < ρ < 1 and C > 0 such that

    ‖a_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}} ≤ C m^{n(1/p−1)} ρ(M)^{−kn} ρ^n    ∀ v ∈ B_{k,y}, n ∈ N.    (4.21)
Let g = Q_{a,M}f − f. By Corollary 3.6, we have

    [⊗^k D] ⊗ g = Σ_{v∈B_{k,y}} v ∗ h_{k,v}

for some compactly supported function vectors h_{k,v} ∈ (L_p(R^s))^{1×s^k}. Since

    f_{n+1} − f_n = Q_{a,M}^n g = m^n Σ_{β∈Z^s} a_n(β) g(M^n · −β),
by Proposition 2.1, we have

    [⊗^k D] ⊗ [f_{n+1} − f_n] = m^n Σ_{β∈Z^s} a_n(β) ([⊗^k D] ⊗ g)(M^n · −β)(⊗^k M^n)
        = m^n Σ_{v∈B_{k,y}} Σ_{β∈Z^s} a_n(β) (v ∗ h_{k,v})(M^n · −β)(⊗^k M^n)
        = m^n Σ_{v∈B_{k,y}} Σ_{β∈Z^s} [a_n ∗ v](β) h_{k,v}(M^n · −β)(⊗^k M^n).
Since ρ(⊗^k M^n) = ρ(M)^{kn} < [ρ(M)^k ρ^{−1/2}]^n and all h_{k,v} ∈ (L_p(R^s))^{1×s^k} are compactly supported, there exist positive constants C₁ and C₂ such that

    ‖[⊗^k D] ⊗ [f_{n+1} − f_n]‖_{(L_p(R^s))^{r×s^k}}
        ≤ C₁ m^n (ρ(M)^k ρ^{−1/2})^n ‖Σ_{v∈B_{k,y}} Σ_{β∈Z^s} [a_n ∗ v](β) h_{k,v}(M^n · −β)‖_{(L_p(R^s))^{r×s^k}}
        ≤ C₁ C₂ m^{n(1−1/p)} ρ(M)^{kn} ρ^{−n/2} Σ_{v∈B_{k,y}} ‖a_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}}.

It follows from (4.21) that

    ‖[⊗^k D] ⊗ [f_{n+1} − f_n]‖_{(L_p(R^s))^{r×s^k}} ≤ C C₁ C₂ (#B_{k,y}) ρ^{n/2}    ∀ n ∈ N.
Thus, [⊗^k D] ⊗ f_n is a Cauchy sequence in (L_p(R^s))^{r×s^k} since 0 < ρ < 1. Note that all the function vectors f_n are supported on a fixed compact set. Therefore, we must have

    lim_{n→∞} ‖Q_{a,M}^n f − f_∞‖_{(W_p^k(R^s))^{r×1}} = lim_{n→∞} ‖f_n − f_∞‖_{(W_p^k(R^s))^{r×1}} = 0

for some f_∞ ∈ (W_p^k(R^s))^{r×1}. When (3.1) holds, we must have f̂_∞ = φ̂ and consequently f_∞ = φ.

To complete the proof, let us show that 7) ⇒ 2). By the definition of ρ(a; M, p) in (4.2), we have ρ_J(a; M, p, ỹ) < |detM|^{1/p−1} ρ(M)^{−k} for some nonnegative integer J and ỹ ∈ (ℓ_0(Z^s))^{1×r} with ŷ̃(0) ≠ 0 such that a satisfies the sum rules of order J + 1 but not J + 2 in (2.7) and (2.8) with the sequence ỹ. By Proposition 3.4, there exists a function vector f ∈ (C^k(R^s))^{r×1} such that

(a) the function vector f satisfies the moment conditions of order J + 1 with respect to ỹ;

(b) the shifts of f are stable;

(c) D^µ[ŷ̃(·)f̂(·)](0) = δ(µ) for all |µ| ≤ J.

By Proposition 4.1, J ≥ k and V_{k,ỹ} = V_{k,y}, which implies that after appropriately scaling ỹ by a scalar constant, f must satisfy the moment conditions of order k + 1 with respect to y. It is easy to check that Q_{a,M}f also satisfies the above three conditions. Let g = Q_{a,M}f − f. Then by Theorem 3.5, D^µ g = Σ_{v∈B_{J,y}} v ∗ h_{µ,v}, |µ| ≤ k, for some compactly supported functions h_{µ,v} ∈ L_p(R^s). Now the same argument used to show 6) ⇒ 1) yields that the sequence Q_{a,M}^n f converges in (W_p^k(R^s))^{r×1}. So 2) holds.

Let us make some remarks here. The equivalence between 1) and 10) was obtained in [4]. The statements in 2), 3), 6), 7) and 8) of Theorem 4.3 are new. From the proof of Theorem 4.3, we see that without assuming that M is isotropic, the statements 3), 4), 5), 6), 9), 10) are equivalent to each other and any one of them implies 1). In fact, in the above proof, 2) ⇒ 3) and 7) ⇒ 2) are the only two places where we need the assumption that M is isotropic. A more technical argument shows that Theorem 4.3 holds when M is a dilation matrix with all its eigenvalues having the same modulus.

The L_p smoothness of a function f ∈ L_p(R^s) is measured by its L_p critical exponent ν_p(f), defined by

    ν_p(f) := sup{n + ν : ω_p(D^µ f, h) ≤ C_f h^ν ∀ |µ| = n, h > 0}.

When f = (f₁, …, f_r)^T, ν_p(f) := min{ν_p(f_j) : j = 1, …, r}. The same proof of Theorem 4.3 showing 2) ⇒ 3), together with [21, Theorems 3.1 and 3.3], yields ν_p(φ) ≥ ν_p(a; M). Moreover, when the shifts of φ are stable, one has (see [8] for p = 2 and r = 1)

    ν_p(a; M) ≤ ν_p(φ) ≤ ν_p(a; M) · ln ρ(M)/ln(ρ(M^{−1})^{−1}).
In particular, when M is isotropic, then ν_p(φ) = ν_p(a; M). For discussions of the smoothness of scalar refinable functions and refinable function vectors, see [7, 8, 12, 19, 20, 21, 24, 30, 33, 37, 38, 41, 46] and the many references therein.

In the rest of this section, let us discuss the rate of convergence of a vector cascade algorithm.

Theorem 4.4 Let M be an s × s isotropic dilation matrix. Let a be a finitely supported matrix mask on Z^s with multiplicity r. Suppose that the mask a satisfies the sum rules of order J but not J + 1 in (2.7) and (2.8) with a sequence y ∈ (ℓ_0(Z^s))^{1×r}. Let f ∈ (W_p^k(R^s))^{r×1} satisfy the moment conditions of order J with respect to y and D^µ[ŷ(·)f̂(·)](0) = δ(µ) for all |µ| < J − k (the existence of such an initial function vector f is guaranteed by Proposition 3.4). If ν_p(a; M) > k, then the cascade algorithm associated with mask a, dilation matrix M and the initial function vector f converges in (W_p^k(R^s))^{r×1}, and for any 0 < ρ < ρ(M)^{k−ν_p(a;M)} there exists a positive constant C such that

    ‖Q_{a,M}^n f − φ‖_{(W_p^k(R^s))^{r×1}} ≤ C ρ^n,    ‖Q_{a,M}^{n+1} f − Q_{a,M}^n f‖_{(W_p^k(R^s))^{r×1}} ≤ 2C ρ^n    ∀ n ∈ N,    (4.22)

where φ is the M-refinable function vector satisfying φ̂(M^T ξ) = â(ξ)φ̂(ξ) and ŷ(0)φ̂(0) = 1.

Proof: By Proposition 3.1, (3.4) and (2.7) hold for some y ∈ (ℓ_0(Z^s))^{1×r} with ŷ(0)φ̂(0) = 1. By Theorem 3.5, we have D^µ[f − φ] = Σ_{v∈B_{J−1,y}} v ∗ h_{µ,v} for some compactly supported h_{µ,v} ∈ L_p(R^s). Now the rest of the proof is identical to that of Theorem 4.3 showing 6) ⇒ 1).
Note that when the shifts of φ are stable, we have ν_p(φ) = ν_p(a; M). Moreover, the integer J in the above theorem can be taken to be the integer such that J − 1 < ν_p(a; M) ≤ J.
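The geometric decay of the Cauchy differences in (4.22) is easy to observe numerically. The sketch below (my own illustration, not from the paper; assumptions: s = 1, r = 1, M = 2, the cubic B-spline mask, and the initial function f₀ = hat(· − 2), whose centering makes it satisfy the moment conditions of order 2) computes the iterates via (4.16) and measures successive differences on a dyadic grid.

```python
def conv(u, w):
    out = {}
    for i, ui in u.items():
        for j, wj in w.items():
            out[i + j] = out.get(i + j, 0.0) + ui * wj
    return out

def upsample(u):
    return {2 * i: c for i, c in u.items()}

def hat(x):
    return max(0.0, 1.0 - abs(x))

a = {k: (1.0, 4.0, 6.0, 4.0, 1.0)[k] / 16.0 for k in range(5)}  # cubic B-spline mask

masks = {1: dict(a)}
for n in range(2, 8):
    masks[n] = conv(upsample(masks[n - 1]), a)

def f_n(n, x):
    """f_n(x) = 2^n sum_beta a_n(beta) f_0(2^n x - beta), f_0 = hat(. - 2), per (4.16)."""
    return (2.0 ** n) * sum(c * hat(2.0 ** n * x - b - 2.0) for b, c in masks[n].items())

grid = [i / 64.0 for i in range(-64, 5 * 64 + 1)]
diffs = [max(abs(f_n(n + 1, x) - f_n(n, x)) for x in grid) for n in range(1, 7)]

# Cauchy differences decay geometrically (here at rate about 1/4), cf. (4.22).
assert all(d1 < 0.4 * d0 for d0, d1 in zip(diffs, diffs[1:]))
assert abs(f_n(7, 2.0) - 2.0 / 3.0) < 0.01   # limit is the cubic B-spline, peak 2/3
```

Starting instead from the uncentered hat function, which only satisfies the moment conditions of order 1 with respect to the relevant sequence y, the observed decay rate degrades to roughly 1/2, matching the role of the order J of the moment conditions in Theorem 4.4.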
5
Refinable Hermite Interpolants
As an important family of refinable function vectors, refinable Hermite interpolants are useful in computer aided geometric design ([13, 14, 20, 25, 40]). In this section, we shall give a simple criterion characterizing a refinable Hermite interpolant in terms of its mask, and consequently we settle question Q3 in Section 1. As a direct consequence of Theorem 4.3, we have the following result, which generalizes [24] and was also obtained in [6] (but the proof in [6] has a minor flaw).

Corollary 5.1 Let M be an s × s isotropic dilation matrix and a be a finitely supported mask on Z^s with multiplicity r. Let φ be a nonzero compactly supported M-refinable function vector with mask a and dilation M. If φ ∈ (W_p^k(R^s))^{r×1} (when p = ∞, we require φ ∈ (C^k(R^s))^{r×1}) and the shifts of φ are stable, then ν_p(a; M) > k; that is, the vector cascade algorithm associated with mask a and dilation matrix M converges in the Sobolev space (W_p^k(R^s))^{r×1}.

Proof: By Proposition 3.1, (3.4) and (2.7) hold for some y ∈ (ℓ_0(Z^s))^{1×r} with ŷ(0) ≠ 0. Condition 2) in Theorem 4.3 is satisfied by taking f = φ. The claim follows directly from Theorem 4.3.

Let us recall the definition of Hermite interpolants given in [20, 25]. Let Λ_r := {µ ∈ N_0^s : |µ| ≤ r}, and by #Λ_r we denote the cardinality of the set Λ_r. Now the elements in Λ_r can be ordered in such a way that ν = (ν₁, …, ν_s) is less than µ = (µ₁, …, µ_s) if either |ν| < |µ|, or |ν| = |µ| and there is some 1 ≤ i ≤ s with ν_j = µ_j for j = 1, …, i − 1 and ν_i < µ_i. Let φ = (φ_µ)_{µ∈Λ_r} be a column vector of functions on R^s. We say that φ is a Hermite interpolant of order r if φ ∈ (C^r(R^s))^{(#Λ_r)×1} and

    [D^ν φ_µ](α) = δ(µ − ν) δ(α)    ∀ µ, ν ∈ Λ_r, α ∈ Z^s.    (5.1)

Let D^j be defined as in Proposition 2.1. In other words, (5.1) is equivalent to saying that

    [1, D, D², …, D^r] ⊗ φ(α) = δ(α) I_{#Λ_r}    ∀ α ∈ Z^s.
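The interpolation conditions (5.1) can be checked directly in the simplest case s = 1, r = 1, Λ₁ = {0, 1}. The sketch below (my own illustration using the classical cubic Hermite pair, a standard example not defined in the text above) verifies that the value basis function and the slope basis function satisfy [D^ν φ_µ](α) = δ(µ − ν)δ(α) at the integers.

```python
# Classical cubic Hermite basis on [-1, 1]: phi0 interpolates values,
# phi1 interpolates first derivatives (s = 1, r = 1, Lambda_1 = {0, 1}).
def phi0(x):
    return (1 - abs(x)) ** 2 * (1 + 2 * abs(x)) if abs(x) <= 1 else 0.0

def dphi0(x):
    return -6.0 * x * (1 - abs(x)) if abs(x) <= 1 else 0.0

def phi1(x):
    return x * (1 - abs(x)) ** 2 if abs(x) <= 1 else 0.0

def dphi1(x):
    return (1 - abs(x)) * (1 - 3 * abs(x)) if abs(x) <= 1 else 0.0

# Check (5.1): [D^nu phi_mu](alpha) = delta(mu - nu) * delta(alpha).
basis = [(phi0, dphi0), (phi1, dphi1)]
for mu, (f, df) in enumerate(basis):
    for alpha in range(-3, 4):
        assert f(alpha) == (1.0 if (mu == 0 and alpha == 0) else 0.0)   # nu = 0
        assert df(alpha) == (1.0 if (mu == 1 and alpha == 0) else 0.0)  # nu = 1
```

Interpolating data (values, slopes) at the integers is then literally Σ_α [v(α) φ0(· − α) + v'(α) φ1(· − α)], which is the practical appeal of Hermite interpolants in geometric design.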
The definition of a Hermite interpolant can be generalized by replacing Λ_r with a finite subset Λ of N_0^s such that 0 ≤ ν ≤ µ ∈ Λ implies ν ∈ Λ. The following result gives us a simple criterion characterizing a multivariate refinable Hermite interpolant in terms of its mask.

Corollary 5.2 Let M be an s × s isotropic dilation matrix and a be a finitely supported mask on Z^s with multiplicity #Λ_r for some r ∈ N₀. Let φ = (φ_µ)_{µ∈Λ_r} be a compactly supported M-refinable function vector with mask a and dilation matrix M; that is, φ̂(M^T ξ) = â(ξ)φ̂(ξ). Then φ is a Hermite interpolant of order r if and only if

1) φ̂₀(0) = 1 (this is a normalization condition for a refinable function vector);

2) ν_∞(a; M) > r (in particular, the inequality ν₂(a; M) > r + s/2 implies ν_∞(a; M) > r);

3) a(0) = S(M^{−1}, Λ_r)/|detM| and a(Mβ) = 0 for all β ∈ Z^s\{0}, where the matrix S(M^{−1}, Λ_r) is defined by

    (M^{−1}x)^µ/µ! = Σ_{ν∈Λ_r} S(M^{−1}, Λ_r)_{µ,ν} x^ν/ν!,    µ ∈ Λ_r;    (5.2)

4) the mask a satisfies the sum rules of order r + 1 in (2.7) and (2.8) with a sequence y ∈ (ℓ_0(Z^s))^{1×(#Λ_r)} such that

    ((−iD)^µ ŷ)(0)/µ! = Σ_{β∈Z^s} ((−β)^µ/µ!) y(β) = e_µ^T,    |µ| ≤ r, µ ∈ N_0^s,    (5.3)
where e_µ denotes the µ-th coordinate unit vector in R^{#Λ_r}.

Proof: Suppose that φ is a Hermite interpolant of order r. Then φ ∈ (C^r(R^s))^{(#Λ_r)×1} and the shifts of φ are stable (in fact, linearly independent). By Corollary 5.1 or Theorem 4.3, ν_∞(a; M) > r. From the refinement equation φ(M^{−1}·) = |detM| Σ_{β∈Z^s} a(β)φ(· − β), by Proposition 2.1, for j = 0, …, r we have

    [D^j ⊗ φ](M^{−1}·) S(M^{−1}, O_j) = D^j ⊗ [φ(M^{−1}·)] = |detM| Σ_{β∈Z^s} a(β) [D^j ⊗ φ](· − β).

Note that S(M^{−1}, Λ_r) = diag(S(M^{−1}, O₀), S(M^{−1}, O₁), …, S(M^{−1}, O_r)). It follows from the definition of a Hermite interpolant of order r in (5.1) that for any α ∈ Z^s,

    δ(α) S(M^{−1}, Λ_r)/|detM| = [1, D, …, D^r] ⊗ φ(α) S(M^{−1}, Λ_r)/|detM|
        = Σ_{β∈Z^s} a(β) [1, D, …, D^r] ⊗ φ(Mα − β) = a(Mα).
So 3) holds. Since φ is a refinable Hermite interpolant, by Proposition 3.2 we must have

    Σ_{β∈Z^s} [p(β − iD^T)ŷ](0) φ(· − β) = Σ_{β∈Z^s} Σ_{µ∈N_0^s} D^µ p(β) φ_µ(· − β) = p    ∀ p ∈ Π_r    (5.4)

with a sequence y satisfying (5.3). By Proposition 3.1 and (5.4), (2.7) holds for such a sequence y. Consequently, by Proposition 4.1 and ν_∞(a; M) > r, a must satisfy the sum rules of order r + 1 with such a sequence y satisfying (5.3). In particular, from (5.4) we have Σ_{β∈Z^s} φ₀(· − β) = 1 and therefore φ̂₀(0) = 1. So 1) holds.

Conversely, it is known ([20]) that there is a 2-refinable function vector ψ ∈ (C^r(R))^{(r+1)×1} which is a Hermite interpolant of order r and whose mask is supported on [−1, 1]. Such a ψ is in fact a B-spline function vector with multiple knots. Define a function vector f by

    f_{(µ₁,…,µ_s)}(t₁, …, t_s) := ψ_{µ₁}(t₁) ⋯ ψ_{µ_s}(t_s),    (µ₁, …, µ_s) ∈ Λ_r.

It is easy to verify that f is a Hermite interpolant of order r and that (5.4) holds with φ replaced by f, using the sequence y in 4). So f satisfies the moment conditions of order r + 1 with respect to y. Note that 1) implies ŷ(0)φ̂(0) = φ̂₀(0) = 1. By Theorem 4.3, 2) and the fact ŷ(0)φ̂(0) = ŷ(0)f̂(0) = 1, the cascade algorithm associated with mask a, dilation matrix M and the initial function vector f converges to φ in (C^r(R^s))^{(#Λ_r)×1}. Since f is a Hermite interpolant of order r, by 3) it is easy to check by induction and Proposition 2.1 that Q_{a,M}^n f is also a Hermite interpolant of order r. Consequently, φ must be a Hermite interpolant of order r, since D^ν φ_µ(α) = lim_{n→∞} D^ν [Q_{a,M}^n f]_µ(α) = δ(µ − ν)δ(α) for all µ, ν ∈ Λ_r and α ∈ Z^s.

Univariate refinable Hermite interpolants have been studied in [13, 20, 40, 48, 49] and references therein. We say that a mask a is a Hermite interpolatory mask of order r if 3) and 4) in Corollary 5.2 hold. The concept of Hermite interpolatory masks in the univariate setting was introduced in [20], and a family of Hermite interpolatory masks of order r with any dilation factor was constructed in [20]. In the univariate setting with M = 2, a necessary and sufficient condition for a refinable function vector to be a Hermite interpolant was obtained by Zhou [49]. Our characterization in Corollary 5.2 is much simpler than that of [49], even in the univariate case. The characterization of refinable Hermite interpolants has also been discussed in [25], without detailed proofs. The reader is referred to [25] for the construction of multivariate Hermite interpolatory masks with symmetry.
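Condition 3) of Corollary 5.2 is also easy to check on a concrete example. The sketch below (my own illustration; the two-scale coefficients are derived from the values and halved derivatives of the cubic Hermite pair at the half-integers, not quoted from the paper) verifies a(0) = S(M^{−1}, Λ₁)/|detM| and the refinement equation for s = 1, M = 2, r = 1.

```python
# Cubic Hermite pair (value function, slope function), support [-1, 1].
def phi(x):
    if abs(x) > 1:
        return (0.0, 0.0)
    return ((1 - abs(x)) ** 2 * (1 + 2 * abs(x)), x * (1 - abs(x)) ** 2)

# Two-scale coefficients a(-1), a(0), a(1), derived so that
# phi(x) = 2 * sum_b a(b) * phi(2x - b) (a derivation for this sketch).
A = {
    -1: ((0.25, 0.375), (-0.0625, -0.0625)),
     0: ((0.5, 0.0), (0.0, 0.25)),
     1: ((0.25, -0.375), (0.0625, -0.0625)),
}

# Condition 3) of Corollary 5.2 with s = 1, M = 2, Lambda_1 = {0, 1}:
# (x/2)^mu / mu! = sum_nu S_{mu,nu} x^nu / nu! gives S(1/2, Lambda_1) = diag(1, 1/2).
S = ((1.0, 0.0), (0.0, 0.5))
for i in range(2):
    for j in range(2):
        assert abs(A[0][i][j] - S[i][j] / 2.0) < 1e-15
# The support {-1, 0, 1} contains no other even integer, so a(2*beta) = 0, beta != 0.

# Verify the refinement equation on a dyadic grid.
for i in range(-64, 65):
    x = i / 32.0
    lhs = phi(x)
    rhs = [0.0, 0.0]
    for b, m in A.items():
        p = phi(2 * x - b)
        for row in range(2):
            rhs[row] += 2.0 * (m[row][0] * p[0] + m[row][1] * p[1])
    assert abs(lhs[0] - rhs[0]) < 1e-12 and abs(lhs[1] - rhs[1]) < 1e-12
```

Note how a(0) carries the diagonal scaling diag(1, 1/2)/2 rather than being a scalar multiple of the identity: the derivative components rescale under x ↦ x/2, which is exactly what the matrix S(M^{−1}, Λ_r) in (5.2) records.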
6
Error Estimate of Vector Cascade Algorithms in Sobolev Spaces
In applications, when the coefficients of a mask (such as the Daubechies orthogonal masks and the orthogonal matrix masks in [15]) are irrational numbers, one often needs to truncate such a mask. Daubechies and Huang [11] studied how truncation affects the associated scalar refinable function in the univariate L_∞ case in the frequency domain. In [17, 18], Han first provided a sharp error estimate for multivariate scalar refinable functions and for their cascade algorithms with a perturbed mask in any L_p norm. More specifically, it was proved in [17, 18] that if a scalar cascade algorithm associated with a mask a converges in the L_p norm, then there exist two positive constants η and C such that for any mask b with ‖a − b‖_{ℓ_1(Z^s)} < η that satisfies the sum rules of order 1, one has

    ‖Q_{a,M}^n f − Q_{b,M}^n f‖_{L_p(R^s)} ≤ C ‖a − b‖_{ℓ_1(Z^s)}    ∀ n ∈ N,

where f is an initial function in the scalar cascade algorithm, and ‖φ_a − φ_b‖_{L_p(R^s)} ≤ C ‖a − b‖_{ℓ_1(Z^s)}, where φ_a and φ_b denote the scalar refinable functions with masks a and b, respectively, with the standard normalization condition φ̂_a(0) = φ̂_b(0) = 1. The main idea in [17, 18] was used in [23] to obtain an error estimate for vector cascade algorithms in the univariate L_p case, and was recently generalized by Chen and Plonka [5] to establish error estimates for scalar cascade algorithms in a Sobolev space with a particular initial function, namely the tensor product of a certain B-spline function. Such a restriction on the initial functions in [5] was completely removed in [22]. As we shall discuss in the following, the situation for vector cascade algorithms is much more complicated.

For a sequence y ∈ (ℓ_0(Z^s))^{1×r}, we denote by F_{k,y} the set of all compactly supported function vectors f ∈ (W_p^k(R^s))^{r×1} such that f satisfies the moment conditions of order k + 1 with respect to y in (3.3). Note that the set F_{k,y} depends only on the values D^µ ŷ(0), |µ| ≤ k. One can prove that Q_{a,M} F_{k,y} ⊆ F_{k,y} if and only if a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with the sequence y. Let a be a finitely supported mask on Z^s with multiplicity r. We denote by y^a ∈ (ℓ_0(Z^s))^{1×r} a sequence such that

    D^µ[ŷ^a(M^T·) â(·)](0) = D^µ ŷ^a(0)    ∀ |µ| ≤ k  and  ŷ^a(0) ≠ 0.    (6.1)
Note that there are many choices for such a sequence y^a. But when (3.1) holds, by Proposition 3.2, up to a scalar multiplication there exists a unique sequence y^a ∈ (ℓ(Λ_k))^{1×r}, where Λ_k := {β ∈ N_0^s : |β| ≤ k}. In the scalar case r = 1, by uniformly normalizing y^a by ŷ^a(0) = 1, we observe that the set F_{k,y^a} is independent of the choice of the mask a, since F_{k,y^a} = F_{k,δ}. However, when r > 1, it is not easy to uniformly normalize the vector sequence y^a, and the set F_{k,y^a} indeed depends on the sequence y^a, which in turn depends on the mask a. This difficulty makes the error estimate in the vector case much more complicated. As a matter of fact, the error estimate for univariate vector cascade algorithms in [23] is quite rough, and the perturbed mask has to satisfy a very strict condition, which makes the error estimate in [23] less useful in practice. It is the purpose of this section to satisfactorily settle question Q4 in Section 1 for the vector case in any dimension, using the results of the previous sections.

Lemma 6.1 Let y, ỹ ∈ (ℓ_0(Z^s))^{1×r} such that ŷ(0) ≠ 0 and ŷ̃(0) ≠ 0. Then F_{k,y} = F_{k,ỹ} if and only if there exists c ∈ ℓ_0(Z^s) such that ĉ(0) = 1 and

    D^µ ŷ̃(0) = D^µ[ĉ(·)ŷ(·)](0)    ∀ |µ| ≤ k.    (6.2)

Similarly, V_{k,y} = V_{k,ỹ} if and only if there exists c ∈ ℓ_0(Z^s) such that (6.2) holds.
Proof: By Proposition 2.4, it suffices to prove the claim for ŷ(ξ) = [ŷ₁(ξ), 0, …, 0] with ŷ₁(0) = 1. In this case, F_{k,y} consists of all compactly supported function vectors [f₁, f₂, …, f_r]^T ∈ (W_p^k(R^s))^{r×1} such that f̂₁(0) = 1 and D^µ f̂₁(2πβ) = 0 for all |µ| ≤ k and β ∈ Z^s\{0}. Write [ŷ̃₁, …, ŷ̃_r] = ŷ̃. Now it is straightforward to see that F_{k,y} = F_{k,ỹ} if and only if ŷ̃₁(0) = 1 and D^µ ŷ̃_j(0) = 0 for all |µ| ≤ k and j = 2, …, r. Take c ∈ ℓ(Λ_k) to be the unique sequence such that D^µ ĉ(0) = D^µ[ŷ̃₁(·)/ŷ₁(·)](0) for all |µ| ≤ k. This completes the proof.

Theorem 6.2 Let M be an s × s isotropic dilation matrix. Let k be a nonnegative integer and Ω be a compact subset of Z^s with 0 ∈ Ω. Let a be a matrix mask on Z^s with multiplicity r. We assume that

a) a(β) = 0 for all β ∈ Z^s\Ω; that is, a ∈ (ℓ(Ω))^{r×r};

b) (3.1) holds; that is, 1 is a simple eigenvalue of â(0) and all other eigenvalues of â(0) are less than ρ(M)^{−k} in modulus;

c) a satisfies the sum rules of order k + 1 in (2.7) and (2.8) with a sequence y^a ∈ (ℓ_0(Z^s))^{1×r};

d) ρ_k(a; M, p, y^a) < |detM|^{1/p−1} ρ(M)^{−k}; that is, the cascade algorithm associated with mask a, dilation M and every initial function vector f ∈ F_{k,y^a} converges in the Sobolev space (W_p^k(R^s))^{r×1}.

Then there exist positive constants η and C such that for every b ∈ N_η(a; k, M, Ω) satisfying V_{k,y^b} = V_{k,y^a}, one has ρ_k(b; M, p, y^b) < |detM|^{1/p−1} ρ(M)^{−k} and

    ‖a_n ∗ v − b_n ∗ v‖_{(ℓ_p(Z^s))^{r×1}} ≤ C |detM|^{n(1/p−1)} ρ(M)^{−kn} ‖a − b‖_{(ℓ_1(Z^s))^{r×r}}    ∀ v ∈ B_{k−1,y^a}, n ∈ N,    (6.3)

where the sequences a_n and b_n are defined by â_n(ξ) = â((M^T)^{n−1}ξ) ⋯ â(M^T ξ) â(ξ) and b̂_n(ξ) = b̂((M^T)^{n−1}ξ) ⋯ b̂(M^T ξ) b̂(ξ). By b ∈ N_η(a; k, M, Ω) we mean that

1) b(β) = 0 for all β ∈ Z^s\Ω; that is, b ∈ (ℓ(Ω))^{r×r};

2) ‖a − b‖_{(ℓ_1(Z^s))^{r×r}} < η;

3) 1 is a simple eigenvalue of b̂(0) and all other eigenvalues of b̂(0) are less than ρ(M)^{−k} in modulus;

4) b satisfies the sum rules of order k + 1 in (2.7) and (2.8) with a sequence y^b ∈ (ℓ_0(Z^s))^{1×r}.

Note that both η and C are independent of b and n.

Proof: Denote m := |detM|. Let A_ε and B_ε be defined as in (4.5) for the masks a and b, respectively, with K defined as in 10) of Theorem 4.3 and K₀ := Ω − Γ_M + {α ∈ Z^s : |α| ≤ 1}. Denote A := {A_ε|_{V_{k,y^a}∩(ℓ(K))^{r×1}} : ε ∈ Γ_M} and B := {B_ε|_{V_{k,y^a}∩(ℓ(K))^{r×1}} : ε ∈ Γ_M}. By the equivalence relation in (4.6) (see also Theorem 4.3), we have ρ_p(A) < m^{1/p−1} ρ(M)^{−k}. By (4.4), there is a
1/n
positive integer n such that kAn kp < m1/p−1 ρ(M )−k . Therefore, there exists 0 < ρ < 1 such that kAn k1/n < m1/p−1 ρ(M )−k ρ. p Consequently, by the continuity of kAn kp , there exists η > 0 such that for all b ∈ Nη (a; k, M, Ω) 1/n such that Vk,yb = Vk,ya , one has kBn kp < m1/p−1 ρ(M )−k ρ which yields ρk (b; M, p, y b ) = ρk (b; M, p, y a ) = ρp (B) < m1/p−1 ρ(M )−k ρ < m1/p−1 ρ(M )−k . In order to prove (6.3), by Proposition 2.4, it suffices to prove the claims for the case yba (ξ) = [b y1a (ξ), 0, . . . , 0] with yb1a (0) = 1. In this case, by c), b a(ξ) must take the form of (2.9) and (2.10) holds. Now by Lemma 6.1, it is easy to see that Vk,yb = Vk,ya if and only if bb(ξ) also takes the form of (2.9) with a being replaced by b. By 3), (2.10) holds with a being replaced by b. Using `p norm on a finite matrix, we observe that kbn ∗ v − an ∗ vk(`p (Zs ))r×1 = k(bn − an ) ∗ vk(`p (Zs ))r×1 ³ X ´1/p kBε1 · · · Bεn v − Aε1 · · · Aεn vkp(`p (Zs ))r×1 = 6
ε1 ,...,εn ∈ΓM n X³ X j=1
kBε1 · · · Bεj−1 (Bεj − Aεj )Aεj+1 · · · Aεn vkp(`p (Zs ))r×1
´1/p
.
ε1 ,...,εn ∈ΓM
Note that Vk,ya = Vk × `0 (Zs ) × · · · × `0 (Zs ). Using the special form of b a(ξ) and bb(ξ) in (2.9), by (2.19) and (2.20), it is not difficult to show that (Bεj − Aεj )Vk−1,ya ⊆ Vk,ya . By Proposition 2.5, Aεj+1 · · · Aεn Vk−1,ya ⊆ Vk−1,ya . Note that kBεj − Aεj k 6 ka − bk(`1 (Zs ))r×r . Now by a similar argument as in Chen and Plonka [5], one has X X kBε1 · · · Bεj−1 (Bεj − Aεj )Aεj+1 · · · Aεn vkp(`p (Zs ))r×1 εj+1 ,...,εn ∈ΓM ε1 ,...,εj ∈ΓM
6 C1 m(j−1)(1−p) ρ(M )−k(j−1)p ρ(j−1)p kBεj − Aεj kp
X
kAεj+1 · · · Aεn vkp(`p (Zs ))r×1
εj+1 ,...,εn ∈ΓM
6 C1 C2 m(j−1)(1−p) ρ(M )−k(j−1)p ρ(j−1)p ka − bkp(`1 (Zs ))r×r m(1−p)(n−j) ρ(M )−k(n−j)p kvkp(`1 (Zs ))r×1 = C1 C2 mp−1 ρ(M )kp kvkp(`1 (Zs ))r×1 ρ(j−1)p ka − bkp(`1 (Zs ))r×r mn(1−p) ρ(M )−knp for some constants C1 and C2 independent of b and n. It follows from the above inequality that kbn ∗ v − an ∗ vk(`p (Zs ))r×1 1/p
1/p
6 C1 C2 m1−1/p ρ(M )k kvk(`1 (Zs ))r×1 mn(1−1/p) ρ(M )−kn ka − bk(`1 (Zs ))r×r
n X j=1
So, (6.3) holds with the constant C given by C=
1/p 1/p C1 C2 m1−1/p ρ(M )k
max{kvk(`1 (Zs ))r×1 : v ∈ Bk−1,ya }
∞ X j=0
30
ρj < ∞.
ρj−1 .
This completes the proof.

Note that by Theorem 4.3, d) in Theorem 6.2 implies both b) and c) in Theorem 6.2. We explicitly listed b) and c) in Theorem 6.2 for the convenience of discussion only. Also note that we can use other norms to measure the distance between $a$ and $b$. Since both $a$ and $b$ belong to the finite dimensional space $(\ell(\Omega))^{r\times r}$ and all the norms on $(\ell(\Omega))^{r\times r}$ are equivalent, for simplicity, in (6.3) we used the $\ell_1$ norm to measure the distance between the masks $a$ and $b$.

The following is the main result in this section, which settles Q4 in Section 1.

Theorem 6.3 Let $M$ be an $s\times s$ isotropic dilation matrix and assume the same conditions a), b), c), d) on the mask $a$ as in Theorem 6.2. Then there exist positive constants $\eta$, $C_1$, $C_2$ and $C_3$ such that:

1) For every initial function vector $f \in \mathcal{F}_{k,y^a}$,
$$\|Q_{a,M}^n f - Q_{b,M}^n f\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le C_1 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \qquad \forall\, n \in \mathbb{N},\ b \in \mathcal{N}_\eta(a; k, M, \Omega), \tag{6.4}$$
provided that $\mathcal{F}_{k,y^b} = \mathcal{F}_{k,y^a}$;

2) With an appropriate choice for $y^a$ and $y^b$, we have
$$\|y^a - y^b\|_{(\ell_1(\mathbb{Z}^s))^{1\times r}} \le C_2 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \qquad \forall\, b \in \mathcal{N}_\eta(a; k, M, \Omega); \tag{6.5}$$

3) $\rho_k(b; M, p, y^b) < |\det M|^{1/p-1}\rho(M)^{-k}$; that is, for every mask $b \in \mathcal{N}_\eta(a; k, M, \Omega)$, the cascade algorithm associated with mask $b$, dilation matrix $M$ and every initial function vector $f \in \mathcal{F}_{k,y^b}$ converges in the Sobolev space $(W_p^k(\mathbb{R}^s))^{r\times 1}$;

4) With the choice $y^b$ in 2), let $\phi^a$ and $\phi^b$ be two compactly supported refinable function vectors such that
$$\widehat{\phi^a}(M^T\xi) = \hat{a}(\xi)\widehat{\phi^a}(\xi),\quad \widehat{y^a}(0)\widehat{\phi^a}(0) = 1, \qquad \widehat{\phi^b}(M^T\xi) = \hat{b}(\xi)\widehat{\phi^b}(\xi),\quad \widehat{y^b}(0)\widehat{\phi^b}(0) = 1. \tag{6.6}$$
Then $\phi^a$ and $\phi^b$ belong to the Sobolev space $(W_p^k(\mathbb{R}^s))^{r\times 1}$ and one has the following estimate:
$$\|\phi^a - \phi^b\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le C_3 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \qquad \forall\, b \in \mathcal{N}_\eta(a; k, M, \Omega). \tag{6.7}$$
Note that all $\eta$, $C_1$, $C_2$ and $C_3$ are independent of $n$ and $b$.

Proof: Let $m := |\det M|$. By Lemma 6.1, $\mathcal{F}_{k,y^a} = \mathcal{F}_{k,y^b}$ implies $V_{k,y^a} = V_{k,y^b}$. Since $f \in \mathcal{F}_{k,y^a}$, by Corollary 3.6,
$$[\otimes^k D] \otimes f = \sum_{v \in \mathcal{B}_{k-1,y^a}} v * H_{k,v} \tag{6.8}$$
for some compactly supported function vectors $H_{k,v} \in (L_p(\mathbb{R}^s))^{1\times s^k}$. By induction,
$$Q_{a,M}^n f - Q_{b,M}^n f = m^n \sum_{\beta \in \mathbb{Z}^s} (a_n - b_n)(\beta) f(M^n\cdot - \beta).$$
Therefore, by Proposition 2.1 and (6.8), we have
$$[\otimes^k D] \otimes [Q_{a,M}^n f - Q_{b,M}^n f] = m^n \sum_{\beta \in \mathbb{Z}^s} (a_n - b_n)(\beta)\big([\otimes^k D] \otimes f\big)(M^n\cdot - \beta)(\otimes^k M^n)$$
$$= m^n \sum_{v \in \mathcal{B}_{k-1,y^a}} \sum_{\beta \in \mathbb{Z}^s} (a_n - b_n)(\beta)(v * H_{k,v})(M^n\cdot - \beta)(\otimes^k M^n)$$
$$= m^n \sum_{v \in \mathcal{B}_{k-1,y^a}} \sum_{\beta \in \mathbb{Z}^s} (a_n * v - b_n * v)(\beta) H_{k,v}(M^n\cdot - \beta)(\otimes^k M^n).$$
Since $\otimes^k M^n$ is isotropic and all $H_{k,v}$ are compactly supported function vectors in $(L_p(\mathbb{R}^s))^{1\times s^k}$, there exists a positive constant $C_0$ such that
$$\big\|[\otimes^k D] \otimes [Q_{a,M}^n f - Q_{b,M}^n f]\big\|_{(L_p(\mathbb{R}^s))^{r\times s^k}} \le C_0\, m^{n(1-1/p)}\rho(M)^{kn} \sum_{v \in \mathcal{B}_{k-1,y^a}} \|a_n * v - b_n * v\|_{(\ell_p(\mathbb{Z}^s))^{r\times 1}}.$$
By Theorem 6.2, (6.3) holds. Consequently, we deduce that
$$\big\|[\otimes^k D] \otimes [Q_{a,M}^n f - Q_{b,M}^n f]\big\|_{(L_p(\mathbb{R}^s))^{r\times s^k}} \le C C_0 (\#\mathcal{B}_{k-1,y^a})\|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}$$
for all $n \in \mathbb{N}$ and $b \in \mathcal{N}_\eta(a; k, M, \Omega)$ such that $V_{k,y^b} = V_{k,y^a}$. Since all $Q_{a,M}^n f$ and $Q_{b,M}^n f$ are supported on a fixed compact set, we conclude that (6.4) holds for some constant $C_1$ independent of $b$ and $n$.

In the following, we prove 2), 3) and 4). By Proposition 2.4, without loss of generality, we assume that $\widehat{y^a}(\xi) = [\widehat{y^a_1}(\xi), 0, \ldots, 0]$ with $\widehat{y^a_1}(0) = 1$. Write $\widehat{y^b}(\xi) = [\widehat{y^b_1}(\xi), \widehat{y^b_2}(\xi)]$. Since $b \in \mathcal{N}_\eta(a; k, M, \Omega)$ implies that $1$ is a simple eigenvalue of $\hat{b}(0)$, there is a unique solution $\widehat{y^b}(0)$ to the equation $\widehat{y^b}(0)\hat{b}(0) = \widehat{y^b}(0)$ with $\widehat{y^b_1}(0) = 1$; in fact, $\widehat{y^b_2}(0) = \widehat{b_{1,2}}(0)[I_{r-1} - \widehat{b_{2,2}}(0)]^{-1}$. Since $\hat{a}(\xi)$ takes the form of (2.9), we have $\rho(\widehat{a_{2,2}}(0)) < 1$ and $\widehat{a_{1,2}}(0) = 0$. Therefore, $\widehat{y^b_2}(0)$ is well defined, since $\rho(\widehat{b_{2,2}}(0)) < 1$ when $\eta$ is small enough. Since $\widehat{a_{1,2}}(0) = 0$, by condition 3) in Theorem 6.2, $\|\widehat{y^a}(0) - \widehat{y^b}(0)\| \le C\|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}$ for some constant $C$. It follows from Lemma 2.2 that there is a unique solution $\{D^\mu \widehat{y^b}(0) : 0 < |\mu| \le k\}$ to the system of linear equations
$$D^\mu[\widehat{y^b}(\cdot)\hat{b}(\cdot)](0) = D^\mu \widehat{y^b}(0), \qquad 0 < |\mu| \le k,$$
and in fact one can show that there exists a positive constant $C_0$ independent of $b$ such that
$$\|D^\mu \widehat{y^b}(0) - D^\mu \widehat{y^a}(0)\| \le C_0 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \qquad \forall\, b \in \mathcal{N}_\eta(a; k, M, \Omega). \tag{6.9}$$
It is well known that there exists a unique $y \in (\ell(\Lambda_k))^{1\times r}$ for which the values $D^\mu \hat{y}(0)$, $|\mu| \le k$, are preassigned. Choose the sequences $y^a$ and $y^b$ in $(\ell(\Lambda_k))^{1\times r}$. It follows from (6.9) that (6.5) holds for some constant $C_2$ independent of $b$.

Let $y^b$ be the sequence chosen above with $\widehat{y^b_1}(0) = 1$. As in the proof of Proposition 2.4, there exists a unique sequence $c \in (\ell(\Lambda_k))^{1\times(r-1)}$ such that
$$D^\mu[\widehat{y^b_2}(\cdot) - \hat{c}(\cdot)\widehat{y^b_1}(\cdot)](0) = 0 \qquad \forall\, |\mu| \le k.$$
Consequently, it follows from (6.9) that
$$\|c\|_{(\ell_1(\mathbb{Z}^s))^{1\times(r-1)}} \le C\|y^b - y^a\|_{(\ell_1(\mathbb{Z}^s))^{1\times r}} \le C C_2 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \tag{6.10}$$
for some constant $C$ independent of $b$. Define $U \in (\ell_0(\mathbb{Z}^s))^{r\times r}$ by $\hat{U}(\xi) := \begin{bmatrix} 1 & -\hat{c}(\xi) \\ 0 & I_{r-1}\end{bmatrix}$. Define
$$\widehat{\tilde{b}}(\xi) := \hat{U}(M^T\xi)^{-1}\hat{b}(\xi)\hat{U}(\xi), \qquad \widehat{y^{\tilde{b}}}(\xi) := \widehat{y^b}(\xi)\hat{U}(\xi) \quad\text{and}\quad \widehat{\phi^{\tilde{b}}}(\xi) := \hat{U}(\xi)^{-1}\widehat{\phi^b}(\xi),$$
where $\phi^b$ is given in (6.6). Then $\widehat{\phi^{\tilde{b}}}(M^T\xi) = \widehat{\tilde{b}}(\xi)\widehat{\phi^{\tilde{b}}}(\xi)$. It follows from (6.10) that there exists a small enough $\eta' > 0$ such that $b \in \mathcal{N}_{\eta'}(a; k, M, \Omega)$ implies $\tilde{b} \in \mathcal{N}_\eta(a; k, M, \Omega)$, since
$$\|U - I_r\delta\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} = \|c\|_{(\ell_1(\mathbb{Z}^s))^{1\times(r-1)}} \le C C_2 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}. \tag{6.11}$$
Since $D^\mu \widehat{y^a_j}(0) = D^\mu \widehat{y^{\tilde{b}}_j}(0) = 0$ for all $|\mu| \le k$ and $j = 2, \ldots, r$, using the relation between $b$ and $\tilde{b}$, one can easily verify that $\|a - \tilde{b}\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \le C_3 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}$ for some constant $C_3$ independent of $b$. Replace $\eta$ by the smaller $\eta'$. By Lemma 6.1 and $\widehat{y^a_1}(0) = \widehat{y^{\tilde{b}}_1}(0) = \widehat{y^b_1}(0) = 1$, $\mathcal{F}_{k,y^{\tilde{b}}} = \mathcal{F}_{k,y^a}$. By what has been proved in 1), we have the following estimate:
$$\|\phi^{\tilde{b}} - \phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le C_1 \|a - \tilde{b}\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \le C_1 C_3 \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}$$
and consequently, $\|\phi^{\tilde{b}}\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le 2\|\phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}}$ for small enough $\eta$. On the other hand, since $\widehat{\phi^b}(\xi) - \widehat{\phi^{\tilde{b}}}(\xi) = (\hat{U}(\xi) - I_r)\widehat{\phi^{\tilde{b}}}(\xi)$, we deduce that
$$\|\phi^b - \phi^{\tilde{b}}\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le \|U - I_r\delta\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}} \|\phi^{\tilde{b}}\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le 2 C C_2 \|\phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}.$$
Consequently, we have the estimate in (6.7), since
$$\|\phi^b - \phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le \|\phi^b - \phi^{\tilde{b}}\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} + \|\phi^{\tilde{b}} - \phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}} \le (C_1 C_3 + 2 C C_2 \|\phi^a\|_{(W_p^k(\mathbb{R}^s))^{r\times 1}})\|a - b\|_{(\ell_1(\mathbb{Z}^s))^{r\times r}}.$$
By Theorem 6.2, we have $\rho_k(\tilde{b}; M, p, y^{\tilde{b}}) < m^{1/p-1}\rho(M)^{-k}$ since $V_{k,y^{\tilde{b}}} = V_{k,y^a}$. By the relation between $b$ and $\tilde{b}$, it is easy to verify that $\rho_k(b; M, p, y^b) = \rho_k(\tilde{b}; M, p, y^{\tilde{b}})$. Consequently, $\rho_k(b; M, p, y^b) < m^{1/p-1}\rho(M)^{-k}$. By Theorem 4.3, 3) holds.
Suppose that $a$ satisfies the sum rules of order $k+1$ but not $k+2$. The proofs of Theorems 6.2 and 6.3 also tell us that $\lim_{\eta\to 0,\, b \in \mathcal{N}_\eta(a;k,M,\Omega)} \nu_p(b; M) = \nu_p(a; M)$, which is not a trivial fact, since the space $V_{k,y^b}$ in the definition of $\nu_p(b; M)$ changes with $b$ and is not invariant under perturbation.
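For concreteness, the iterated masks $a_n$ appearing in the estimate (6.3), with $\widehat{a_n}(\xi) = \hat{a}(M^{n-1}\xi)\cdots\hat{a}(M\xi)\hat{a}(\xi)$ in the univariate scalar case, can be computed by repeated upsampling and convolution, since a product of symbols corresponds to a convolution of coefficient sequences. The following minimal sketch ($s = 1$, $r = 1$, $M = 2$) uses the hat-function mask as a stock example; the specific mask and the sampling remark in the comments are illustration assumptions, not taken from the text:

```python
import numpy as np

def iterated_mask(a, n, M=2):
    """Coefficients of a_n, where a_n^(xi) = a^(M^{n-1} xi) ... a^(M xi) a^(xi)
    for a finitely supported scalar mask a on Z.  The j-th factor in the symbol
    product corresponds to the mask upsampled by M^j in the coefficient domain."""
    an = np.asarray(a, dtype=float)              # j = 0 factor
    for j in range(1, n):
        up = np.zeros((len(a) - 1) * M**j + 1)
        up[::M**j] = a                            # insert M^j - 1 zeros between taps
        an = np.convolve(up, an)                  # symbol product = convolution
    return an

# Hat-function mask (illustration): phi = 2*(phi(2x+1)/4 + phi(2x)/2 + phi(2x-1)/4).
# For this particular mask, |det M|^n * a_n samples the hat function on 2^{-n} Z.
print(4 * iterated_mask([0.25, 0.5, 0.25], 2))    # hat values at beta/4, beta = -3..3
```

The doubling of support at each step also makes visible why the factor $|\det M|^{n(1/p-1)}$ appears in (6.3) when norms of $a_n * v - b_n * v$ are measured in $\ell_p$.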
7 Computing the Important Quantity $\nu_2(a; M)$

Since the quantity $\nu_p(a; M)$, which is defined in (4.3), plays a very important role in characterizing the convergence of a vector cascade algorithm in a Sobolev space and in characterizing the $L_p$ smoothness of a refinable function vector, it is of interest to find a numerical algorithm for efficiently computing or estimating the quantity $\nu_p(a; M)$.
For a matrix $A$, we denote $A^* := \bar{A}^T$. Define an inner product on $(\ell_2(\mathbb{Z}^s))^{m\times n}$ by
$$\langle u, v\rangle := \operatorname{trace}\Big(\sum_{\beta \in \mathbb{Z}^s} u(\beta)v(-\beta)^*\Big) = \operatorname{trace}\Big(\frac{1}{(2\pi)^s}\int_{[-\pi,\pi)^s} \hat{u}(\xi)^* \hat{v}(\xi)\, d\xi\Big), \qquad u, v \in (\ell_2(\mathbb{Z}^s))^{m\times n}. \tag{7.1}$$
There are two operators $\mathcal{S}_{a,M}$ and $\mathcal{T}_{a,M}$, which are convolved versions of the operators $S_{a,M}$ and $T_{a,M}$ in Proposition 2.5. Define $\mathcal{S}_{a,M}$ and $\mathcal{T}_{a,M}$ to be
$$\mathcal{S}_{a,M}v(\alpha) = |\det M| \sum_{\beta,\gamma \in \mathbb{Z}^s} a(\beta - M\gamma)^* v(\gamma) a(\beta - \alpha), \qquad \alpha \in \mathbb{Z}^s,\ v \in (\ell_0(\mathbb{Z}^s))^{r\times r},$$
$$\mathcal{T}_{a,M}v(\alpha) = |\det M| \sum_{\beta,\gamma \in \mathbb{Z}^s} a(M\alpha - \beta) v(\gamma) a(\gamma - \beta)^*, \qquad \alpha \in \mathbb{Z}^s,\ v \in (\ell_0(\mathbb{Z}^s))^{r\times r}.$$
It is easy to check that $\langle \mathcal{S}_{a,M}u, v\rangle = \langle u, \mathcal{T}_{a,M}v\rangle$ for all $u, v \in (\ell_0(\mathbb{Z}^s))^{r\times r}$. For a matrix $A$ or an operator $A$ acting on a finite dimensional space $V$, we denote by $\operatorname{spec}(A)$ or $\operatorname{spec}(A|_V)$ the set of all eigenvalues of $A$ or $A|_V$, counting the multiplicity of the eigenvalues. It is known in the literature that $\nu_2(a; M)$ can be computed by finding the spectral radius of a finite matrix (see [8, 12, 19, 21, 24, 30, 33, 38, 37, 46, 47] and references therein). In the vector case, Jia and Jiang [33] found the following algorithm for computing $\nu_2(a; M)$ for an isotropic dilation matrix $M$, for which we shall provide a self-contained and simple proof here.

Theorem 7.1 Let $M$ be an $s\times s$ dilation matrix and $\sigma = (\sigma_1, \ldots, \sigma_s)$, where $\operatorname{spec}(M) = \{\sigma_1, \ldots, \sigma_s\}$. Let $a$ be a finitely supported mask on $\mathbb{Z}^s$ with multiplicity $r$ such that $a$ satisfies the sum rules of the highest possible order $k+1$ but not $k+2$ in (2.7) and (2.8) with some sequence $y \in (\ell_0(\mathbb{Z}^s))^{1\times r}$. Then the quantity $\nu_2(a; M)$, which is defined in (4.3), can be calculated by the following procedure:

1) Form a new sequence $b$ on $\mathbb{Z}^s$ with multiplicity $r^2$ by
$$b(\alpha) := |\det M| \sum_{\beta \in \mathbb{Z}^s} a(\beta) \otimes a(\alpha + \beta), \qquad \alpha \in \mathbb{Z}^s; \tag{7.2}$$

2) Calculate the set $K = \mathbb{Z}^s \cap \sum_{j=1}^\infty M^{-j}(\operatorname{supp} b)$, where $\operatorname{supp} b := \{\beta \in \mathbb{Z}^s : b(\beta) \ne 0\}$;

3) Define the set $E_k$ to be
$$E_k := \big\{\lambda\sigma^{-\mu}, \bar{\lambda}\sigma^{-\mu} : \lambda \in \operatorname{spec}(\hat{a}(0))\backslash\{1\},\ |\mu| \le k\big\} \cup \{\sigma^{-\mu} : |\mu| \le 2k+1\}.$$
Then the quantity $\rho_k(a; M, 2, y)$, which is defined in (4.1), is given by $\sqrt{\rho_k/|\det M|}$, where
$$\rho_k := \max\big\{|\lambda| : \lambda \in \operatorname{spec}\big((b(M\alpha - \beta))_{\alpha,\beta \in K}\big)\backslash E_k\big\}. \tag{7.3}$$
Consequently, $\nu_2(a; M) = -\log_{\rho(M)} \sqrt{\rho_k}$.
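As an illustration of the procedure above, the following minimal numerical sketch treats the simplest setting: a scalar ($r = 1$), univariate ($s = 1$) mask, so $\operatorname{spec}(\hat{a}(0))\backslash\{1\}$ is empty and $E_k = \{M^{-\mu} : 0 \le \mu \le 2k+1\}$. The test mask (the hat function) and the nearest-match removal of $E_k$ from the computed spectrum are assumptions made for this sketch:

```python
import numpy as np

def nu2_scalar(a, M=2, k=1):
    """Sketch of the procedure in Theorem 7.1 for a scalar (r = 1),
    univariate (s = 1) mask a with dilation M, where k + 1 is the
    (assumed known) order of sum rules satisfied by a."""
    a = np.asarray(a, dtype=float)
    # Step 1): b(alpha) = |det M| * sum_beta a(beta) * a(alpha + beta).
    b = M * np.correlate(a, a, mode="full")      # supported on -(N-1)..N-1
    boff = -(len(a) - 1)
    supp = [boff + i for i in range(len(b)) if b[i] != 0.0]
    # Step 2): K = Z cap sum_{j>=1} M^{-j} supp(b), via the fixed-point
    # recursion K_j := K_{j-1} cap M^{-1}(K_{j-1} + supp(b)).
    K = set(range(min(supp), max(supp) + 1))
    while True:
        Knew = {al for al in K if any(M * al - s in K for s in supp)}
        if Knew == K:
            break
        K = Knew
    # Step 3): eigenvalues of (b(M*alpha - beta))_{alpha,beta in K}; remove
    # E_k = {M^{-mu} : 0 <= mu <= 2k+1} (spec(a^(0)) \ {1} is empty here).
    Ks = sorted(K)

    def bval(i):
        return b[i - boff] if 0 <= i - boff < len(b) else 0.0

    T = np.array([[bval(M * al - be) for be in Ks] for al in Ks])
    eigs = list(np.linalg.eigvals(T))
    for e in (float(M) ** -mu for mu in range(2 * k + 2)):
        eigs.remove(min(eigs, key=lambda z: abs(z - e)))  # drop nearest match
    rho_k = max(abs(z) for z in eigs)
    return -0.5 * np.log(rho_k) / np.log(M)  # nu_2 = -log_{rho(M)} sqrt(rho_k)

# Hat-function mask, sum rules of order 2 (k = 1): the hat lies in W_2^s
# exactly for s < 3/2, so nu_2 should come out as 1.5.
print(round(nu2_scalar([0.25, 0.5, 0.25], M=2, k=1), 6))  # -> 1.5
```

For the hat mask the transition matrix is $5\times 5$ with spectrum $\{1, 1/2, 1/4, 1/8, 1/8\}$; after removing $E_1 = \{1, 1/2, 1/4, 1/8\}$ one is left with $\rho_1 = 1/8$, recovering the known smoothness exponent $3/2$.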
Proof: By Proposition 2.4, without loss of generality, we can assume $\hat{y}(\xi) = [\hat{y}_1(\xi), 0, \ldots, 0]$. By the assumption on the mask $a$ and Proposition 2.4, $\hat{a}(\xi)$ must take the form of (2.9) such that (2.10) holds. Let $W_k := \operatorname{span}\{w : \hat{w}(\xi) = \hat{u}(\xi)\hat{v}(\xi)^*,\ u, v \in V_{k,y}\}$. Since $V_{k,y} = V_k \times \ell_0(\mathbb{Z}^s) \times \cdots \times \ell_0(\mathbb{Z}^s)$, where $V_k$ is defined in (2.18), we can easily deduce that
$$W_k = \left\{\begin{bmatrix} v_{1,1} & v_{1,2} \\ v_{2,1} & v_{2,2}\end{bmatrix} : v_{1,1} \in V_{2k+1},\ v_{1,2} \in (V_k)^{1\times(r-1)},\ v_{2,1} \in (V_k)^{(r-1)\times 1},\ v_{2,2} \in (\ell_0(\mathbb{Z}^s))^{(r-1)\times(r-1)}\right\}.$$
For any finite dimensional subspace $V$ of $(\ell_0(\mathbb{Z}^s))^{r\times r}$ such that $\mathcal{T}_{a,M}V \subseteq V$, it was proved in [24] that $\operatorname{spec}(\mathcal{T}_{a,M}|_V) \cup \{0\} = \operatorname{spec}(\mathcal{T}_{a,M}|_{V\cap(\ell(K))^{r\times r}}) \cup \{0\}$. So, for simplicity, in the following $\operatorname{spec}(\mathcal{T}_{a,M}|_V)$ always means $\operatorname{spec}(\mathcal{T}_{a,M}|_{V\cap(\ell(K))^{r\times r}})$. For $v \in V_{k,y}$, let $w$ denote the sequence given by $\hat{w}(\xi) = \hat{v}(\xi)\hat{v}(\xi)^*$. By induction, one has
$$\|a_n * v\|^2_{(\ell_2(\mathbb{Z}^s))^{r\times 1}} = \langle a_n * v, a_n * v\rangle = \operatorname{trace}\Big(\frac{1}{(2\pi)^s}\int_{[-\pi,\pi)^s} \widehat{a_n}(\xi)\hat{v}(\xi)\hat{v}(\xi)^* \widehat{a_n}(\xi)^*\, d\xi\Big) = \frac{1}{|\det M|^n}\operatorname{trace}\Big(\frac{1}{(2\pi)^s}\int_{[-\pi,\pi)^s} \widehat{\mathcal{T}_{a,M}^n w}(\xi)\, d\xi\Big).$$
For $w \in (\ell_0(\mathbb{Z}^s))^{r\times r}$ such that $\hat{w}(\xi) \ge 0$ (that is, $\hat{w}(\xi)$ is positive semidefinite), one has
$$\operatorname{trace}\Big(\int_{[-\pi,\pi)^s} \widehat{\mathcal{T}_{a,M}^n w}(\xi)\, d\xi\Big) \le \int_{[-\pi,\pi)^s} \big\|\widehat{\mathcal{T}_{a,M}^n w}(\xi)\big\|_{\ell_1}\, d\xi \le r \times \operatorname{trace}\Big(\int_{[-\pi,\pi)^s} \widehat{\mathcal{T}_{a,M}^n w}(\xi)\, d\xi\Big).$$
By the Cauchy–Schwarz inequality, we deduce that
$$\rho_k(a; M, 2, y) = \sup\big\{\lim_{n\to\infty}\|a_n * v\|^{1/n}_{(\ell_2(\mathbb{Z}^s))^{r\times 1}} : v \in V_{k,y}\big\} = \sqrt{\rho(\mathcal{T}_{a,M}|_{W_k})/|\det M|}.$$
In order to calculate $\rho(\mathcal{T}_{a,M}|_{W_k})$, we define three types of subspaces $U_j^1, U_j^2, U_j^3$ of $(\Pi_j)^{r\times r}$ by
$$U_j^1 := \left\{\begin{bmatrix} p & 0 \\ 0 & 0\end{bmatrix} : p \in \Pi_j\right\}, \qquad U_j^2 := \left\{\begin{bmatrix} 0 & 0 \\ p & 0\end{bmatrix} : p \in (\Pi_j)^{(r-1)\times 1}\right\}, \qquad j \in \mathbb{N}_0 \tag{7.4}$$
and $U_j^3 := \{p : p^T \in U_j^2\}$. Due to the special form of $\hat{a}(\xi)$ in (2.9) and (2.10), by a simple computation, it follows directly from (2.19) and (2.21) that
$$\mathcal{S}_{a,M}\begin{bmatrix} p & 0 \\ 0 & 0\end{bmatrix} = \begin{bmatrix} |\det M|\sum_{\beta,\gamma\in\mathbb{Z}^s} a_{1,1}^*(\cdot - M\gamma + \beta)p(\gamma)a_{1,1}(\beta) & 0 \\ 0 & 0\end{bmatrix} = \begin{bmatrix} S_{c,M}\,p & 0 \\ 0 & 0\end{bmatrix}, \qquad p \in \Pi_{2k+1},$$
where $c \in \ell_0(\mathbb{Z}^s)$ is given by $c(\alpha) = \sum_{\beta\in\mathbb{Z}^s} a_{1,1}(\alpha + \beta)a_{1,1}(\beta)$; that is, $\hat{c}(\xi) = |\hat{a}_{1,1}(-\xi)|^2$. Since $a_{1,1}$ satisfies the sum rules of order $k+1$, $c$ satisfies the sum rules of order $2k+2$. By 6) in Proposition 2.4, $S_{c,M}\,p - p(M^{-1}\cdot) \in \Pi_{\deg(p)-1}$ for all $p \in \Pi_{2k+1}$. Therefore, $S_{c,M}\,p \equiv p(M^{-1}\cdot) \bmod \Pi_{j-1}$ for all $p \in \Pi_j/\Pi_{j-1}$ and $j = 0, \ldots, 2k+1$. Consequently, $\operatorname{spec}(S_{c,M}|_{\Pi_j/\Pi_{j-1}}) = \operatorname{spec}(S(M^{-1}, O_j)) = \{\sigma^{-\mu} : |\mu| = j\}$, where $S(M^{-1}, O_j)$ is defined in (2.1). Hence,
$$\operatorname{spec}(\mathcal{S}_{a,M}|_{U_{2k+1}^1}) = \operatorname{spec}(S_{c,M}|_{\Pi_{2k+1}}) = \bigcup_{j=0}^{2k+1}\operatorname{spec}(S_{c,M}|_{\Pi_j/\Pi_{j-1}}) = \{\sigma^{-\mu} : |\mu| \le 2k+1\}.$$
By Proposition 2.4 and a simple computation, it follows from (2.19) and (2.21) that
$$\mathcal{S}_{a,M}\begin{bmatrix} 0 & 0 \\ p & 0\end{bmatrix} = |\det M|\begin{bmatrix} \sum_{\beta,\gamma\in\mathbb{Z}^s} a_{2,1}^*(\beta)p(\gamma)a_{1,1}(\beta - \cdot + M\gamma) & 0 \\ \sum_{\beta,\gamma\in\mathbb{Z}^s} a_{2,2}^*(\beta)p(\gamma)a_{1,1}(\beta - \cdot + M\gamma) & 0\end{bmatrix} = \begin{bmatrix} \sum_{\beta\in\mathbb{Z}^s} a_{2,1}^*(\beta)[S_{d,M}\,p](\cdot - \beta) & 0 \\ \sum_{\beta\in\mathbb{Z}^s} a_{2,2}^*(\beta)[S_{d,M}\,p](\cdot - \beta) & 0\end{bmatrix}, \qquad p \in (\Pi_k)^{(r-1)\times 1}, \tag{7.5}$$
where $d(\beta) = a_{1,1}(-\beta)$, $\beta \in \mathbb{Z}^s$. Since $a_{1,1}$ satisfies the sum rules of order $k+1$, so does $d$. By Proposition 2.5, for $p \in (\Pi_k)^{(r-1)\times 1}$, we have
$$\sum_{\beta\in\mathbb{Z}^s} a_{2,2}^*(\beta)[S_{d,M}\,p](\cdot - \beta) \equiv \sum_{\beta\in\mathbb{Z}^s} a_{2,2}^*(\beta)p(M^{-1}(\cdot - \beta)) \equiv \widehat{a_{2,2}}(0)^* p(M^{-1}\cdot) \bmod (\Pi_{\deg(p)-1})^{(r-1)\times 1}.$$
Consequently, the quotient space $(U_j^2 \oplus U_{2k+1}^1)/(U_{j-1}^2 \oplus U_{2k+1}^1)$ is invariant under $\mathcal{S}_{a,M}$ and for $j = 0, \ldots, k$,
$$\mathcal{S}_{a,M}\begin{bmatrix} 0 & 0 \\ p & 0\end{bmatrix} \equiv \begin{bmatrix} 0 & 0 \\ \widehat{a_{2,2}}(0)^* p(M^{-1}\cdot) & 0\end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 \\ p & 0\end{bmatrix} \in (U_j^2 \oplus U_{2k+1}^1)/(U_{j-1}^2 \oplus U_{2k+1}^1).$$
Now by $\operatorname{spec}(\widehat{a_{2,2}}(0)) = \operatorname{spec}(\hat{a}(0))\backslash\{1\}$, it is easy to verify that
$$\operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_j^2\oplus U_{2k+1}^1)/(U_{j-1}^2\oplus U_{2k+1}^1)}\big) = \{\lambda\eta : \lambda \in \operatorname{spec}(\widehat{a_{2,2}}(0)^*),\ \eta \in \operatorname{spec}(S(M^{-1}, O_j))\} = \{\lambda\sigma^{-\mu} : \lambda \in \operatorname{spec}(\hat{a}(0))\backslash\{1\},\ |\mu| = j\}.$$
Therefore, we have
$$\operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_k^2\oplus U_{2k+1}^1)/U_{2k+1}^1}\big) = \bigcup_{j=0}^k \operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_j^2\oplus U_{2k+1}^1)/(U_{j-1}^2\oplus U_{2k+1}^1)}\big) = \{\lambda\sigma^{-\mu} : \lambda \in \operatorname{spec}(\hat{a}(0))\backslash\{1\},\ |\mu| \le k\}.$$
Similarly, we have $\operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_k^3\oplus U_{2k+1}^1)/U_{2k+1}^1}\big) = \bigcup_{j=0}^k \operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_j^3\oplus U_{2k+1}^1)/(U_{j-1}^3\oplus U_{2k+1}^1)}\big) = \{\lambda\sigma^{-\mu} : \lambda \in \operatorname{spec}(\hat{a}(0))\backslash\{1\},\ |\mu| \le k\}$. Since, by the definition of $V_k$ in (2.18), $V_k = \Pi_k^\perp$, it is straightforward to see that $W_k = (U_{2k+1}^1 \oplus U_k^2 \oplus U_k^3)^\perp$. By the duality relation, we conclude that
$$\operatorname{spec}\big(\mathcal{T}_{a,M}|_{(\ell_0(\mathbb{Z}^s))^{r\times r}/W_k}\big) = \operatorname{spec}\big(\mathcal{S}_{a,M}|_{U_{2k+1}^1\oplus U_k^2\oplus U_k^3}\big) = \operatorname{spec}\big(\mathcal{S}_{a,M}|_{U_{2k+1}^1}\big) \cup \operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_k^2\oplus U_{2k+1}^1)/U_{2k+1}^1}\big) \cup \operatorname{spec}\big(\mathcal{S}_{a,M}|_{(U_k^3\oplus U_{2k+1}^1)/U_{2k+1}^1}\big) = E_k.$$
Using the vec operation as discussed in Lemma 2.2, it is easy to see that $\operatorname{spec}(\mathcal{T}_{a,M}|_{(\ell(K))^{r\times r}}) = \operatorname{spec}\big((b(M\alpha - \beta))_{\alpha,\beta\in K}\big)$. Therefore, $\operatorname{spec}(\mathcal{T}_{a,M}|_{W_k}) = \operatorname{spec}\big((b(M\alpha - \beta))_{\alpha,\beta\in K}\big)\backslash E_k$, which completes the proof.

The above proof can be carried out similarly by using $\mathcal{T}_{a,M}$ directly instead of $\mathcal{S}_{a,M}$ (see [21]). The above proof can also be easily adapted to take into account the symmetry of the mask. For computing $\nu_2(a; M)$ for scalar masks by taking symmetry into account to significantly reduce the size of the problem, see [21].

One way of computing the set $K$ in Theorem 7.1 is as follows. Choose any initial finite subset $K_0$ of $\mathbb{Z}^s$ such that $K \subseteq K_0 \subseteq \mathbb{Z}^s$. Recursively define $K_j := K_{j-1} \cap M^{-1}(K_{j-1} + \operatorname{supp} b)$, $j \in \mathbb{N}$. Then there must exist some $j$ such that $K_j = K_{j-1}$.
An easy argument shows that $K = K_j$. For more detail, see [21, Proposition 2.2]. From the proof of Theorem 7.1, we observe that $K$ can be replaced by any finite subset $K_0$ of $\mathbb{Z}^s$ such that $M^{-1}(K_0 + \operatorname{supp} b) \cap \mathbb{Z}^s \subseteq K_0$ and, for every $0 \le j \le k$, there is a subset $B_j$ of $(\ell(K_0))^{r\times 1}$ such that $B_j$ generates $V_{j,y}$; that is, $\operatorname{span}\{v(\cdot - \beta) : v \in B_j,\ \beta \in \mathbb{Z}^s\} = V_{j,y}$.

In the univariate case, one can compute $\nu_2(a; M)$ by factorizing the symbol of a mask (see [7, 41, 44]) as follows.

Proposition 7.2 Let $M$ be an integer such that $|M| > 1$. Let $a$ be a matrix mask on $\mathbb{Z}$ such that $a$ satisfies the sum rules of order $k+1$ in (2.7) and (2.8) with some sequence $y \in (\ell_0(\mathbb{Z}))^{1\times r}$. Let $U_y$ be given in Proposition 2.4 so that $\widehat{U_y}(M\xi)^{-1}\hat{a}(\xi)\widehat{U_y}(\xi)$ takes the form of (2.9). Define a new sequence $b$ by
$$\hat{b}(\xi) := \begin{bmatrix}(1 - e^{-iM\xi})^{k+1} & 0 \\ 0 & I_{r-1}\end{bmatrix}^{-1} \widehat{U_y}(M\xi)^{-1}\hat{a}(\xi)\widehat{U_y}(\xi)\begin{bmatrix}(1 - e^{-i\xi})^{k+1} & 0 \\ 0 & I_{r-1}\end{bmatrix}. \tag{7.6}$$
Then $b$ is a finitely supported sequence on $\mathbb{Z}$ and
$$\rho_k(a; M, p, y) = \rho_{-1}(b; M, p, 0) := \lim_{n\to\infty}\|b_n\|^{1/n}_{(\ell_p(\mathbb{Z}))^{r\times r}},$$
where $\widehat{b_n}(\xi) = \hat{b}(M^{n-1}\xi)\cdots\hat{b}(M\xi)\hat{b}(\xi)$. Moreover, $\rho_k(a; M, 2, y) = \sqrt{\rho_k/|\det M|}$, where $\rho_k$ is the spectral radius of $\big(\sum_{\beta\in\mathbb{Z}} b(\beta)\otimes b(\alpha+\beta)\big)_{\alpha,\beta\in K}$ and $K := \mathbb{Z} \cap \sum_{j=1}^\infty |M|^{-j}(\operatorname{supp} a - \operatorname{supp} a)$.

Proof: By Proposition 2.4, it suffices to prove it for the case $\hat{y}(\xi) = [\hat{y}_1(\xi), 0, \ldots, 0]$ and $\widehat{U_y}(\xi) = I_r$. By (2.9) and (2.10), $b$ must be a finitely supported sequence. Let $w$ be given by $\hat{w}(\xi) = \operatorname{diag}\big((1 - e^{-i\xi})^{k+1}, I_{r-1}\big)$. We observe that $\{we_j : j = 1, \ldots, r\} = \mathcal{B}_{k,y}$ generates $V_{k,y}$ and $\hat{a}(\xi) = \hat{w}(M\xi)\hat{b}(\xi)\hat{w}(\xi)^{-1}$. Consequently, we have
$$\rho_k(a; M, p, y) = \lim_{n\to\infty}\|a_n * w\|^{1/n}_{(\ell_p(\mathbb{Z}))^{r\times r}}$$
and
$$\widehat{a_n * w}(\xi) = \widehat{a_n}(\xi)\hat{w}(\xi) = \hat{w}(M^n\xi)\widehat{b_n}(\xi), \qquad n \in \mathbb{N}.$$
Since $a$, $b$ and $w$ are all finitely supported, we assume that they are supported on $[-N, N]$ for some positive integer $N$. Define $\widehat{c_n}(\xi) := \sum_{j=0}^{3N-1}\operatorname{diag}\big(e^{-iM^n j\xi}, I_{r-1}\big)\widehat{a_n * w}(\xi)$. Note that
$$\widehat{b_n}(\xi) = \hat{w}(M^n\xi)^{-1}\widehat{a_n * w}(\xi) = \sum_{j=0}^\infty \operatorname{diag}\big(e^{-iM^n j\xi}, I_{r-1}\big)\widehat{a_n * w}(\xi).$$
It is easy to see that $b_n$ vanishes outside $[-M^nN, M^nN]$ and $b_n(\beta) = c_n(\beta)$ for all $\beta \in \mathbb{Z} \cap [-M^nN, M^nN]$. Therefore, we have
$$(3N)^{-1}\|b_n\|_{(\ell_p(\mathbb{Z}))^{r\times r}} \le (3N)^{-1}\|c_n\|_{(\ell_p(\mathbb{Z}))^{r\times r}} \le \|a_n * w\|_{(\ell_p(\mathbb{Z}))^{r\times r}} \le \|w\|_{(\ell_1(\mathbb{Z}))^{r\times r}}\|b_n\|_{(\ell_p(\mathbb{Z}))^{r\times r}}.$$
Consequently,
$$\rho_k(a; M, p, y) = \lim_{n\to\infty}\|a_n * w\|^{1/n}_{(\ell_p(\mathbb{Z}))^{r\times r}} = \lim_{n\to\infty}\|b_n\|^{1/n}_{(\ell_p(\mathbb{Z}))^{r\times r}},$$
which completes the proof.

Acknowledgment. The author would like to thank Professor Rong-Qing Jia at the University of Alberta for several discussions that motivated this work.
References

[1] C. de Boor, R. DeVore, and A. Ron, Approximation orders of FSI spaces in L2(Rd), Constr. Approx., (1998), 631–652.
[2] C. Cabrelli, C. Heil, and U. Molter, Accuracy of lattice translates of several multidimensional refinable functions, J. Approx. Theory, 95 (1998), 5–52.
[3] A. S. Cavaretta, W. Dahmen, and C. A. Micchelli, Stationary subdivision, Mem. Amer. Math. Soc., 453 (1991), American Math. Soc., Providence.
[4] D. R. Chen, R. Q. Jia, and S. D. Riemenschneider, Convergence of vector subdivision schemes in Sobolev spaces, Appl. Comput. Harmon. Anal., 12 (2002), 128–149.
[5] D. R. Chen and G. Plonka, Convergence of cascade algorithms in Sobolev spaces for perturbed refinement masks, J. Approx. Theory, to appear.
[6] D. R. Chen and X. Zhang, Stability implies convergence of cascade algorithms in Sobolev spaces, J. Math. Anal. Appl., to appear.
[7] A. Cohen, I. Daubechies, and G. Plonka, Regularity of refinable function vectors, J. Fourier Anal. Appl., 3 (1997), 295–324.
[8] A. Cohen, K. Gröchenig, and L. Villemoes, Regularity of multivariate refinable functions, Constr. Approx., 15 (1999), 241–255.
[9] W. Dahmen and C. A. Micchelli, Biorthogonal wavelet expansions, Constr. Approx., 13 (1997), 293–328.
[10] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Series in Applied Mathematics, SIAM, Philadelphia, 1992.
[11] I. Daubechies and Y. Huang, How does truncation of the mask affect a refinable function?, Constr. Approx., 11 (1995), 365–380.
[12] I. Daubechies and J. C. Lagarias, Two-scale difference equations: II. Local regularity, infinite products of matrices and fractals, SIAM J. Math. Anal., 23 (1992), 1031–1079.
[13] N. Dyn and D. Levin, Analysis of Hermite-interpolatory subdivision schemes, in S. Dubuc and G. Deslauriers, editors, Spline Functions and the Theory of Wavelets, CRM Proceedings and Lecture Notes 18, pages 105–113, 1999.
[14] N. Dyn and D. Levin, Subdivision schemes in geometric modeling, Acta Numerica, to appear.
[15] J. S. Geronimo, D. P. Hardin, and P. R. Massopust, Fractal functions and wavelet expansions based on several scaling functions, J. Approx. Theory, 78 (1994), 373–401.
[16] T. N. T. Goodman and S. L. Lee, Convergence of cascade algorithms, in Mathematical Methods for Curves and Surfaces II, pages 191–212, Innov. Appl. Math., Vanderbilt Univ. Press, Nashville, 1998.
[17] B. Han, Error estimate of a subdivision scheme with a truncated refinement mask, unpublished manuscript, (1997).
[18] B. Han, Subdivision schemes, biorthogonal wavelets and image compression, PhD thesis, University of Alberta, 1998.
[19] B. Han, Analysis and construction of optimal multivariate biorthogonal wavelets with compact support, SIAM J. Math. Anal., 31 (2000), 274–304.
[20] B. Han, Approximation properties and construction of Hermite interpolants and biorthogonal multiwavelets, J. Approx. Theory, 110 (2001), 18–53.
[21] B. Han, Computing the smoothness exponent of a symmetric multivariate refinable function, SIAM J. Matrix Anal. Appl., to appear.
[22] B. Han, The initial functions in a cascade algorithm, Proceedings of International Conference of Computational Harmonic Analysis in Hong Kong (D. X. Zhou, ed.), (2002), to appear.
[23] B. Han and T. A. Hogan, How is a vector pyramid scheme affected by perturbation in the mask?, in C. K. Chui and L. L. Schumaker, editors, Approximation Theory IX, pages 97–104, Vanderbilt University Press, TN, 1998.
[24] B. Han and R. Q. Jia, Multivariate refinement equations and convergence of subdivision schemes, SIAM J. Math. Anal., 29 (1998), 1177–1199.
[25] B. Han, T. P.-Y. Yu, and B. Piper, Multivariate refinable Hermite interpolants, preprint, (2002).
[26] C. Heil and D. Colella, Matrix refinement equation: existence and uniqueness, J. Fourier Anal. Appl., 2 (1996), 363–377.
[27] C. Heil, G. Strang, and V. Strela, Approximation by translates of refinable functions, Numer. Math., 73 (1996), 75–94.
[28] R. Q. Jia, Subdivision schemes in Lp spaces, Adv. Comput. Math., 3 (1995), 309–341.
[29] R. Q. Jia, Approximation properties of multivariate wavelets, Math. Comp., 67 (1998), 647–665.
[30] R. Q. Jia, Characterization of smoothness of multivariate refinable functions in Sobolev spaces, Trans. Amer. Math. Soc., 351 (1999), 4089–4112.
[31] R. Q. Jia, Approximation with scaled shift-invariant spaces by means of quasi-projection operator, preprint, (2002).
[32] R. Q. Jia and Q. T. Jiang, Approximation power of refinable vector of functions, in D. Deng, D. Huang, R. Q. Jia, W. Lin, and J. Wang, editors, Proceedings of an International Conference on Wavelet Analysis and its Applications, AMS/IP Studies in Advanced Mathematics, pages 155–178, 2002.
[33] R. Q. Jia and Q. T. Jiang, Spectral analysis of the transition operators and its applications to smoothness analysis of wavelets, SIAM J. Matrix Anal. Appl., to appear.
[34] R. Q. Jia, Q. T. Jiang, and S. L. Lee, Convergence of cascade algorithms in Sobolev spaces and integral of wavelets, Numer. Math., to appear.
[35] R. Q. Jia and C. A. Micchelli, On linear independence of integer translates of a finite number of functions, Proc. Edinburgh Math. Soc., (1992), 69–85.
[36] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou, Vector subdivision schemes and multiple wavelets, Math. Comp., 67 (1998), 1533–1563.
[37] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou, Smoothness of multiple refinable functions and multiple wavelets, SIAM J. Matrix Anal. Appl., 21 (1999), 1–28.
[38] Q. T. Jiang, Multivariate matrix refinable functions with arbitrary matrix dilation, Trans. Amer. Math. Soc., 351 (1999), 2407–2438.
[39] W. Lawton, S. L. Lee, and Z. W. Shen, Convergence of multidimensional cascade algorithm, Numer. Math., 78 (1998), 427–438.
[40] J. L. Merrien, A family of Hermite interpolants by bisection algorithms, Numer. Algorithms, 2 (1992), 187–200.
[41] C. A. Micchelli and T. Sauer, Regularity of multiwavelets, Adv. Comput. Math., 7 (1997), 455–545.
[42] C. A. Micchelli and T. Sauer, On vector subdivision, Math. Z., 229 (1998), 621–674.
[43] C. A. Micchelli and T. Sauer, Sobolev norm convergence of stationary subdivision schemes, in Surface Fitting and Multiresolution Methods, pages 245–260, Vanderbilt Univ. Press, Nashville, TN, 1997.
[44] G. Plonka and A. Ron, A new factorization technique of the matrix mask of univariate refinable functions, Numer. Math., 87 (2001), 555–595.
[45] A. Ron, Smooth refinable functions provide good approximation orders, SIAM J. Math. Anal., 28 (1997), 731–748.
[46] A. Ron and Z. W. Shen, The Sobolev regularity of refinable functions, J. Approx. Theory, 106 (2000), 185–225.
[47] Z. W. Shen, Refinable function vectors, SIAM J. Math. Anal., 29 (1998), 235–250.
[48] T. Yu, Parametric families of Hermite subdivision schemes in dimension 1, preprint.
[49] D. X. Zhou, Multiple refinable Hermite interpolants, J. Approx. Theory, 102 (2000), 46–71.
[50] D. X. Zhou, Norms concerning subdivision sequences and their applications in wavelets, Appl. Comput. Harmon. Anal., 11 (2001), 329–346.