
Proceedings of the Project Review, Geo-Mathematical Imaging Group (Purdue University, West Lafayette IN), Vol. 1 (2009) pp. 27-48.

AAK-THEORY ON WEIGHTED SPACES

MARCUS CARLSSON∗

Abstract. We extend some of the results of classical AAK-theory to Hankel operators on certain weighted spaces, thereby providing a constructive proof of a theorem by S. Treil and A. Volberg.

1. Introduction. “AAK-theory” is short for the results on Hankel operators on the Hardy space $H^2$ proved in a sequence of papers by V. M. Adamyan, D. Z. Arov and M. G. Krein around 1970 (see [1], [2], [3], [4]). In this paper we are concerned with an extension of the theorem on best rational approximation of a function in $L^\infty/H^\infty$. The introduction is split into four parts: the first is a very brief introduction to Hankel operators, the second goes through the classical AAK-theory, the third introduces the weighted spaces we will be working with, and in the fourth we state the new results obtained in this paper.

1.1. Classical Hankel operators. Let $H^2$ denote the Hardy space on the unit disc $\mathbb{D}$, which we consider as a subspace of $L^2$ on the unit circle $\mathbb{T}$ with normalized arc-length measure. That is, $H^2 = \{f \in L^2 : \hat f_n = 0 \ \forall n < 0\}$, where $\hat f_n$ denotes the Fourier coefficients of $f$. Set $H^2_- = L^2 \ominus H^2$. Let $z$ denote the identity function on $\mathbb{T}$ and $M_z$ the operator of multiplication by $z$ on $L^2$, i.e. $M_z f = zf$. Classical Hankel operators were defined as bounded operators on $l^2(\mathbb{Z}_+)$ having the following Hankel matrix representation

(1.1)   $(\gamma_{i+j})_{i,j\in\mathbb{Z}_+} = \begin{pmatrix} \gamma_0 & \gamma_1 & \gamma_2 & \cdots \\ \gamma_1 & \gamma_2 & \gamma_3 & \cdots \\ \gamma_2 & \gamma_3 & \gamma_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad \gamma_n \in \mathbb{C},$

in the standard basis $e_0 = (1, 0, 0, \ldots)$, $e_1 = (0, 1, 0, \ldots)$, etc. It is easy to see that a bounded operator $\Gamma$ is Hankel if and only if it satisfies the commutation relation

(1.2)   $\Gamma S = S^*\Gamma,$

where $S$ is the unilateral shift (i.e. $Se_n = e_{n+1}$ for all $n \in \mathbb{Z}_+$). Another common definition is that $\Gamma : H^2 \to H^2_-$ is a Hankel operator if it satisfies the commutation relation

(1.3)   $\Gamma M_z = P_- M_z \Gamma,$

where $P_- : L^2 \to H^2_-$ denotes the orthogonal projection onto $H^2_-$. It is not hard to see that this is satisfied if and only if $\Gamma$ has a Hankel matrix in the bases $\{z^n\}_{n=0}^\infty$ and $\{z^{-n}\}_{n=1}^\infty$. The second, more cumbersome, definition has become the popular one, mainly due to the deep connections between Hankel operators and the structure of the space $H^2_-$ (considered as a space of analytic functions in the unit disc $\mathbb{D}$). For example, setting $\phi(z) = \sum_{n=0}^\infty \gamma_n z^{-n-1}$ we have that

$\Gamma f = P_-(\phi f).$

Any function $\psi$ such that $P_-\psi = \sum_{n=0}^\infty \gamma_n z^{-n-1}$ is called a symbol for $\Gamma$. Let $M_\psi : L^2 \to L^2$ be the operator of multiplication by $\psi$, i.e. $M_\psi f = \psi f$. Nehari's theorem states that a Hankel operator $\Gamma$ is bounded if and only if there is a symbol $\psi \in L^\infty$ such that $\Gamma = P_- M_\psi$, and moreover

(1.4)   $\|\Gamma\| = \|\psi\|_{L^\infty/H^\infty}.$

∗Department of Mathematics, Purdue University, West Lafayette, IN, 47907.

This settles the issue of when a Hankel matrix is bounded. Using the celebrated characterization of $BMO$ by Fefferman, this also leads to the theorem that the Hankel matrix $(\gamma_{i+j})$ is bounded if and only if

$\sum_{n=0}^\infty \gamma_n z^n \in BMO.$
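The Hankel structure and the commutation relation (1.2) are easy to observe on finite truncations. The following Python sketch is purely illustrative (not from the paper; `numpy` assumed, coefficient choice arbitrary): for a finitely supported coefficient sequence, a sufficiently large truncation satisfies (1.2) exactly.

```python
import numpy as np

def hankel_from(gamma, n):
    """n x n matrix (gamma_{i+j}), with gamma padded by zeros."""
    g = np.concatenate([gamma, np.zeros(2 * n)])
    return np.array([[g[i + j] for j in range(n)] for i in range(n)])

gamma = np.array([1.0, -0.5, 0.25, 0.125])   # finitely supported coefficients
n = 2 * len(gamma)                           # truncation large enough for exactness
G = hankel_from(gamma, n)

S = np.diag(np.ones(n - 1), -1)              # unilateral shift: S e_k = e_{k+1}

# Hankel matrices are symmetric (entries depend only on i + j) ...
assert np.allclose(G, G.T)
# ... and the commutation relation (1.2), Gamma S = S* Gamma, holds exactly:
assert np.allclose(G @ S, S.T @ G)
```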

1.2. Singular values and the AAK-theorem. Given any operator $\Gamma \in \mathcal{L}(X, \tilde X)$ we define its singular values as

(1.5)   $\sigma_n(\Gamma) = \inf\{\|\Gamma - K\| : \operatorname{Rank} K \le n\},$

or equivalently

(1.6)   $\sigma_n(\Gamma) = \inf\{\|\Gamma|_{\mathcal M}\| : \mathcal M \le X \text{ and } \operatorname{codim} \mathcal M = n\},$

where $\mathcal M \le X$ means that $\mathcal M$ is a subspace and $\Gamma|_{\mathcal M}$ denotes the restriction of $\Gamma$ to $\mathcal M$. Also set $\sigma_\infty(\Gamma) = \lim_{n\to\infty} \sigma_n(\Gamma)$ and let $\|\Gamma\|_e$ denote the essential norm of $\Gamma$. When there is no risk of confusion we will let the dependence of the singular values on $\Gamma$ be implicit. Clearly $\|\Gamma\| = \sigma_0$ and $\|\Gamma\|_e = \sigma_\infty$. A vector $u \in X$ will be called a $\sigma$-singular vector if $\|u\| = 1$ and $\sigma^2 u = \Gamma^*\Gamma u$. Let $N$ be the number of $\sigma_n$'s such that $\sigma_n > \sigma_\infty$. An application of the spectral theorem shows that there is an orthonormal set $\{u_n\}_{n=0}^N \subset X$ such that each $u_n$ is a $\sigma_n$-singular vector and $\|\Gamma|_{\{u_n\}^\perp}\| = \|\Gamma\|_e$. If a $\sigma_n$ is distinct (i.e. $\sigma_{n-1} > \sigma_n > \sigma_{n+1}$), then $u_n$ is unique up to multiplication by a unimodular constant.

This paper is concerned with generalizations of the following celebrated result by Adamyan, Arov and Krein.

Theorem 1.1 (AAK). Let $\Gamma : H^2 \to H^2_-$ be a Hankel operator and let $\sigma_n$ be its $n$'th singular value. Then there is a rank $n$ Hankel operator $K$ such that $\sigma_n = \|\Gamma - K\|$. If $\sigma_n$ is distinct, then $K$ is unique.

It is easy to see that a rank 1 Hankel operator necessarily has the following form (up to a multiplicative constant)

(1.7)   $K = \begin{pmatrix} 1 & z_0 & z_0^2 & \cdots \\ z_0 & z_0^2 & z_0^3 & \cdots \\ z_0^2 & z_0^3 & z_0^4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad |z_0| < 1,$

but it is not true that any finite rank Hankel operator is a sum of such. In fact, a symbol for the above rank one Hankel operator is easily seen to be $(1 - z_0 z)^{-1}$ and, in general, any rank $n$ Hankel operator has a symbol of the form $r(z)$, where $r$ is a rational function with degree¹ $n$ whose poles all lie in $\mathbb{C} \setminus \mathbb{D}$. In terms of applications, the power of the AAK-theorem comes from the fact that the location of these poles can be easily calculated using the singular vectors. For simplicity, let us

¹If $r = p/q$ where $p$ and $q$ are polynomials with no common divisors, then the degree of $r$ is defined as $\max(\deg p, \deg q)$.


assume that $\sigma_n$ is distinct and denote the corresponding singular vector by $u_n$. The proof of Theorem 1.1 then shows that $u_n$ has precisely $n$ roots $(z_j)_{j=1}^n$, counted with multiplicity, and that the poles of the rational function $r$ in Theorem 1.1 are located at $(1/z_j)_{j=1}^n$, again counted with multiplicity. In particular, if $u_n$ has distinct zeroes, then the best rank $n$ Hankel approximant of $\Gamma$ is a sum of $n$ matrices of the form (1.7) with $z_0$ replaced by $z_j$, $j = 1, \ldots, n$. Using Beurling's and Nehari's theorems, a short argument shows that the AAK-theorem is equivalent to the following result.

Theorem 1.2. Let $\Gamma : H^2 \to H^2_-$ be a Hankel operator and let $\sigma_n$ be its $n$'th singular value. Then there is an $M_z$-invariant subspace $\mathcal M \subset H^2$ such that $\operatorname{codim} \mathcal M = n$ and $\|\Gamma|_{\mathcal M}\| = \sigma_n$. If $\sigma_n$ is distinct, then $\mathcal M$ is unique.

In the case that $\sigma_n$ is distinct, it follows from the proof that

(1.8)   $\mathcal M = \operatorname{cl}(\operatorname{Span}\{z^k u_n : k \in \mathbb{Z}_+\}),$

so by Beurling's theorem it follows that $\mathcal M$ is completely determined by the zeroes of $u_n$. We will denote the right hand side of (1.8) by $[u_n]_{M_z}$. S. Treil and A. Volberg have recently shown that Theorem 1.2 can be extended to certain generalized Hankel operators, which we introduce in the next section. Their proof relies on a fixed point lemma by Ky Fan, and hence it is not clear whether (1.8) holds, which in terms of applications is important as it provides a way to actually calculate $\mathcal M$. The purpose of the present paper is to show that (1.8) indeed holds for a large class of “generalized Hankel operators”.

1.3. Generalized Hankel operators. Recall the two definitions (1.1) and (1.3) of a Hankel operator. Most extensions of the concept of a Hankel operator are based on the latter definition (1.3). To make the paper more accessible to non-specialists, we have chosen to adopt a definition closer to (1.1). We provide a brief account of various general definitions of Hankel operators and their interconnections in Section 2.

Given a Hilbert space $X$, let $\mathcal L(X)$ denote the set of bounded operators on $X$. In this paper we shall primarily be concerned with infinite dimensional Hilbert spaces $X$ such that:
(i) $S \in \mathcal L(X)$ is an injective operator with closed range and $\operatorname{codim} \operatorname{Ran} S = 1$,
(ii) there exists a fixed non-zero element $x_0 \in X$ that is $S$-cyclic, i.e. $[x_0]_S = X$, where $[x_0]_S = \operatorname{cl}(\operatorname{Span}\{S^n x_0 : n \ge 0\})$ and $\operatorname{cl}(\cdot)$ denotes the closure,
(iii) $B$ is a left inverse of $S$ such that $B(x_0) = 0$.

Remarks: 1. Note that the unilateral shift on $l^2(\mathbb{Z}_+)$ is such an operator, that $x_0 = e_0$ is cyclic, $S^* e_0 = 0$ and that $S^*S = I$. Thus $S^*$ can either be considered as the (unilateral) backward shift or the left inverse of $S$ that annihilates $x_0$.
2. When working with two Hilbert spaces $X$ and $\tilde X$ that both satisfy (i)-(iii), we shall denote the objects $S$, $B$ etc. that belong to $\tilde X$ by $\tilde S$, $\tilde B$ etc.

Given $X$ satisfying (i)-(iii), set $x_n = S^n x_0$, $n \ge 0$. It is easy to see that the sequence $(x_n)_{n=0}^\infty$ is minimal, i.e. $x_n \notin \operatorname{cl}(\operatorname{Span}\{x_m\}_{m \ne n})$, and therefore the action of any operator in $\mathcal L(X)$ is uniquely determined by how it acts on $(x_n)_{n=0}^\infty$.

Definition 1.3. Given Hilbert spaces $X$ and $\tilde X$ satisfying (i)-(iii), an operator $\Gamma \in \mathcal L(X, \tilde X)$ is called Hankel if it is represented by a Hankel matrix with respect to $(x_n)_{n=0}^\infty$ and $(\tilde x_n)_{n=0}^\infty$.

In analogy with (1.2), it is easy to see that $\Gamma \in \mathcal L(X, \tilde X)$ is Hankel if and only if

(1.9)   $\Gamma S = \tilde B \Gamma.$
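Condition (1.9) can be illustrated on finite truncations: a Hankel matrix intertwines the coordinate forward shift with the coordinate backward shift $B$ regardless of the inner products, whereas the adjoint $S^*$ does depend on the weights. A hypothetical Python sketch (the diagonal weight matrix below is an arbitrary choice):

```python
import numpy as np

n = 8
gamma = np.array([2.0, 1.0, 0.5])                 # gamma_k = 0 for k >= 3
g = np.concatenate([gamma, np.zeros(2 * n)])
G = np.array([[g[i + j] for j in range(n)] for i in range(n)])

S = np.diag(np.ones(n - 1), -1)                   # S e_k = e_{k+1}
B = np.diag(np.ones(n - 1), +1)                   # left inverse of S with B e_0 = 0

# The Hankel condition (1.9): Gamma S = B Gamma (exact thanks to zero padding).
assert np.allclose(G @ S, B @ G)

# In a weighted space l_W the adjoint of S is W^{-1} S' W, which differs
# from B unless the weights are constant (here W = diag(1, 2, ..., n)).
W = np.diag(np.arange(1.0, n + 1.0))
S_star = np.linalg.inv(W) @ S.T @ W
assert not np.allclose(S_star, B)
```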

We provide some examples of spaces that satisfy (i)-(iii).

Example 1.4. Let $X$ be a Hilbert space of analytic functions on some domain (typically $R\mathbb{D}$ where $R > 0$) in which $Pol$ is dense and $(z^n)_{n=0}^\infty$ is a minimal sequence. Let $M_z$ denote multiplication by the independent variable $z$ and set $S = M_z$. If $M_z$ is bounded below, then $X$ satisfies (i)-(iii) with $x_0 = 1$. Examples of such spaces include:
• $H^2(R\mathbb{D})$: Let $R > 0$ and set

$H^2(R\mathbb{D}) = \{f \in \operatorname{Hol}(R\mathbb{D}) : \|f\|^2 = \sum_{n \ge 0} |\hat f_n|^2 R^{2n} < \infty\}.$

(Here $\operatorname{Hol}(R\mathbb{D})$ denotes the holomorphic functions on $R\mathbb{D}$ and $\hat f_n$ denotes the Taylor coefficients of such a function at 0.)
• $D^2(w)$: Let $w = (w_n)_{n=0}^\infty$ be a strictly positive sequence of weights such that $\lim w_{n+1}/w_n = R^2 < \infty$ for some $R > 0$, and define

$D^2(w) = \{f \in \operatorname{Hol}(R\mathbb{D}) : \|f\|^2 = \sum_{n \ge 0} |\hat f_n|^2 w_n < \infty\}.$

Clearly $H^2(R\mathbb{D}) = D^2((R^{2n})_{n\in\mathbb{Z}_+})$. If $w = ((1+n))_{n\in\mathbb{Z}_+}$, then $D^2(w)$ is the Dirichlet space.
• $H^2(w)$: Let $w \in L^1(\mathbb{T})$ be a Helson-Szegő weight (see [16]) and define $H^2(w)$ as the closure of the polynomials in

$L^2(w) = \{f : \|f\|^2 = \int |f|^2 w \, dm < \infty\}.$

Here $M_z$ is isometric and the sequence $(z^n)$ is minimal by the Helson-Szegő theorem. Note that if we attempt to define $H^2(w)$ as above for a weight $w$ with $\int |\log w| \, dm = \infty$, then, by Szegő's theorem, $(z^n)$ does not become minimal. In fact, we get $H^2(w) = L^2(w)$.

All the above examples are Hilbert spaces of analytic functions on some disc $R\mathbb{D}$, and in addition they satisfy
(iv) $\sigma(S) = \operatorname{cl}(R\mathbb{D})$, $S - \lambda$ is bounded below and $\operatorname{codim} \operatorname{Ran}(S - \lambda) = 1$ for all $\lambda \in R\mathbb{D}$.

Definition 1.5. A Hilbert space $\mathcal H$ will be called a Hilbert space of analytic functions on $R\mathbb{D}$ if its elements are analytic functions on $R\mathbb{D}$ and (i)-(iv) hold with $x_0 = 1$ and $S = M_z$.

Some of our results are more natural to present for Hilbert spaces of analytic functions than for spaces satisfying (i)-(iv). By a modification of the proof in [8] it follows that any Hilbert space $X$ that satisfies (i)-(iv) can be represented as a Hilbert space of analytic functions, i.e. there exists a unitary map $\mathcal U : X \to \mathcal H$ such that $\mathcal U(x_0) = 1$ and $\mathcal U S = M_z \mathcal U$. We omit a proof of this result. Note that then $\mathcal U x_n = z^n$.

1.4. An AAK-theorem for generalized Hankel operators. An operator $T$ on some Hilbert space $X$ is called expansive if $\|Tx\| \ge \|x\|$ for all $x \in X$ and contractive if $\|T\| \le 1$. We will in this paper show the following results.

Theorem 1.6. Let $X$, $\tilde X$ be spaces satisfying (i)-(iii) such that $S$ is expansive and $\tilde B$ is contractive. Let $\Gamma : X \to \tilde X$ be a Hankel operator with singular values $\sigma_0 \ge \sigma_1 \ge \ldots$. Let $u_N \in X$ be a singular vector with singular value $\sigma_N$. Then $\|\Gamma|_{[u_N]_S}\| = \sigma_N$.

In particular, we have

Corollary 1.7. Let $X$, $\tilde X$ and $\Gamma$ be as in Theorem 1.6. Let $u_N \in X$ be a singular vector with singular value $\sigma_N$ and assume that $\sigma_{N-1} > \sigma_N$. Then $\operatorname{codim}\, [u_N]_S \ge N$.

We will now discuss circumstances under which we have equality in the above corollary. First, some definitions.

Definition 1.8. Given a Hilbert space $\mathcal H$ of analytic functions on $R\mathbb{D}$ and an element $u \in \mathcal H$, let $(z_j)_{j=1}^N$ denote its zeroes and let $n_j \in \mathbb{N}$ denote the corresponding multiplicities. We define $Z_R(u)$ to be the set of pairs $\{(z_j, n_j)\}_{j=1}^N$ and $\#(Z_R(u)) = \sum n_j$. Finally we define

$\mathcal M(Z_R(u)) = \{v \in \mathcal H : z_j \text{ is a zero of } v \text{ of multiplicity at least } n_j, \ \forall j = 1, \ldots, N\}.$
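For a polynomial $u$, the quantity $\#(Z_R(u))$ can be computed by counting roots of modulus less than $R$ with multiplicity (roots on $R\mathbb{T}$ are simply ignored in this rough sketch). An illustrative Python fragment, with an arbitrarily chosen polynomial:

```python
import numpy as np

def count_Z_R(coeffs, R):
    """#(Z_R(u)) for a polynomial u given by its Taylor coefficients
    [u_0, u_1, ...]; roots on the circle |z| = R are excluded here."""
    roots = np.roots(coeffs[::-1])      # np.roots expects highest degree first
    return int(np.sum(np.abs(roots) < R))

# u(z) = (z - 0.5)(z - 2) = 1 - 2.5 z + z^2, zeroes at 0.5 and 2
u = np.array([1.0, -2.5, 1.0])
assert count_Z_R(u, R=1.0) == 1         # only z = 0.5 lies in the unit disc
assert count_Z_R(u, R=3.0) == 2         # both zeroes lie in 3*D
```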


Note that

(1.10)   $\operatorname{codim} \mathcal M(Z_R(u)) = \#(Z_R(u)),$

and that, at least whenever $u$ is a polynomial with no zeroes on $R\mathbb{T}$, we have

(1.11)   $\mathcal M(Z_R(u)) = [u]_{M_z}.$

We omit a proof of this fact, but note that (1.11) holds for all $u$ in $H^2(R\mathbb{D})$ by Beurling's theorem. Let $R \ge 1$ and let the space $X$ in the preceding corollary be a Hilbert space of analytic functions on $R\mathbb{D}$ such that (1.11) holds. If, for some reason, $u_N$ always satisfies

(1.12)   $\#Z_R(u_N) = N$

whenever $\sigma_N$ is distinct, then it follows by Theorem 1.6, (1.6) and (1.10) that $\|\Gamma|_{\mathcal M(Z_R(u_N))}\| = \sigma_N$, which would generalize Theorem 1.2 and provide a constructive proof of the result by S. Treil and A. Volberg for the particular space $X$ under consideration. In numerical experiments with finite Hankel operators on $D^2(w)$-spaces, using a wide range of increasing weights $w$, we have not found one example of a singular vector $u_N$ that does not satisfy (1.12). Unfortunately, we can at present only prove this for the case $X = H^2(R\mathbb{D})$.

Theorem 1.9. Let $R \ge 1$, let $\tilde X$ satisfy (i)-(iii) and assume that $\tilde B$ is a contraction. Let $\Gamma : H^2(R\mathbb{D}) \to \tilde X$ be a Hankel operator such that $\Gamma(z^{n+1}) = 0$ for some $n \in \mathbb{N}$. Let $\sigma_N > 0$ be a fixed non-zero singular value of $\Gamma$ with multiplicity 1. Then there exists a $\sigma_N$-singular vector $u_N$ such that $\#(Z_R(u_N)) = N$ and $\|\Gamma|_{\mathcal M(Z_R(u_N))}\| = \sigma_N$.

Numerical experiments indicate that the above theorem fails to hold whenever $R < 1$. We include an example to support this conclusion.

Example 1.10. Let $\Gamma : H^2(R\mathbb{D}) \to H^2(\mathbb{D})$ be given by

$\Gamma = \begin{pmatrix} 4 & 3 & 2 & 1 & 0 & \cdots \\ 3 & 2 & 1 & 0 & 0 & \cdots \\ 2 & 1 & 0 & 0 & 0 & \cdots \\ 1 & 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$

For $R = 0.9$ we then have $(\sigma_N)_{N=1}^4 \approx (67, 2, 0.4, 0.1)$ and $(\#(Z_R(u_N)))_{N=1}^4 = (0, 1, 2, 1)$, and for $R = 0.8$ we get $(\sigma_N)_{N=1}^4 \approx (101, 4, 0.8, 0.3)$ and $(\#(Z_R(u_N)))_{N=1}^4 = (0, 0, 0, 0)$.

We then show that we can remove the assumptions that $\Gamma(z^{n+1}) = 0$ and that $\sigma_N$ has multiplicity 1 in Theorem 1.9, at the cost of assuming a bit more about $\tilde X$. For simplicity we present a special case here; a more general result is given in Theorem 5.8.

Theorem 1.11. Let $R \ge 1$ and let $w = (w_j)$ be an increasing sequence of strictly positive weights. Let $\Gamma : H^2(R\mathbb{D}) \to D^2(w)$ be a Hankel operator and let $N$ be such that

$\sigma_{N-1} > \sigma_N = \ldots = \sigma_{N+\mu} > \sigma_{N+\mu+1} \ge \sigma_\infty.$


Then there exist mutually orthogonal $\sigma_N$-singular vectors $u_N, \ldots, u_{N+\mu}$ such that $N \le \#(Z_R(u_N)) \le N + \mu$ and $\|\Gamma|_{\mathcal M(Z_R(u_N))}\| = \sigma_N$.

Recall that Theorems 1.1 and 1.2 are equivalent for classical Hankel operators, by Beurling's and Nehari's theorems, and it is therefore natural to ask whether Theorem 1.11 is equivalent to a generalized version of Theorem 1.1. Unfortunately, despite extensive research in this direction (see e.g. [5], [11] and [17]), for most generalized Hankel operators there is no version of Nehari's theorem that accurately describes the operator norm as in (1.4). Thus it is not clear whether Theorem 1.1 can be generalized accordingly, but preliminary results here are negative. We also point out that all constructive proofs of the AAK-theorem rely on Nehari's theorem, and as this is not available in our setting, the proof presented here differs significantly from previous proofs.

2. A brief review of generalized Hankel operators. Recall the two definitions (1.1) and (1.3) of a Hankel operator. As mentioned before, most modern definitions of a generalized Hankel operator are extensions of (1.3). The most general one goes as follows: one exchanges the space $H^2$ and the operator $M_z$ on $H^2$ for an arbitrary Hilbert space $X_1$ and some operator $S_1 \in \mathcal L(X_1)$²; $H^2_- \subset L^2$ gets replaced by arbitrary Hilbert spaces $X_2^- \subset X_2$, and it is assumed that $X_2^-$ is complemented in $X_2$, i.e. there exists some $X_2^+ \subset X_2$ such that $X_2^+ \cap X_2^- = \{0\}$ and $X_2^+ + X_2^- = X_2$. Finally, one lets $S_2 \in \mathcal L(X_2)$ be such that $S_2(X_2^+) \subset X_2^+$, and lets $P_-$ be the projection onto $X_2^-$ parallel with $X_2^+$. Clearly, $X_2^+$ plays the role of $H^2$ in $L^2$, and $S_2$ that of $M_z$ on $L^2$. An operator $\Gamma : X_1 \to X_2^-$ is now called Hankel if it satisfies the relation

(2.1)   $\Gamma S_1 = P_- S_2 \Gamma.$

Some of these operators have Hankel matrices in certain bases of $X_1$ and $X_2^-$, but most of them do not. We refer to N. K. Nikolskii [16], sec. B:1.7, for more details.

Treil and Volberg additionally assume that $X_2^- \perp X_2^+$. Under this assumption they show the following Nehari type theorem.

Theorem 2.1. Assume that $S_1$ is expansive and that $S_2$ is a contraction and let $\Gamma : X_1 \to X_2^-$ be a Hankel operator. Then there exists an operator $T$ such that $\Gamma = P_- T$, $TS_1 = S_2 T$ and $\|\Gamma\| = \|T\|$.

To see why this is a generalization of Nehari's theorem, note that in the classical case (i.e. when $X_1 = H^2$, $X_2 = L^2$, $X_2^- = H^2_-$ and $S_1 = S_2 = M_z$), the commutation relation $TS_1 = S_2 T$ implies that $T = M_\phi$ for some $\phi \in L^\infty$, and clearly $\|M_\phi\| = \|\phi\|_{L^\infty}$. However, there is no way of calculating $T$ explicitly, and in most cases one does not have a good estimate for $\inf\{\|T\| : P_- T = \Gamma\}$ (like $\|\Gamma\| = \|\phi\|_{L^\infty/H^\infty}$ in Nehari's theorem), so this theorem is of limited use in proving Theorem 1.11. Despite this, the authors give a (non-constructive) proof of the following similar theorem.

Theorem 2.2. Assume that $S_1$ is expansive and that $S_2$ is a contraction and let $\Gamma : X_1 \to X_2^-$ be a Hankel operator. Let $\sigma_N > 0$ be a fixed non-zero singular value of $\Gamma$. Then there exists an $S_1$-invariant subspace $\mathcal M$ with $\operatorname{codim} \mathcal M = N$ such that $\|\Gamma|_{\mathcal M}\| = \sigma_N$.

Other popular (but more restrictive) generalizations consider Hankel operators as bilinear or sesquilinear forms of a certain kind. This viewpoint has been adopted by N. Arcozzi, R. Rochberg, E. Sawyer, and B. D. Wick ([5]) as well as M. Cotlar and C. Sadosky ([10]). ARSW define a Hankel form on the Dirichlet space $D^2$ as a bounded bilinear form $G : D^2 \times D^2 \to \mathbb{C}$ which can be evaluated (at least on polynomials) as

(2.2)   $G(f, g) = \langle fg, \phi \rangle_{D^2} = \sum_{n \ge 0} (n+1) \widehat{(fg)}_n \overline{\hat\phi_n}, \qquad f, g \in Pol_+,$

²$\mathcal L(X_1)$ denotes all bounded operators on $X_1$.


where $\phi \in D^2$ is a “symbol” for $G$. If $\Gamma : D^2 \to D^2$ denotes the Hankel operator (in the sense of Definition 1.3) represented by the matrix

$\begin{pmatrix} \phi_0 & \phi_1 & \phi_2 & \phi_3 & \cdots \\ \phi_1 & \phi_2 & \phi_3 & \phi_4 & \cdots \\ \phi_2 & \phi_3 & \phi_4 & \phi_5 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$

then it is easy to see that $G(f, g) = \langle \Gamma f, g \rangle_{D^2} = \sum_{n \ge 0} (n+1) \widehat{\Gamma f}_n \overline{\hat g_n}$. The main goal of ARSW is then to obtain a boundedness criterion for $G$ in terms of the symbol $\phi$, i.e. a generalization of Nehari's theorem. They prove that $G$ is bounded if and only if $|\phi'|^2 \, dA$ is a certain Carleson-type measure on $\mathbb{D}$. For the purposes of the present paper, this is again of limited use, because the Carleson norm of $|\phi'|^2 \, dA$ is only comparable with the operator norm of $G$, not equal to it.

Finally we wish to point out that M. Cotlar and C. Sadosky have written a number of papers about Hankel forms on Hardy spaces with two Helson-Szegő weights (they consider more general weights, but we restrict attention to these); see e.g. [10], [11], [12]. Given two such weights $w_1$ and $w_2$ they define a sesquilinear form $G : H^2(w_1) \times H^2(w_2) \to \mathbb{C}$ to be Hankel if it satisfies $G(zf, g) = G(f, zg)$. (They also consider more general versions where $z$ and $\bar z$ are replaced by isometric operators $S_1$ and $S_2$.) In our setting, these forms correspond to Hankel operators $\Gamma : H^2(w_1) \to L^2 \ominus H^2(w_2)$. They do provide exact versions of the Nehari theorem and AAK-results for these operators. Corresponding results for Hankel operators between $H^2(w_1)$ and $H^2(w_2)$ are not known to us. This paper also fails to provide such theorems, because the backward shift on $H^2(w_2)$ is not a contraction unless $w_2$ is constant.

3. Preliminaries. In Section 3.1 we provide a class of spaces that will serve as models for any space $X$ that satisfies (i)-(iii). In the following three sections we present lemmas that will be needed in the proofs of Theorems 1.6, 1.9 and 1.11.

3.1. Representation. Let $c_{00} \subset \times_{n=0}^\infty \mathbb{C}$ denote the subset of sequences with finitely many nonzero elements and let $S$ denote the shift “operator” on $c_{00}$. We let $M_\infty$ denote the set of infinite matrices. Given $W = (w_{m,n})_{m,n\in\mathbb{Z}_+} \in M_\infty$ we will let $W'$ denote the “adjoint” $W' = (\overline{w_{n,m}})$. $W$ will be called Hermitian symmetric if $W = W'$ and positive if

$a'Wa = \sum_{m,n} \overline{a_m}\, w_{m,n}\, a_n \ge 0$

for all $a \in c_{00}$ (where $a'$ denotes the transpose of the conjugate of $a$). In this case we will write $\|a\|_W^2$ for the above expression. We let $\{e_n\}_{n\in\mathbb{Z}_+}$ denote the usual basis for $c_{00}$, i.e. $e_0 = (1, 0, 0, \ldots)$ and $e_n = S^n e_0$. If $W$ is strictly positive (i.e. $a'Wa > 0$ whenever $a \ne 0$), then $l_W$ will denote the completion of $c_{00}$ with respect to $\|\cdot\|_W$. If $\{e_n\}_{n\in\mathbb{Z}_+}$ is minimal, then each element in $l_W$ can be represented as an infinite sequence in the obvious way.

Given a space $X$ that satisfies (i)-(iii), set $W = (w_{m,n}) = (\langle x_n, x_m \rangle_X)$. It is clear that the map $x_n \mapsto e_n$ is unitary and intertwines the operator $S$ on $X$ with the shift $S$ on $l_W$. Thus we will often work with the spaces $l_W$ instead of $X$. Note that $\{x_n\}$ is a minimal set, so the same will be true for $\{e_n\}$. We will write $X \cong l_W$ to denote that these spaces are unitarily equivalent in the above sense.

If $W \in M_\infty$ is Hermitian symmetric, strictly positive and such that $\{e_n\}_{n\in\mathbb{Z}_+}$ is minimal, then each operator on $l_W$ is represented by a matrix in $M_\infty$ with respect to the natural basis $\{e_n\}_{n\in\mathbb{Z}_+}$. We will not distinguish between an operator on $l_W$ and its matrix, so e.g. for the shift $S$ on $l_W$ we


have

$S = \begin{pmatrix} 0 & 0 & 0 & \cdots \\ 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$

Note that whenever $A, B \in \mathcal L(l_W)$ are such that the sums involved in the matrix computation of $AB$ contain finitely many non-zero numbers, then the matrix $AB$ coincides with the operator $AB$. Given two strictly positive Hermitian symmetric matrices $W$, $\tilde W$ and an operator $A \in \mathcal L(l_W, l_{\tilde W})$, note that the adjoint operator $A^*$ is usually not given by the matrix $A'$. In fact, under suitable conditions on $A$, $W$, $\tilde W$, we have

$\langle a, A^*b \rangle_W = \langle Aa, b \rangle_{\tilde W} = (Aa)'\tilde W b = a'A'\tilde W b = a'W(W^{-1}A'\tilde W b),$

so

(3.1)   $A^* = W^{-1}A'\tilde W.$
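Formula (3.1) can be confirmed numerically. The following illustrative Python sketch (real randomly generated matrices, so all conjugations are trivial; not from the paper) checks that $W^{-1}A'\tilde W$ acts as the adjoint for the weighted inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd(n):
    """Random symmetric strictly positive matrix, a stand-in for W."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

W, W_tilde = random_spd(n), random_spd(n)
A = rng.standard_normal((n, n))
A_star = np.linalg.inv(W) @ A.T @ W_tilde           # formula (3.1), real case

a, b = rng.standard_normal(n), rng.standard_normal(n)
inner = lambda x, y, V: x @ V @ y                   # <x, y>_V in the real case

# <Aa, b>_{W~} equals <a, A* b>_W:
assert np.isclose(inner(A @ a, b, W_tilde), inner(a, A_star @ b, W))
```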

3.2. Some lemmas on Hankel matrices. This section contains a number of results which are generalizations of lemmas used by D. Clark [9] and P. Hartman [15]. Let $M_n$ denote the set of $(n+1) \times (n+1)$ matrices and let $\operatorname{hank}(n) \subset M_n$ denote the set of Hankel matrices of the form

(3.2)   $h((\gamma_j)_{j=0}^n) = \begin{pmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_n \\ \gamma_1 & \gamma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_n & 0 & \cdots & 0 \end{pmatrix}, \qquad \gamma_j \in \mathbb{C}.$

Similarly, let $\operatorname{Hank}(n) \subset M_\infty$ be the set of Hankel matrices $\Gamma$ such that $\Gamma(e_{n+1}) = 0$. Let $F_n \in \operatorname{hank}(n)$ denote the Hankel matrix which is 1 on its anti-diagonal and zero elsewhere, i.e.

$F_n = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}.$

Note that

(3.3)   $F_n^{-1} = F_n' = F_n.$

Lemma 3.1. Let $\Gamma = h((\gamma_j)_{j=0}^n) \in \operatorname{hank}(n)$ be such that $\gamma_n \ne 0$. Then $\Gamma$ is invertible and $\Gamma^{-1} = F_n \Pi F_n$, where $\Pi \in \operatorname{hank}(n)$.

Proof. Let $\pi_0, \ldots, \pi_n$ solve the equation system

$\Gamma \begin{pmatrix} \pi_0 \\ \pi_1 \\ \vdots \\ \pi_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.$

It is then easy to see that $\Gamma^{-1}$ is given by the Hankel matrix

$\Gamma^{-1} = \begin{pmatrix} 0 & \cdots & \pi_0 \\ \vdots & & \vdots \\ \pi_0 & \cdots & \pi_n \end{pmatrix}.$
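Lemma 3.1 can be observed numerically: the inverse of such a “triangular” Hankel matrix is again Hankel, supported on and below the main anti-diagonal. An illustrative Python check with arbitrary coefficients:

```python
import numpy as np

gamma = np.array([1.0, -2.0, 0.5, 3.0])       # arbitrary, with gamma_n = 3 nonzero
n = len(gamma) - 1
# hank(n): entries gamma_{i+j} for i + j <= n, zero below the anti-diagonal
G = np.array([[gamma[i + j] if i + j <= n else 0.0
               for j in range(n + 1)] for i in range(n + 1)])

Ginv = np.linalg.inv(G)

# Gamma^{-1} is again Hankel, i.e. constant along anti-diagonals ...
for i in range(1, n + 1):
    for j in range(n):
        assert np.isclose(Ginv[i, j], Ginv[i - 1, j + 1])
# ... and it vanishes strictly above the main anti-diagonal:
for i in range(n + 1):
    for j in range(n - i):
        assert abs(Ginv[i, j]) < 1e-12
```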


Let $W$ and $\tilde W$ in $M_n$ be any invertible Hermitian symmetric matrices. Let $D(W, \tilde W) \subset \operatorname{hank}(n)$ be the set of matrices $\Gamma$ such that all roots of the polynomial

$p_\Gamma(\sigma) = \det(\sigma I - W\Gamma\tilde W\Gamma)$

are distinct. Note that by (3.1), the roots of $p_\Gamma$ are the singular values of $\Gamma$ as a Hankel operator on certain finite dimensional Hilbert spaces. Also recall that all normed topologies on $\operatorname{hank}(n)$ are equivalent, as $\operatorname{hank}(n)$ is finite dimensional.

Lemma 3.2. $D(W, \tilde W)$ is dense in $\operatorname{hank}(n)$ for all invertible matrices $W$ and $\tilde W$.

Proof. Let $R(p_\Gamma, p_\Gamma')$ denote the resultant of $p_\Gamma$ and $p_\Gamma'$. It is a standard fact (see e.g. [13]) that $p_\Gamma$ has a root of multiplicity higher than one if and only if $R(p_\Gamma, p_\Gamma') = 0$. Let $\Gamma = h((\alpha_j + i\beta_j)_{j=0}^n)$ be arbitrary with $\alpha_0, \ldots, \alpha_n \in \mathbb{R}$ and $\beta_0, \ldots, \beta_n \in \mathbb{R}$. Clearly $R(p_\Gamma, p_\Gamma')$ can be considered as a real polynomial in the variables $\alpha_0, \ldots, \alpha_n$ and $\beta_0, \ldots, \beta_n$, and it is a standard fact that the lemma holds if we show that this polynomial is not identically zero. This in turn follows if we show that there exists one $\Gamma$ such that $p_\Gamma$ has distinct and non-zero roots. We prove this by induction. The existence of such a $\Gamma$ is trivial for $n = 0$. Assume that $p_\Gamma$ for $\Gamma = h((\gamma_j)_{j=0}^{n-1}) \in \operatorname{hank}(n-1)$ has distinct and non-zero roots, and put $\Gamma(\gamma_n) = h((\gamma_j)_{j=0}^n) \in \operatorname{hank}(n)$, where $\gamma_n$ is a free parameter. By the inductive hypothesis, the roots of $p_{\Gamma(0)}$ are distinct. Moreover, the roots are all non-zero whenever $\gamma_n \ne 0$, because then $\Gamma(\gamma_n)$ is invertible by Lemma 3.1, and hence so is $W\Gamma(\gamma_n)\tilde W\Gamma(\gamma_n)$. Finally, a standard argument shows that the roots of $p_{\Gamma(\gamma_n)}$ depend continuously on $\gamma_n$, and thus $\Gamma(\gamma_n)$ has distinct non-zero roots for small non-zero values of $\gamma_n$.

Define $I_r \in M_\infty$ via

(3.4)   $I_r = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & r & 0 & \cdots \\ 0 & 0 & r^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$

Lemma 3.3. Let $W$ and $\tilde W$ be Hermitian symmetric strictly positive infinite matrices and assume that each $I_r$, $0 < r < 1$, is bounded on $l_W$ and on $l_{\tilde W}$ and converges SOT to $I$ as $r \to 1^-$. Let $\Gamma : l_W \to l_{\tilde W}$ be a Hankel operator. Then $I_r\Gamma I_r$ is a Hankel operator for each $r < 1$ and $I_r\Gamma I_r$ converges SOT to $\Gamma$ as $r \to 1^-$. If in addition the $I_r$'s are contractions on $l_W$, then we also have

$\lim_{r \to 1^-} \sigma_n(I_r\Gamma I_r) = \sigma_n(\Gamma).$

Proof. That $I_r\Gamma I_r$ is Hankel is immediate by writing out its matrix. Given $u \in l_W$ we have that

$\|\Gamma u - I_r\Gamma I_r u\| \le \|(I - I_r)\Gamma u\| + \|I_r\Gamma(I - I_r)u\| \le \|(I - I_r)\Gamma u\| + \|I_r\Gamma\|\,\|(I - I_r)u\| \to 0$

as $r \to 1^-$, because $\|I_r\Gamma\|$ is uniformly bounded by the uniform boundedness principle. The final statement is proved exactly as Lemma 4.2 in [9].

Given any Hermitian symmetric strictly positive infinite matrix $W$, let $P_n$ denote the orthogonal projection onto $\operatorname{cl}(\operatorname{Span}\{e_j\}_{j=n}^\infty)$ in $l_W$. We will write $\tilde P_n$ for the corresponding operator in $l_{\tilde W}$. The following lemma generalizes Lemma 1 in [15] of P. Hartman.

Lemma 3.4. Let $W \in M_\infty$ be a diagonal strictly positive matrix and let $\tilde W$ be a Hermitian symmetric strictly positive infinite matrix. Let $\Gamma : l_W \to l_{\tilde W}$ be a Hankel operator, let $r < 1$ be fixed and assume that $\|\tilde P_n I_r\|_{\mathcal L(l_{\tilde W})} \to 0$ as $n \to \infty$. Then there exists a sequence of Hankel operators $\Gamma_n \in \operatorname{Hank}(n)$, $n \in \mathbb{N}$, with distinct non-zero singular values such that $\Gamma_n \to I_r\Gamma I_r$


as $n \to \infty$.

Proof. Let $\Gamma = (\gamma_{i+j})$ and let $\Gamma_n \in \operatorname{Hank}(n)$ be defined by the sequence $(r^j\gamma_j)_{j=0}^n$. Set $g_k^n = \sum_{j \ge n} r^j \gamma_{j+k} \tilde e_j$, so that $g_k^n = \tilde P_n I_r \Gamma e_k$ and

(3.5)   $\|g_k^n\|_{l_{\tilde W}} \le \|\tilde P_n I_r\|_{\mathcal L(l_{\tilde W})} \|\Gamma\|_{\mathcal L(l_W, l_{\tilde W})} \|e_k\|_{l_W}.$

Given $f \in l_W$ and $g \in l_{\tilde W}$, let $g \otimes f : l_W \to l_{\tilde W}$ denote the operator $g \otimes f(h) = \langle h, f \rangle g$. As $W$ is diagonal we have $\|I_r P_n\| = r^n$, so $I_r\Gamma I_r(I - P_n) \to I_r\Gamma I_r$ as $n \to \infty$. Moreover

$\Gamma_n = I_r\Gamma I_r(I - P_n) - \sum_{j=0}^n r^j g_j^{n+1-j} \otimes (e_j/\|e_j\|^2),$

so $\Gamma_n \to I_r\Gamma I_r$ holds if we show that $\lim_n \|\sum_{j=0}^n r^j g_j^{n+1-j} \otimes (e_j/\|e_j\|^2)\| = 0$. Note that

$\|g_j^{n+1-j} \otimes (e_j/\|e_j\|^2)\| \le \|\tilde P_{n+1-j} I_r\|\,\|\Gamma\|$

by (3.5). Let $C$ be an upper bound for the right hand side, let $\epsilon > 0$ be arbitrary and pick $N$ such that $r^N C/(1-r) < \epsilon/2$. Then

$\Big\|\sum_{j=0}^n r^j g_j^{n+1-j} \otimes (e_j/\|e_j\|^2)\Big\| \le \sum_{j=0}^N \|\tilde P_{n+1-j} I_r\|\,\|\Gamma\| + \frac{\epsilon}{2},$

so by choosing $n$ large enough it is clear that this will be less than $\epsilon$. It follows that $\Gamma_n \to I_r\Gamma I_r$ as $n \to \infty$. It remains to show that $\Gamma_n$ has distinct non-zero singular values. This might not be true, but a short argument (which we omit) shows that Lemma 3.2 can be applied to find an arbitrarily small perturbation of $\Gamma_n$ within $\operatorname{Hank}(n)$ such that the perturbation has distinct non-zero singular values, and hence the desired conclusion follows.

3.3. On singular vectors for SOT-convergent sequences of compact operators. A statement similar to the proposition below appears in Clark [9] and is claimed to be well known. As we have not found a proof of either our or Clark's result, we include a proof.

Proposition 3.5. Let $X$ be a Hilbert space and let $(T_k)_{k=1}^\infty$ be a sequence in $\mathcal L(X)$ of compact positive operators that converge strongly to $T$. Let $E_T$ respectively $E_{T_k}$ denote the associated spectral projection measures. Assume that

$\lim_{k\to\infty} \sigma_n(T_k) = \sigma_n(T)$

for all $n \in \mathbb{Z}_+$ and let $\Sigma \subset (\sigma_\infty(T), \infty)$ be an open interval such that

$\operatorname{cl}(\Sigma) \cap \{\sigma_n(T) : n \in \mathbb{Z}_+\} = \Sigma \cap \{\sigma_n(T) : n \in \mathbb{Z}_+\} = \{\sigma_N(T)\}$

for some $N \in \mathbb{Z}_+$. Then $E_{T_k}(\Sigma)$ converges strongly to $E_T(\Sigma)$.

Proof. As $T$ is positive, it follows by the spectral theorem that each $\sigma_n(T) > \sigma_\infty(T)$ is an eigenvalue of $T$. Set $M = \#\{n : \sigma_n(T) > \sigma_\infty(T)\}$. Then there exists an orthonormal set $\{f_n\}_{n=0}^M$ such that $\sigma_n(T)f_n = Tf_n$. Similarly let $\{f_n^k\}_{n=1}^\infty$ satisfy $\sigma_n(T_k)f_n^k = T_k f_n^k$. It is no restriction to assume that $N$ is such that

$\sigma_{N-1}(T) > \sigma_N(T) = \ldots = \sigma_{N+\mu}(T) > \sigma_{N+\mu+1}(T)$

for some $\mu \in \mathbb{Z}_+$. Let $\alpha_n^k \in \mathbb{C}$ be coefficients such that $\sum_{n=0}^\infty \alpha_n^k f_n^k = f_N$, and note that we are done if we show that

$\lim_k \sum_{n=N}^{N+\mu} \alpha_n^k f_n^k = f_N.$

We have

$(T - T_k)f_N = \sigma_N(T)f_N - \sum_n \sigma_n(T_k)\alpha_n^k f_n^k = \sum_n (\sigma_N(T) - \sigma_n(T_k))\alpha_n^k f_n^k.$

Let $\epsilon > 0$ be such that $\sigma_{N-1}(T) > \sigma_N(T) + \epsilon$ and $\sigma_N(T) - \epsilon > \sigma_{N+\mu+1}(T)$. For sufficiently large $k$ we then have

$\|(T - T_k)f_N\|^2 = \sum_n (\sigma_N(T) - \sigma_n(T_k))^2 |\alpha_n^k|^2 \ge \epsilon^2 \sum_{n \notin \{N, \ldots, N+\mu\}} |\alpha_n^k|^2.$

Since $T_k \to T$ strongly, the left hand side tends to 0, so that $\lim_k \sum_{n \notin \{N, \ldots, N+\mu\}} |\alpha_n^k|^2 = 0$, as desired.
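A finite dimensional toy illustration of the proposition just proved (here $T_k \to T$ even in norm, which implies the hypotheses; all numbers below are arbitrary): perturbing a positive matrix with an isolated simple eigenvalue moves the corresponding spectral projection only slightly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = Q @ np.diag([5.0, 3.0, 1.0, 0.5, 0.25, 0.1]) @ Q.T   # positive, eigenvalue 3 isolated
E = rng.standard_normal((n, n)); E = E + E.T              # symmetric perturbation

def proj(A, interval):
    """Spectral projection E_A(interval) of a symmetric matrix A."""
    vals, vecs = np.linalg.eigh(A)
    cols = vecs[:, (vals > interval[0]) & (vals < interval[1])]
    return cols @ cols.T

P = proj(T, (2.0, 4.0))                  # rank-one projection for the eigenvalue 3
Pk = proj(T + E / 1000.0, (2.0, 4.0))    # T_k = T + E/k converges to T
assert np.isclose(np.trace(P), 1.0)
assert np.linalg.norm(Pk - P) < 0.05     # E_{T_k}(Sigma) is close to E_T(Sigma)
```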

3.4. Some results on Cauchy duals. Let $W \in M_\infty$ be Hermitian symmetric and strictly positive. By the Cauchy dual of $l_W$ we shall simply mean the dual with respect to the standard $l^2$-pairing

$\langle x, y \rangle = \sum x_k \overline{y_k},$

which is well defined at least for $y \in c_{00}$. We denote this dual by $l_W^{C*}$. Without going into details, note that in the case when $W^{-1}$ “exists”, the Cauchy dual is simply given by

(3.6)   $l_W^{C*} = l_{W^{-1}}.$

This follows by the calculation

$\|y\|_{l_W^{C*}} = \sup_x \frac{|\langle x, y \rangle|}{\|x\|_{l_W}} = \sup_z \frac{|\langle W^{-1/2}z, y \rangle|}{\|z\|_{l^2}} = \sup_z \frac{|\langle z, W^{-1/2}y \rangle|}{\|z\|_{l^2}} = \|W^{-1/2}y\|_{l^2} = \|y\|_{l_{W^{-1}}},$

where $y \in c_{00}$.

Example 3.6. The Cauchy dual of the Dirichlet space is the Bergman space. More generally, if $w = (w_n)$ is a sequence of weights and $w^{-1}$ denotes the sequence $(w_n^{-1})$, then $(D^2(w))^{C*} = D^2(w^{-1})$. In these examples, $S$ is expansive in $l_W$ if and only if it is a contraction in $l_{W^{-1}}$. This is false in general, which follows by considering

$W = \begin{pmatrix} 1 & 1 & 0 & 0 & \cdots \\ 1 & 2 & 0 & 0 & \cdots \\ 0 & 0 & 4 & 0 & \cdots \\ 0 & 0 & 0 & 4 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad W^{-1} = \begin{pmatrix} 2 & -1 & 0 & 0 & \cdots \\ -1 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 1/4 & 0 & \cdots \\ 0 & 0 & 0 & 1/4 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$

Curiously, this cannot happen if the off-diagonal entries in the first row and column are zero.

Lemma 3.7. Assume that $W = (w_{i,j})_{i,j \ge 0}$ is such that $w_{0,i} = w_{i,0} = 0$ for all $i > 0$. Then $S$ is a contraction on $l_W^{C*}$ if and only if $S$ is expansive on $l_W$.
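The counterexample above can be verified on a finite section: for vectors supported away from the truncation edge, $S$ is expansive on $l_W$, while the vector $(1, 1, 0, \ldots)$ already violates contractivity on $l_{W^{-1}}$. An illustrative Python sketch:

```python
import numpy as np

n = 6
W = np.diag(4.0 * np.ones(n))
W[0, 0], W[1, 1] = 1.0, 2.0
W[0, 1] = W[1, 0] = 1.0
W_inv = np.linalg.inv(W)

S = np.diag(np.ones(n - 1), -1)                 # shift S e_k = e_{k+1}
q = lambda a, V: a @ V @ a                      # ||a||_V^2, real case

rng = np.random.default_rng(1)
for _ in range(100):
    a = np.zeros(n)
    a[:n - 1] = rng.standard_normal(n - 1)      # support away from the edge
    assert q(S @ a, W) >= q(a, W) - 1e-9        # S is expansive on l_W

a = np.zeros(n); a[0] = a[1] = 1.0
assert q(S @ a, W_inv) > q(a, W_inv)            # not a contraction on l_{W^{-1}}
```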


Proof. For simplicity we assume that W is bounded above and below as an operator on l2 . Let d∞ (W ) = S ′ W S = (wi,j )i,j≥1 . S is expansive (resp. contractive) if and only if d∞ (W ) ≥ W (resp. d∞ (W ) ≤ W ). Due to the special form of the matrix W , it is easy to see that d∞ (W −1 ) = (d∞ (W ))−1 , and hence d∞ (W ) ≥ W ⇐⇒ (d∞ (W ))−1 ≤ W −1 ⇐⇒ (d∞ (W −1 )) ≤ W −1 , from which the lemma follows. The first equivalence above is a general fact about invertible positive self-adjoint operators, which can be seen as follows: Let W1 and W2 be such operators. Then W1 ≥ W2 if and only if W1 = CW2 for some expansive invertible operator C. But C is expansive if and only if C −1 is a contraction and W1−1 = W2−1 C −1 = (W2−1 C −1 )∗ = (C −1 )∗ W2−1 . The main result of this section is the following. C∗ if and only if B is a contraction on lW . Lemma 3.8. S is a contraction on lW C∗ Proof. S on lW is the adjoint of B on lW . Example 3.9. Let w be any non-constant Helson-Szeg¨ o weight w. With the obvious extension of the concept of Cauchy duality we have (L2 (w))C∗ = L2 (w−1 ), but the Cauchy dual of H 2 (w) is not equal to H 2 (v) for any v ∈ L1 . This follows by the above lemma because S is always isometric on H 2 (w) whereas B is a contraction on H 2 (v) if and only if v is constant. 4. A matrix positivity lemma and proof of Theorem 1.6. Recall that c00 ⊂ ×∞ n=0 C denotes the subset of sequences with finitely many nonzero elements and that S denotes the shift “operator” on c00 . For two Hermitian symmetric matrices W1 , W2 we write W1 ≤ W2 if W2 − W1 is positive. Let X be a fixed Hilbert space and let S ∈ L(X). Given any u ∈ X, we associate with it the Toeplitz-matrix   0u, u1 0Su, u1 0S 2 u, u1 · · ·  0u, Su1 0u, u1 0Su, u1 · · ·    ..  (4.1) Tu,S =   0u, S 2 u1 0u, Su1  . 0u, u1   .. .. .. .. . . . . as well as the following matrix 

(4.2) $$N_{u,S} = \begin{pmatrix} \langle u,u\rangle & \langle Su,u\rangle & \langle S^2u,u\rangle & \cdots \\ \langle u,Su\rangle & \langle Su,Su\rangle & \langle S^2u,Su\rangle & \cdots \\ \langle u,S^2u\rangle & \langle Su,S^2u\rangle & \langle S^2u,S^2u\rangle & \ddots \\ \vdots & \vdots & \ddots & \ddots \end{pmatrix}.$$

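The two matrices just defined are, respectively, the Toeplitz matrix of the sequence $\langle S^n u, u\rangle$ and the Gram matrix of the orbit $u, Su, S^2u, \ldots$. A quick numerical sketch (not from the paper; the dimension, the random contraction `S`, and the tolerance are ad hoc choices, and everything is real so inner products are plain dot products) checks the domination $a'(T_{u,S} - N_{u,S})a \geq 0$ for a contractive $S$, which is the content of Proposition 4.1 below.

```python
# Build finite sections of T_{u,S} (Toeplitz) and N_{u,S} (Gram) for a
# contraction S on R^d, and test positivity of the quadratic form of T - N.
import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
d, k = 4, 6
# A strict contraction on R^d: entries small enough that the norm is < 1/2.
S = [[random.uniform(-1, 1) / (2 * d) for _ in range(d)] for _ in range(d)]
u = [random.uniform(-1, 1) for _ in range(d)]

# The orbit u, Su, S^2 u, ...
orbit = [u]
for _ in range(k - 1):
    orbit.append(mat_vec(S, orbit[-1]))

# N_{u,S}[m][n] = <S^n u, S^m u>;  T_{u,S}[m][n] = <S^{|n-m|} u, u> (real case)
N = [[dot(orbit[n], orbit[m]) for n in range(k)] for m in range(k)]
T = [[dot(orbit[abs(n - m)], u) for n in range(k)] for m in range(k)]

# For a contraction S the form a'(T - N)a is nonnegative for every a.
for _ in range(200):
    a = [random.uniform(-1, 1) for _ in range(k)]
    q = sum(a[m] * (T[m][n] - N[m][n]) * a[n] for m in range(k) for n in range(k))
    assert q >= -1e-12
```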
Proposition 4.1. If $S$ is a contraction, then $N_{u,S} \leq T_{u,S}$. If $S$ is expansive, then $T_{u,S} \leq N_{u,S}$.

To simplify notation we shall write $\|a\|_{N_{u,S}}$ for $\|a\|_{l_{N_{u,S}}}$. Recall that $S$ denotes the shift operator on $l_{N_{u,S}}$. Given $a \in c_{00}$, it is easy to see that $\|a\|_{N_{u,S}} = \|\sum a_n S^n u\|_X$ and hence $\|Sa\|_{N_{u,S}} = \|S \sum a_n S^n u\|_X$. By these identities, the proposition is easily seen to be a consequence of the following result about matrices.

Proposition 4.2. Let $W = (w_{m,n})$ be Hermitian symmetric, set $t_m = w_{0,m}$ and $t_{-m} = w_{m,0}$ for all $m \in \mathbb{Z}_+$, and let $T = (t_{n-m})$ be the corresponding Toeplitz matrix. If the shift $S$ on $c_{00}$ satisfies

(4.3) $$(Sa)' W (Sa) \leq a' W a, \qquad \forall a \in c_{00},$$

then $W \leq T$. If

(4.4) $$(Sa)' W (Sa) \geq a' W a, \qquad \forall a \in c_{00},$$

then $W \geq T$.

Proof. First assume that (4.3) holds and, in addition, that $W$ is a strictly positive matrix. Then $S$ is a contraction on the space $l_W$. Thus $S$ has a unitary dilation, i.e. there exist a Hilbert space $Z$ containing $l_W$ as a subspace and a unitary operator $U \in \mathcal{L}(Z)$ such that $S^n x = P_{l_W} U^n x$ for all $x \in l_W$ and $n \in \mathbb{Z}_+$, where $P_{l_W}$ denotes the orthogonal projection onto $l_W$. (See [14] for more details.) Let $a \in l_W \cap c_{00}$ be arbitrary and recall that $e_0 = (1,0,0,\ldots) \in l_W$. Clearly $W = N_{e_0,S}$, $T = T_{e_0,S}$ and
$$\|a\|_W^2 = \|\sum a_n S^n e_0\|_{l_W}^2 = \|P_{l_W} \sum a_n U^n e_0\|_Z^2.$$
Moreover, it is easy to see that $T = T_{e_0,S} = T_{e_0,U} = N_{e_0,U}$, so
$$a' T_{e_0,S}\, a = a' N_{e_0,U}\, a = \|\sum a_n U^n e_0\|_Z^2.$$
Thus

(4.5) $$a'(T - W)a = \|\sum a_n U^n e_0\|_Z^2 - \|P_{l_W} \sum a_n U^n e_0\|_Z^2 \geq 0,$$

as desired.

We turn to the general case. Note that $(Sa)'T(Sa) = a'Ta$ for any Toeplitz matrix $T$, and hence (4.3) is equivalent to $(Sa)'(T-W)(Sa) \geq a'(T-W)a$, and (4.4) is equivalent to $(Sa)'(W-T)(Sa) \geq a'(W-T)a$. To conclude the proof, it is therefore enough to show that if $\tilde W$ is Hermitian symmetric with

(4.6) $$\tilde w_{0,m} = \tilde w_{m,0} = 0$$

and

(4.7) $$(Sa)' \tilde W (Sa) \geq a' \tilde W a,$$

then

(4.8) $$\tilde W \geq 0.$$

Henceforth, we assume that $W$ is Hermitian symmetric and satisfies (4.6) and (4.7). For each $k \in \mathbb{N}$ we define a new matrix $W_k = (w^k_{m,n})$ given by
$$w^k_{m,n} = \begin{cases} w_{m,n}, & m, n \leq k, \\ w_{m,k}, & (m,n) = (m+l, k+l),\ m \leq k,\ l > 0, \\ w_{k,m}, & (m,n) = (k+l, m+l),\ m \leq k,\ l > 0, \\ 0 & \text{elsewhere.} \end{cases}$$


This cumbersome definition is easily visualized; here is $W_3$:
$$W_3 = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & w_{1,1} & w_{1,2} & w_{1,3} & 0 & 0 & \cdots \\
0 & w_{2,1} & w_{2,2} & w_{2,3} & w_{1,3} & 0 & \cdots \\
0 & w_{3,1} & w_{3,2} & w_{3,3} & w_{2,3} & w_{1,3} & \cdots \\
0 & 0 & w_{3,1} & w_{3,2} & w_{3,3} & w_{2,3} & \ddots \\
0 & 0 & 0 & w_{3,1} & w_{3,2} & w_{3,3} & \ddots \\
\vdots & \vdots & & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}.$$

For each fixed $a \in c_{00}$ we clearly have

(4.9) $$\lim_{k\to\infty} a' W_k\, a = a' W a,$$

and hence it is enough to show that $W_k \geq 0$ for all $k \in \mathbb{Z}_+$. Decompose an arbitrary $a$ as $a = a_b + a_e$, where $a_b = (a_0, a_1, \ldots, a_{k-1}, 0, 0, \ldots)$. Note that
$$(Sa_b)' W_k (Sa_e) = a_b' W_k\, a_e, \qquad (Sa_e)' W_k (Sa_e) = a_e' W_k\, a_e,$$
$$a_b' W_k\, a_b = a_b' W a_b \leq (Sa_b)' W (Sa_b) = (Sa_b)' W_k (Sa_b),$$
from which it follows that

(4.10) $$a' W_k\, a \leq (Sa)' W_k (Sa).$$

Let $I$ denote the identity matrix, set $c_k = \max\{|w_{m,n}| : 0 \leq m,n \leq k\}$ and note that
$$|a' W_k\, a| < 2k c_k \|a\|_I^2,$$
which follows from the calculation
$$|a' W_k\, a| = \Big|\sum_n \sum_{|m-n|\leq k-1} w^k_{m,n} a_m \overline{a_n}\Big| \leq \sum_n \sum_{|m-n|\leq k-1} c_k\, \frac{|a_m|^2 + |a_n|^2}{2} \leq \frac{4k-2}{2}\, c_k \|a\|_I^2.$$
Putting $V_k = I - (2k c_k)^{-1} W_k$, we thus have $V_k > 0$, and (4.10) implies that $(Sa)' V_k (Sa) \leq a' V_k\, a$ for all $a \in c_{00}$. But then the first part of the proof applies, so (4.5) implies that $0 \leq I - V_k = (2k c_k)^{-1} W_k$, and $W \geq 0$ follows by (4.9).

Given an operator $\Gamma \in \mathcal{L}(X, \tilde X)$ and $x \in X$, recall that $\|\Gamma|_{[x]_S}\|$ denotes the operator norm of $\Gamma$ restricted to the invariant subspace $[x]_S$ generated by $x$. We are now in a position to prove Theorem 1.6.

Theorem 4.3. Let $X$, $\tilde X$ be such that $S$ is expansive and $\tilde B$ is a contraction. Let $\Gamma : X \to \tilde X$ be a Hankel operator with singular values $\sigma_0 \geq \sigma_1 \geq \ldots$. Let $u \in X$ be a singular vector with singular value $\sigma_N$. Then $\|\Gamma|_{[u]_S}\| = \sigma_N$.


Proof. Put $v = \Gamma u$. By the polar decomposition of $\Gamma$ it follows that $\|v\| = \sigma_N$. Obviously then $\|\Gamma|_{[u]_S}\| \geq \|\Gamma u\| = \|v\| = \sigma_N$, so we focus on proving the reverse inequality. It suffices to show that
$$\|\Gamma(\sum a_n S^n u)\| \leq \sigma_N \|\sum a_n S^n u\|$$
for all $a \in c_{00}$. Note that $\|\sum a_n S^n u\|^2 = a' N_{u,S}\, a$ and similarly
$$\|\Gamma(\sum a_n S^n u)\|^2 = \|\sum a_n \tilde B^n \Gamma u\|^2 = a' N_{v,\tilde B}\, a.$$
Also note that by (1.9) we have
$$\langle \tilde B^n v, v\rangle = \langle \tilde B^n \Gamma u, \Gamma u\rangle = \langle \Gamma S^n u, \Gamma u\rangle = \langle S^n u, \Gamma^* \Gamma u\rangle = \sigma_N^2 \langle S^n u, u\rangle,$$
which implies that $T_{v,\tilde B} = \sigma_N^2\, T_{u,S}$. The desired inequality follows via Proposition 4.1 and the calculation
$$\|\Gamma(\sum a_n S^n u)\|^2 = a' N_{v,\tilde B}\, a \leq a' T_{v,\tilde B}\, a = \sigma_N^2 (a' T_{u,S}\, a) \leq \sigma_N^2 (a' N_{u,S}\, a) = \sigma_N^2 \|\sum a_n S^n u\|^2.$$
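The pivotal identity in the proof above, $\langle \tilde B^n v, v\rangle = \sigma_N^2 \langle S^n u, u\rangle$ with $v = \Gamma u$, can be observed on finite matrices. The sketch below is an illustration only, not the paper's setting: it uses a "triangular" finite Hankel matrix ($\gamma_k = 0$ for $k \geq m$), for which the commutation $\Gamma S = B\Gamma$ holds exactly in finite dimensions, and power iteration to approximate the top singular vector; the sequence `gamma` is arbitrary.

```python
# Check <B^n v, v> = sigma^2 <S^n u, u> for a finite "triangular" Hankel matrix.

def mat_vec(M, x):
    return [sum(r[j] * x[j] for j in range(len(x))) for r in M]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

m = 4
gamma = [1.0, 0.5, 0.25, 0.125] + [0.0] * (m - 1)   # gamma_k = 0 for k >= m
G = [[gamma[i + j] for j in range(m)] for i in range(m)]  # Hankel matrix

# Power iteration on G'G = G^2 (G is real symmetric) for the top singular vector.
u = [1.0] * m
for _ in range(2000):
    w = mat_vec(G, mat_vec(G, u))
    nrm = dot(w, w) ** 0.5
    u = [x / nrm for x in w]
sigma2 = dot(u, mat_vec(G, mat_vec(G, u)))  # sigma_0^2 as a Rayleigh quotient

v = mat_vec(G, u)
shift = lambda x: [0.0] + x[:-1]    # forward shift S
bshift = lambda x: x[1:] + [0.0]    # backward shift B = S'

Su, Bv = u[:], v[:]
for n in range(m):
    # <B^n v, v> = sigma^2 <S^n u, u>
    assert abs(dot(Bv, v) - sigma2 * dot(Su, u)) < 1e-8
    Su, Bv = shift(Su), bshift(Bv)
```

The zero tail of `gamma` is what makes the commutation relation exact here; a generic truncated Hankel matrix only satisfies it up to boundary terms.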

Corollary 4.4. Let $X$, $\tilde X$ and $\Gamma$ be as in Theorem 4.3. Let $u_N \in X$ be a singular vector with singular value $\sigma_N$ and assume that $\sigma_{N-1} > \sigma_N$. Then $\operatorname{codim} [u_N]_S \geq N$.

Proof. This follows immediately from the definition of singular numbers:
$$\inf\{\|\Gamma|_M\| : M \leq X \text{ and } \operatorname{codim} M = N-1\} = \sigma_{N-1} > \sigma_N = \|\Gamma|_{[u_N]_S}\|.$$

5. Proof of Theorems 1.9 and 1.11. We begin with some definitions and lemmas. We let $\{e_m\}_{m=0}^n$ denote the usual basis in $\mathbb{C}^{n+1}$, where the context will determine the value of $n$, and for $0 \leq i,j \leq n$ we let $e_i \otimes e_j$ denote the matrix with 1 in the $(i,j)$'th position and zeroes elsewhere. We will sometimes treat $\mathbb{C}^n$ as a subset of $c_{00}$ in the obvious way. Let $S_n : \mathbb{C}^{n+1} \to \mathbb{C}^{n+2}$ denote the restriction of the shift $S$ on $c_{00}$ and let $B_n : \mathbb{C}^{n+1} \to \mathbb{C}^n$ denote the restriction of the backward shift $B$ on $c_{00}$. Given $W \in M_\infty$ (or $W \in \cup_{m>n} M_m$), let $R_n(W) \in M_n$ denote the "upper left corner of $W$" and note that

(5.1) $$R_n(S'WS) = S_n' R_{n+1}(W) S_n = \begin{pmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,n+1} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,n+1} \\ \vdots & \vdots & \ddots & \vdots \\ w_{n+1,1} & w_{n+1,2} & \cdots & w_{n+1,n+1} \end{pmatrix}.$$

Similarly,

(5.2) $$R_n(B'WB) = B_n' R_{n-1}(W) B_n = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & w_{0,0} & \cdots & w_{0,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & w_{n-1,0} & \cdots & w_{n-1,n-1} \end{pmatrix}.$$

It is easy to see that

(5.3) $$S \text{ is expansive on } l_W \iff R_n(S'WS) \geq R_n(W), \quad \forall n \in \mathbb{N},$$

(5.4) $$S \text{ is contractive on } l_W \iff R_n(S'WS) \leq R_n(W), \quad \forall n \in \mathbb{N},$$

and

(5.5) $$B \text{ is contractive on } l_W \iff R_n(B'WB) \leq R_n(W), \quad \forall n \in \mathbb{N}.$$
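For a diagonal weight $W = \operatorname{diag}(w_0, w_1, \ldots)$ the corner conditions above become transparent: $R_n(S'WS) - R_n(W) = \operatorname{diag}(w_{j+1} - w_j)$, so $S$ is expansive on $l_W$ precisely when the weight is increasing. A minimal sketch (the concrete weight $w_j = (j+1)^2$ is just an example, not one used in the paper):

```python
# For diagonal W, (S'WS)_{i,j} = W_{i+1,j+1}, so the corner difference in (5.3)
# is the diagonal matrix of consecutive weight increments.
w = [(j + 1) ** 2 for j in range(10)]  # increasing, strictly positive weight

def corner_diff(n):
    """Diagonal of R_n(S'WS) - R_n(W) for the diagonal weight w."""
    return [w[j + 1] - w[j] for j in range(n + 1)]

for n in range(8):
    # R_n(S'WS) >= R_n(W) for every n, i.e. S is expansive on l_W, cf. (5.3)
    assert all(d >= 0 for d in corner_diff(n))
```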

Lemma 5.1. Let $W \in M_n$ be given and assume that $R_{n-1}(W)$ has no eigenvalues on $(-\infty, 0)$. Then we can choose a $c > 0$ such that $W + c\, e_n \otimes e_n$ has no eigenvalues on $(-\infty, 0)$.

Proof. Evaluating $p(\lambda) = \det(W + c\, e_n \otimes e_n - \lambda I)$ by expansion along the bottom row, we obtain
$$p(\lambda) = (c + w_{n,n} - \lambda) f(\lambda) - g(\lambda) = \Big[(c + w_{n,n} - \lambda) - \frac{g(\lambda)}{f(\lambda)}\Big] f(\lambda).$$
It is easy to see that $\deg(f) = n$ and $\deg(g) = n-1$, and that $f(\lambda) = \det(R_{n-1}(W) - \lambda I)$, so $f(\lambda) \neq 0$ for all $\lambda \in (-\infty, 0)$. If $f(0) \neq 0$ we thus get

(5.6) $$C = \sup\{|g(\lambda)/f(\lambda)| : \lambda \in (-\infty, 0)\} < \infty,$$

and hence the lemma follows by choosing $c$ such that $\inf\{|c + w_{n,n} - \lambda| : \lambda \in (-\infty, 0]\} > C$. Now suppose that $f(0) = 0$, i.e. that 0 is an eigenvalue of $R_{n-1}(W)$. The multiplicity of the root of $f$ at $\lambda = 0$ is then equal to the dimension of the kernel of $R_{n-1}(W)$. Similarly, $g(0) = \det(W - w_{n,n}\, e_n \otimes e_n) = 0$, and the multiplicity of the root $\lambda = 0$ is equal to $\dim \operatorname{Ker}(W - w_{n,n}\, e_n \otimes e_n)$. As obviously $\dim \operatorname{Ker}(W - w_{n,n}\, e_n \otimes e_n) \geq \dim \operatorname{Ker}(R_{n-1}(W))$, we conclude that (5.6) holds even in this case, and the lemma follows as above.

Given $n > 1$, we will say that a matrix $W$ is $n$-diagonal if $w_{i,j} = 0$ whenever $i \neq j$ and $\max(i,j) > n$.

Proposition 5.2. Let $W \in M_n$ be Hermitian symmetric and strictly positive. If

(5.7) $$S_{n-1}' W S_{n-1} \geq R_{n-1}(W)$$

and/or

(5.8) $$B_n' R_{n-1}(W) B_n \leq W,$$

then there exists a bounded Hermitian symmetric strictly positive $n$-diagonal matrix $W_{nd}$ satisfying $R_n(W_{nd}) = W$ such that the shift $S$ is expansive on $l_{W_{nd}}$ and/or the backward shift $B$ is a contraction on $l_{W_{nd}}$.

Proof. We begin with the part concerning $S$. Let $W_n \in M_\infty$ denote the matrix that equals $W$ in the upper left corner and has zeroes elsewhere. Put
$$W_{nd} = W_n + \sum_{m>n} c\, e_m \otimes e_m,$$
where $c \in \mathbb{R}_+$ is to be determined. By (5.1) and (5.7) we have
$$R_{n-1}(S' W_{nd} S - W_{nd}) = S_{n-1}' W S_{n-1} - R_{n-1}(W) \geq 0,$$
and hence by Lemma 5.1 we can choose $c$ such that $R_n(W_{nd}) \leq R_n(S' W_{nd} S)$. It follows that $S$ is expansive on $l_{W_{nd}}$ by (5.3) and the fact that $S' W_{nd} S - W_{nd}$ has zeroes "outside the range of $R_n$" (i.e. all entries with index $(i,j)$ satisfying $\max(i,j) \geq n$ are zero). The part concerning $B$ is similar.
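Lemma 5.1 can be illustrated concretely (a hand-made example, not from the paper): below, $W$ has a strictly positive corner $R_{n-1}(W)$ yet is itself indefinite, and adding a sufficiently large $c > 0$ to the bottom-right entry removes every negative eigenvalue; in this example the perturbed matrix is in fact positive definite. Positive definiteness is tested by Sylvester's criterion with exact rational determinants.

```python
# Illustration of Lemma 5.1: a large corner perturbation c*e_n (x) e_n removes
# the negative eigenvalues of W when R_{n-1}(W) is strictly positive.
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(A), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return sign * d

def positive_definite(M):
    # Sylvester's criterion: all leading principal minors positive.
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, len(M) + 1))

W = [[2, 0, 3], [0, 2, 3], [3, 3, 1]]
assert positive_definite([row[:2] for row in W[:2]])   # R_{n-1}(W) > 0
assert det(W) == -32                                   # W itself is indefinite
c = 100
Wc = [row[:] for row in W]
Wc[2][2] += c
assert positive_definite(Wc)                           # no negative eigenvalues left
```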


Corollary 5.3. Let $W \in M_\infty$ be Hermitian symmetric and strictly positive. If $S$ is expansive and/or $B$ is a contraction on $l_W$, then there exists a bounded Hermitian symmetric strictly positive $n$-diagonal matrix $W_{nd}$ satisfying $R_n(W_{nd}) = R_n(W)$ such that $S$ is expansive and/or $B$ is a contraction on $l_{W_{nd}}$.

Proof. This follows by combining Proposition 5.2 with the equivalences (5.1)-(5.5).

Recall that $F_n \in \operatorname{Hank}(n)$ is given by
$$F_n = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & \iddots & \vdots & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$

Lemma 5.4. Let $W \in M_n$ be Hermitian symmetric and strictly positive. Then
$$S_{n-1}' W S_{n-1} \geq R_{n-1}(W) \iff S_{n-1}' F_n W F_n S_{n-1} \leq R_{n-1}(F_n W F_n).$$

Proof. We use the notation $W = (w_{i,j}) = (w_{i,j})_{0\leq i,j\leq n}$. Note that

(5.9) $$F_n W F_n = (w_{n-i,n-j})$$

and $S_{n-1}' F_n W F_n S_{n-1} = (w_{n-1-i,n-1-j})_{0\leq i,j < n}$, so that $S_{n-1}' F_n W F_n S_{n-1} = F_{n-1} R_{n-1}(W) F_{n-1}$ and $R_{n-1}(F_n W F_n) = F_{n-1} (S_{n-1}' W S_{n-1}) F_{n-1}$; the equivalence follows since congruence with $F_{n-1}$ preserves positivity.

Theorem 5.7. Let $\sigma_N > 0$ be a fixed singular value of $\Gamma$ with multiplicity 1. Then there exists a $\sigma_N$-singular vector $u$ such that $\#(Z_R(u)) = N$ and $\|\Gamma|_{M(Z_R(u))}\| = \sigma_N$.

Proof. Let $\tilde W$ be such that $\tilde X \cong l_{\tilde W}$, as in Section 3.1, and recall that $H^2(R\mathbb{D}) \cong l_{I_{R^2}}$. It will be convenient to consider $\Gamma$ as an operator from $l_{I_{R^2}}$ to $l_{\tilde W}$, and thus $u$ will denote an element of $l_{I_{R^2}}$, whereas $\check u(z) = \sum_{j=0}^\infty u_j z^j$ will denote the corresponding element of $H^2(R\mathbb{D})$. To shorten notation, we will write $I_{R^2,n}$, $\tilde W_n$ and $\Gamma_n$ for $R_n(I_{R^2})$, $R_n(\tilde W)$ and $R_n(\Gamma)$. Note that $(\Gamma^*)_n$ is typically not the same as $(\Gamma_n)'$; in fact, by a calculation very similar to (3.1) we obtain
$$(\Gamma^*)_n = I_{R^2,n}^{-1} \Gamma_n' \tilde W_n.$$
Also note that there is no restriction to assume that $\Gamma(z^n) \neq 0$, so that $\Gamma_n$ is invertible.
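In finite dimensions the formula $(\Gamma^*)_n = I_{R^2,n}^{-1} \Gamma_n' \tilde W_n$ is the usual expression for an adjoint between weighted spaces: if the domain carries a weight $A$ and the target a weight $W$, then $\Gamma^* = A^{-1}\Gamma' W$ in the real case. A quick check, with hypothetical diagonal weights and a random real matrix standing in for $\Gamma_n$:

```python
# Verify <Gamma x, y>_W = <x, Gamma^* y>_A with Gamma^* = A^{-1} Gamma' W.
import random
random.seed(1)

n = 5
A = [4.0 ** j for j in range(n)]          # stand-in for I_{R^2,n} with R = 2
W = [1.0 / (j + 1) for j in range(n)]     # some positive target weight
G = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

# Gamma^* = A^{-1} Gamma' W; with diagonal weights this is entrywise:
Gstar = [[G[j][i] * W[j] / A[i] for j in range(n)] for i in range(n)]

def wdot(x, y, weight):
    return sum(a * b * c for a, b, c in zip(x, y, weight))

def mat_vec(M, x):
    return [sum(r[j] * x[j] for j in range(n)) for r in M]

for _ in range(50):
    x = [random.uniform(-1, 1) for _ in range(n)]
    y = [random.uniform(-1, 1) for _ in range(n)]
    # <Gamma x, y>_W must equal <x, Gamma^* y>_A
    assert abs(wdot(mat_vec(G, x), y, W) - wdot(x, mat_vec(Gstar, y), A)) < 1e-9
```

In particular $\Gamma^* \neq \Gamma'$ as soon as the weights differ, which is the point made above about $(\Gamma^*)_n$ versus $(\Gamma_n)'$.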


Clearly $\operatorname{Span}\{e_m : m > n\} = \operatorname{Ker}\Gamma$, so as $\sigma_N > \sigma_\infty = 0$, any singular vector $u$ to $\sigma_N$ lies in $\operatorname{Span}\{e_m : m \leq n\}$. It therefore suffices to prove that $\check u = \sum_{m=0}^n u_m z^m$ satisfies $\#Z_R(\check u) = N$. By Beurling's theorem and the fact that $\check u$ is a polynomial, it follows that
$$[\check u]_{M_z} = M(Z_R(\check u)),$$
so $\|\Gamma|_{M(Z_R(u))}\| = \sigma_N$ follows directly from Theorem 4.3. Moreover,

(5.12) $$\#Z_R(\check u) \geq N$$

by Corollary 4.4. Hence we are done if we show the reverse inequality. This part of the proof is inspired by Butz's proof of Theorem 1.1, [7]. We will from now on consider $u$ as an element of $\mathbb{C}^{n+1}$, as the remaining entries are zero anyway. Then $u$ satisfies
$$\sigma_N^2\, u = I_{R^2,n}^{-1} \Gamma_n' \tilde W_n \Gamma_n u.$$

Thus
$$\sigma_N^{-2}\, u = \Gamma_n^{-1} (\tilde W_n)^{-1} (\Gamma_n^{-1})' I_{R^2,n}\, u.$$
Moreover, by Lemma 3.1 it follows that
$$\sigma_N^{-2}\, u = F_n \Pi_n F_n (\tilde W_n)^{-1} F_n \Pi_n' F_n I_{R^2,n}\, u,$$
where $\Pi_n \in \operatorname{hank}(n)$. Setting $V_n = F_n (\tilde W_n)^{-1} F_n$ and noting that $F_n I_{R^2,n} F_n = R^{2n} I_{R^{-2},n}$, this yields
$$\sigma_N^{-2}\, F_n u = R^{2n} \Pi_n V_n \Pi_n' I_{R^{-2},n} F_n u.$$
Finally, with $v = F_n u$ we obtain

(5.13) $$(R^n \sigma_N)^{-2}\, v = \Pi_n V_n \Pi_n' I_{R^{-2},n}\, v.$$

The proof will be complete if we show that

(5.14) $$\#Z_{R^{-1}}(\check v) \geq n - N,$$

because the zeroes of $\check v$ are the inverses of the zeroes of $\check u$, and therefore $\#Z_{R^{-1}}(\check v) + \#Z_R(\check u) \leq n$, so that (5.14) would imply that

(5.15) $$\#Z_R(\check u) \leq n - (n - N) = N,$$

which, combined with (5.12), yields the desired result.

By Lemma 5.5, let $V$ be an $n$-diagonal matrix such that $B$ on $l_V$ is a contraction and $V_n = R_n(V)$. Let $\Pi \in \operatorname{Hank}(n)$ be the Hankel matrix obtained from $\Pi_n$ by "adding zeroes". Clearly $I_{R^{-2}}$ can be considered as a bounded operator from $l_{I_{R^{-2}}}$ onto $l_{I_{R^2}}$, and $\Pi'$ as an operator from $l_{I_{R^2}}$ into $l_V$. The adjoint of $\Pi' I_{R^{-2}}$ is then given by $\Pi V$ (cf. (3.1)). Thus $v$, considered as an element of $l_{I_{R^{-2}}}$ by "adding zeroes", is a singular vector to $\Pi' I_{R^{-2}}$ with singular value $(R^n \sigma_N)^{-2}$.

Recall that the non-zero singular values $\sigma_0 \geq \ldots \geq \sigma_n$ of $\Gamma$ are assumed to be distinct. Denote the corresponding singular vectors by $u_m$ ($m = 0, \ldots, n$) and set $v_m = F_n(u_m)$ (so $u = u_N$ and $v = v_N$ in the previous notation). In the above calculations there was nothing special about the value $N$, so it follows that each $v_m$ is a singular vector to $\Pi' I_{R^{-2}}$ with corresponding singular value $(R^n \sigma_m)^{-2}$, $m = 0, \ldots, n$. Moreover, $\Pi' I_{R^{-2}}$ has rank $n+1$, and hence these are all the non-zero singular values. Hence $(R^n \sigma_N)^{-2}$ is the $(n-N)$'th singular value ($(R^n \sigma_n)^{-2}$ is the 0'th). By Lemma 5.6 we have that
$$\|\Pi' I_{R^{-2}}|_{[\check v_N]_{M_z}}\| = (R^n \sigma_N)^{-2},$$


which, by the same calculation as in Corollary 4.4, yields that $\operatorname{codim} [\check v_N]_{M_z} \geq n - N$. (5.14) now follows by Beurling's theorem and the fact that $\check v_N$ is a polynomial.

Recall the operators $\tilde P_n$ from Lemma 3.4.

Theorem 5.8. Let $R \geq 1$ and let $l_{\tilde W}$ be such that $\tilde B$ is a contraction. Moreover, assume that $\|\tilde P_n I_r\|_{\mathcal{L}(l_{\tilde W})} \to 0$ as $n \to \infty$ for each $r < 1$. Let $\Gamma \in \mathcal{L}(H^2(R\mathbb{D}), l_{\tilde W})$ be a Hankel operator and let $N$ be such that $\sigma_{N-1} > \sigma_N = \ldots = \sigma_{N+\mu} > \sigma_{N+\mu+1} \geq \sigma_\infty$. Then there exist mutually orthogonal $\sigma_N$-singular vectors $u_N, \ldots, u_{N+\mu}$ such that
$$N \leq \#(Z_R(u_N)) \leq N + \mu,$$
and $\|\Gamma|_{M(Z_R(u_N))}\| = \sigma_N$.

Proof. Once the first statement has been established, the latter follows as before from Theorem 4.3 and Beurling's theorem. By Lemmas 3.3 and 3.4 it follows that there is a sequence $(\Gamma_n)$ of Hankel operators such that $\Gamma_n \in \operatorname{Hank}(n)$ and $\lim_n \Gamma_n = \Gamma$ in the strong operator topology. Let $E_\Gamma$ be the spectral projection measure associated with $\sqrt{\Gamma^* \Gamma}$. Clearly $\operatorname{Ran} E_\Gamma(\sigma_N)$ has dimension $\mu + 1$. We need to show that $\operatorname{Ran} E_\Gamma(\sigma_N)$ has an orthonormal basis $\{u_k\}_{k=N}^{N+\mu}$ with the desired number of zeroes. Let $P$ denote the orthogonal projection onto $\operatorname{Ran} E_\Gamma(\sigma_N)$ and let $u_k(\Gamma_n)$ be the singular vector of $\Gamma_n$ corresponding to $\sigma_k(\Gamma_n)$ for $k = 0, \ldots, n$. By Lemma 3.3 and Proposition 3.5,

(5.16) $$\lim_n \|P(u_k(\Gamma_n)) - u_k(\Gamma_n)\| = 0$$

for all $N \leq k \leq N + \mu$. Consider $\{P(u_k(\Gamma_n))\}_{k=N}^{N+\mu}$ in $\times_{k=N}^{N+\mu} E_\Gamma(\sigma_N)$ as a sequence in $n$. By compactness, this sequence has a convergent subsequence $\{P(u_k(\Gamma_{n_j}))\}_{k=N}^{N+\mu}$, $j = 1, 2, \ldots$. Let $(u_k)_{k=N}^{N+\mu}$ be the limit and note that by (5.16),
$$\{u_k(\Gamma_{n_j})\}_{k=N}^{N+\mu} \to (u_k)_{k=N}^{N+\mu}$$
as $j \to \infty$. This implies that $u_k(\Gamma_{n_j}) \to u_k$ uniformly on compacts in $R\mathbb{D}$, and hence $u_k$ satisfies $\#Z_R(u_k) \leq N + \mu$ by Theorem 5.7. On the other hand, $\#Z_R(u_k) \geq N$ by Theorem 4.3 and the assumption that $\sigma_{N-1} > \sigma_N$. The proof is complete.

Corollary 5.9. Let $R \geq 1$ and let $w = (w_j)$ be an increasing strictly positive sequence. Let $\Gamma : H^2(R\mathbb{D}) \to D^2(w)$ be a Hankel operator and let $N$ be such that $\sigma_{N-1} > \sigma_N = \ldots = \sigma_{N+\mu} > \sigma_{N+\mu+1} \geq \sigma_\infty$. Then there exist mutually orthogonal $\sigma_N$-singular vectors $u_N, \ldots, u_{N+\mu}$ such that
$$N \leq \#(Z_R(u_N)) \leq N + \mu,$$
and $\|\Gamma|_{M(Z_R(u_N))}\| = \sigma_N$.

REFERENCES

[1] Adamjan, V. M.; Arov, D. Z.; Krein, M. G. Infinite Hankel matrices and generalized problems of Carathéodory-Fejér and F. Riesz. (Russian) Funkcional. Anal. i Priložen. 2 (1968), no. 1, 1–19.
[2] Adamjan, V. M.; Arov, D. Z.; Krein, M. G. Infinite Hankel matrices and generalized Carathéodory-Fejér and I. Schur problems. (Russian) Funkcional. Anal. i Priložen. 2 (1968), no. 4, 1–17.


[3] Adamjan, V. M.; Arov, D. Z.; Krein, M. G. Analytic properties of the Schmidt pairs of a Hankel operator and the generalized Schur-Takagi problem. (Russian) Mat. Sb. (N.S.) 86(128) (1971), 34–75.
[4] Adamjan, V. M.; Arov, D. Z.; Krein, M. G. Infinite Hankel block matrices and related problems of extension. (Russian) Izv. Akad. Nauk Armjan. SSR Ser. Mat. 6 (1971), no. 2-3, 87–112.
[5] Arcozzi, N.; Rochberg, R.; Sawyer, E.; Wick, B. D. Bilinear forms on the Dirichlet space. Preprint.
[6] Beylkin, G.; Monzón, L. On approximation of functions with exponential sums. Appl. Comput. Harmon. Anal. 19 (2005), 17–48.
[7] Butz, J. R. s-numbers of Hankel matrices. J. Functional Analysis 15 (1974), 297–305.
[8] Carlsson, M. On the Cowen-Douglas class for Banach space operators. Integral Equations Operator Theory 61 (2008), no. 4, 593–598.
[9] Clark, D. N. On the spectra of bounded, Hermitian, Hankel matrices. Amer. J. Math. 90 (1968), 627–656.
[10] Cotlar, M.; Sadosky, C. Generalized Bochner theorem in algebraic scattering systems. Analysis at Urbana, Vol. II (Urbana, IL, 1986–1987), 144–169, London Math. Soc. Lecture Note Ser., 138, Cambridge Univ. Press, Cambridge, 1989.
[11] Cotlar, M.; Sadosky, C. Abstract, weighted, and multidimensional Adamjan-Arov-Krein theorems, and the singular numbers of Sarason commutants. Integral Equations Operator Theory 17 (1993), no. 2, 169–201.
[12] Cotlar, M.; Sadosky, C. Hankel forms and operators in Hardy spaces with two Szegő weights. Operator theory and interpolation (Bloomington, IN, 1996), 145–162, Oper. Theory Adv. Appl., 115, Birkhäuser, Basel, 2000.
[13] Cox, D.; Little, J.; O'Shea, D. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer-Verlag, 2005.
[14] Sz.-Nagy, B.; Foiaş, C. Harmonic analysis of operators on Hilbert space. North-Holland Publishing Co., 1970.
[15] Hartman, P. On completely continuous Hankel matrices. Proc. Amer. Math. Soc. 9 (1958), 862–866.
[16] Nikolski, N. K. Operators, functions, and systems: an easy reading. Vol. 1. Hardy, Hankel, and Toeplitz. Mathematical Surveys and Monographs, 92. American Mathematical Society, Providence, RI, 2002.
[17] Treil, S.; Volberg, A. A fixed point approach to Nehari's problem and its applications. Oper. Theory Adv. Appl., 71, 165–186, Birkhäuser, Basel, 1994.
On completely continuous Hankel matrices. Proc. Amer. Math. Soc. 9 1958 862–866. [16] Nikolski, N. K. Operators, functions, and systems: an easy reading. Vol. 1. Hardy, Hankel, and Toeplitz. Mathematical Surveys and Monographs, 92. American Mathematical Society, Providence, RI, 2002. [17] Treil, S.; Volberg. A. A fixed point approach to Nehari’s problem and its applications. 165–186, Oper. Theory Adv. Appl., 71, Birkhuser, Basel, 1994.