Numerical Algorithms (2005) 39: 131–142
© Springer 2005
Classical orthogonal polynomials in two variables: a matrix approach ∗

Lidia Fernández a, Teresa E. Pérez b and Miguel A. Piñar b

a Departamento de Matemática Aplicada, Universidad de Granada, Granada, Spain
E-mail: [email protected]

b Departamento de Matemática Aplicada, and Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada, Spain
E-mail: {tperez;mpinar}@ugr.es

Received 30 September 2003; accepted 8 March 2004
Classical orthogonal polynomials in two variables are defined as the orthogonal polynomials associated with a two-variable moment functional satisfying a matrix analogue of the Pearson differential equation. Furthermore, we characterize classical orthogonal polynomials in two variables as the polynomial solutions of a matrix second-order partial differential equation.

Keywords: orthogonal polynomials in two variables, classical orthogonal polynomials

AMS subject classification: 42C05, 33C50
1. Introduction
Orthogonal polynomials in two variables are studied as the natural generalization of orthogonal polynomials in one variable. However, some basic concepts for orthogonal polynomials in one variable have not yet been generalized to several variables. The main problem is that, in two variables, the set of polynomials of total degree less than or equal to n is a linear space of dimension (n + 1)(n + 2)/2. Then, for n ≥ 1, the selection of a basis for the polynomials of total degree n conditions the structure of the formulae relating the orthogonal polynomials. In 1982, Kowalski [8,9] introduced the matrix notation for polynomials in several variables, and showed analogues of the three-term recurrence relation and Favard's theorem for orthogonal polynomials in several variables. In fact, [8] is the starting point for proving results for orthogonal polynomials in several variables similar to the standard properties of one-variable orthogonal polynomials (see [3,5,6,15]).

∗ Partially supported by Ministerio de Ciencia y Tecnología (MCYT) of Spain and by the European Regional Development Fund (ERDF) through the grant BFM2001-3878-C02-02, Junta de Andalucía, G.I. FQM 0229 and INTAS Project 2000-272.
132
L. Fernández et al. / Two-variable orthogonal polynomials
Classical orthogonal polynomials in one variable are characterized as the only sequences of orthogonal polynomials satisfying very well known properties: Rodrigues formula, structure relation, orthogonality of the derivatives, and so on (for more details, see [2]). One of the most important characterizations of classical orthogonal polynomials in one variable was given by Bochner in 1929 (see [1]). Let {P_n}_n be a sequence of orthogonal polynomials; then P_n(x) is a solution of the second-order differential equation

φ(x) y'' + ψ(x) y' = λ_n y,    (1)

where φ(x) and ψ(x) are fixed polynomials of degree at most 2 and exactly 1, respectively, and λ_n is a real number depending on the degree of the polynomial solution, if and only if {P_n}_n is a classical family, namely Hermite, Laguerre, Jacobi or Bessel.

In 1967, Krall and Sheffer (see [10]) extended the concept of a classical sequence of orthogonal polynomials to the bivariate case. In fact, they define the classical orthogonal polynomials as the two-dimensional orthogonal polynomial solutions of the second-order partial differential equation

(a x² + d₁ x + e₁ y + f₁) w_xx + (2 a x y + d₂ x + e₂ y + f₂) w_xy + (a y² + d₃ x + e₃ y + f₃) w_yy + (g x + h₁) w_x + (g y + h₂) w_y = λ_n w,    (2)
where λ_n = a n(n − 1) + g n. From this equation, they deduced nine types of classical 2D orthogonal polynomials, depending, as in the one-variable case, on the polynomial coefficients in (2). The classification obtained in [10] is not exhaustive, since the tensor product of Jacobi polynomials in one variable, a classical family in [3], does not satisfy a partial differential equation of the form (2). In this paper, using the vector polynomial notation introduced in [8], we define the classical character of a sequence of 2D orthogonal polynomials in terms of a matrix Pearson-type equation satisfied by the moment functional, as we show in section 3. Finally, we give a characterization of classical 2D orthogonal polynomials as the solutions of a matrix partial differential equation, and from this new point of view, we recover the results of Krall and Sheffer and extend the class of classical 2D orthogonal polynomials to include the tensor products of classical one-variable orthogonal polynomials.
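As a concrete illustration of the Krall–Sheffer setting, the following sketch (using sympy; the tensor-product Hermite family is chosen here purely as an example) verifies that products of Hermite polynomials solve an equation of the form (2) with a = 0 and g = −2, so that λ_n = −2n:

```python
# Sketch: products of Hermite polynomials solve the Krall-Sheffer
# equation (2) with a = 0, g = -2, hence lambda_n = -2n.
from sympy import symbols, hermite, diff, simplify

x, y = symbols('x y')

def ks_operator(w):
    # (2) with coefficients 1, 0, 1 for w_xx, w_xy, w_yy,
    # and gx + h1 = -2x, gy + h2 = -2y
    return (diff(w, x, 2) + diff(w, y, 2)
            - 2*x*diff(w, x) - 2*y*diff(w, y))

for h in range(4):
    for k in range(4):
        w = hermite(h, x) * hermite(k, y)
        n = h + k
        assert simplify(ks_operator(w) + 2*n*w) == 0
print("Hermite products satisfy (2) with lambda_n = -2n")
```

Note that λ_n depends only on the total degree n = h + k, which is the defining feature of the Krall–Sheffer classes; the tensor Jacobi example treated in remark 4 fails precisely this property.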
2. Basic facts
Here we collect the basic definitions and tools needed in the rest of the paper. For a complete theory, see [3,8,9,14].

Let us denote by P the linear space of polynomials in two variables with real coefficients, and by P′ its topological dual (see [13]). Let P_n be the set of two-variable real polynomials of total degree not greater than n,

P_n = { p(x, y) = Σ_{i+j=0}^{n} c_{i,j} x^i y^j : c_{i,j} ∈ R }.
As usual, we denote by M_{h×k}(R) the linear space of (h × k) real matrices, and by M_{h×k}(P) the linear space of (h × k) matrices with 2D polynomial entries. If h = k, we write M_{h×h} ≡ M_h.

Let {µ_{h,k}}_{h,k≥0} be a doubly indexed sequence of real numbers, and let us denote by u the unique linear functional in P′ satisfying

⟨u, x^h y^k⟩ = µ_{h,k},  h, k = 0, 1, . . . ;

then u is called the 2D moment functional determined by {µ_{h,k}}_{h,k≥0}, and the number µ_{h,k} is called the moment of order (h, k). Here ⟨·, ·⟩ denotes the duality bracket.

First, we define some distributional operations relating 2D moment functionals, matrices, and 2D polynomials:

• Let M = (m_{i,j}(x, y))_{i,j=1}^{h,k} ∈ M_{h×k}(P) be a polynomial matrix, and u a 2D moment functional. We define (see [3]) the action of u on M as

⟨u, M⟩ = (⟨u, m_{i,j}(x, y)⟩)_{i,j=1}^{h,k} ∈ M_{h×k}(R).

• Given a polynomial matrix M, we define the left product of M times u by means of

⟨Mu, p⟩ = ⟨u, M^T p⟩,  ∀p ∈ P, ∀M ∈ M_{h×k}(P).

If {P_{n,0}, P_{n−1,1}, . . . , P_{0,n}} are polynomials of total degree n, then we can use the matrix notation introduced in [8] and denote by P_n the vector polynomial

P_n = (P_{n,0}, P_{n−1,1}, . . . , P_{0,n})^T.

When {P_m}_{m=0}^{n} is a basis of P_n for each n ≥ 0, then {P_n}_{n≥0} is called a polynomial system (PS).

Definition 2.1. We will say that a PS {P_n}_{n≥0} is a weak orthogonal polynomial system (WOPS) with respect to u if

⟨u, P_n P_m^T⟩ = 0,  m ≠ n,
⟨u, P_n P_n^T⟩ = H_n,    (3)

where H_n ∈ M_{n+1}(R) is a regular matrix. If H_n is a diagonal matrix, we say that the WOPS {P_n}_{n≥0} is an orthogonal polynomial system (OPS). In the particular case where H_n is the identity matrix, we call {P_n}_{n≥0} an orthonormal polynomial system.
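The block conditions (3) can be checked symbolically for a concrete functional. The following sketch (assuming sympy, with u the Lebesgue functional on [−1, 1]² and the tensor-product Legendre system as an illustrative example) verifies that the off-diagonal blocks vanish and that H_n is diagonal and regular, so this system is in fact an OPS:

```python
# Sketch: the orthogonality conditions (3) for the tensor-product
# Legendre system, with u = Lebesgue functional on [-1,1]^2.
from sympy import symbols, legendre, integrate, Matrix

x, y = symbols('x y')

def u(p):  # moment functional: integration over the square
    return integrate(integrate(p, (x, -1, 1)), (y, -1, 1))

def P(n):  # vector polynomial P_n = (P_{n,0}, ..., P_{0,n})^T
    return Matrix([legendre(n - j, x) * legendre(j, y) for j in range(n + 1)])

G12 = (P(1) * P(2).T).applyfunc(u)   # off-diagonal block, must vanish
H2 = (P(2) * P(2).T).applyfunc(u)    # H_2, must be diagonal and regular
assert G12 == Matrix.zeros(2, 3)
assert H2.is_diagonal() and H2.det() != 0
print("tensor Legendre is an OPS; H_2 =", H2)
```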
Moreover, a WOPS is called a monomial WOPS if every polynomial contains only one term of highest degree, that is,

P_{h,k}(x, y) = c_{h,k} x^h y^k + R(x, y),  h + k = n,

where c_{h,k} ≠ 0 is a real constant and R(x, y) ∈ P_{n−1}. If c_{h,k} = 1, we will say that {P_n}_{n≥0} is a monic monomial WOPS.

Now, we recall the concept of regularity of a linear functional u (see [10]). For a given 2D moment functional u with moments {µ_{h,k}}_{h,k≥0}, the Hankel determinant Δ_n is defined as the determinant of order (n + 1)(n + 2)/2 whose first row consists of the elements

µ_{00}, µ_{10}, µ_{01}, µ_{20}, µ_{11}, µ_{02}, . . . , µ_{n0}, µ_{n−1,1}, . . . , µ_{0n},

and whose subsequent rows are obtained by adding the integer pairs (i, j) to the above subscript pairs, where (i, j) runs successively through the values

(i, j) = (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), . . . , (n, 0), (n − 1, 1), . . . , (0, n).

That is, Δ_0 = µ_{00},

\Delta_1 = \begin{vmatrix} \mu_{00} & \mu_{10} & \mu_{01} \\ \mu_{10} & \mu_{20} & \mu_{11} \\ \mu_{01} & \mu_{11} & \mu_{02} \end{vmatrix}, \qquad
\Delta_n = \begin{vmatrix} \mu_{00} & \mu_{10} & \cdots & \mu_{n0} & \cdots & \mu_{0n} \\ \mu_{10} & \mu_{20} & \cdots & \mu_{n+1,0} & \cdots & \mu_{1n} \\ \vdots & \vdots & & \vdots & & \vdots \\ \mu_{0n} & \mu_{1n} & \cdots & \mu_{nn} & \cdots & \mu_{0,2n} \end{vmatrix}.
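As an illustration, the first two Hankel determinants can be computed for a simple moment functional. The sketch below (assuming sympy, and taking as example the moments of Lebesgue measure on [−1, 1]², a regular functional) builds Δ_1 exactly as prescribed above, by adding row multi-indices to column multi-indices:

```python
# Sketch: Hankel determinants Delta_0, Delta_1 for the moments of
# Lebesgue measure on [-1,1]^2 (an example of a regular functional).
from sympy import Rational, Matrix

def mu(h, k):  # moment of order (h, k): integral of x^h y^k over the square
    if h % 2 or k % 2:
        return 0
    return Rational(2, h + 1) * Rational(2, k + 1)

idx = [(0, 0), (1, 0), (0, 1)]   # multi-index sequence up to order n = 1
D1 = Matrix(3, 3, lambda r, c: mu(idx[r][0] + idx[c][0],
                                  idx[r][1] + idx[c][1]))
assert mu(0, 0) != 0             # Delta_0 = mu_00
assert D1.det() != 0             # Delta_1
print("Delta_0 =", mu(0, 0), " Delta_1 =", D1.det())
```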
The moment functional u is called regular (or quasi-definite) if and only if Δ_n ≠ 0 for n ≥ 0. The following proposition was proved in [10].

Proposition 2.1. For a moment functional u, the following statements are equivalent:

(i) u is regular (or quasi-definite).
(ii) There is a unique monic monomial WOPS relative to u.
(iii) There is an OPS relative to u.

Throughout this paper, we will use the gradient operator ∇ and the divergence operator div, defined as usual by

\nabla p = \begin{pmatrix} \partial_x p \\ \partial_y p \end{pmatrix}, \qquad \operatorname{div} \begin{pmatrix} p \\ q \end{pmatrix} = \partial_x p + \partial_y q,
acting over P. Moreover, we define the distributional gradient operator and the distributional divergence operator, acting over moment functionals, in the following way:

\left\langle \nabla u, \begin{pmatrix} p \\ q \end{pmatrix} \right\rangle = -\left\langle u, \operatorname{div} \begin{pmatrix} p \\ q \end{pmatrix} \right\rangle = -\langle u, \partial_x p + \partial_y q \rangle, \quad \forall p, q \in P,

\left\langle \operatorname{div} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, p \right\rangle = -\left\langle \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \nabla p \right\rangle = -\langle u_1, \partial_x p \rangle - \langle u_2, \partial_y p \rangle, \quad \forall p \in P.

Using the above definitions, we introduce the action of ∇ and div on a 2D polynomial matrix.

Definition 2.2. Let M, N ∈ M_{h×k}(P) be polynomial matrices. We define

\nabla M = \begin{pmatrix} \partial_x M \\ \partial_y M \end{pmatrix} \in M_{2h \times k}, \qquad \operatorname{div} \begin{pmatrix} M \\ N \end{pmatrix} = \partial_x M + \partial_y N \in M_{h \times k}.

3. Classical orthogonal polynomials
First, we introduce the concept of a 2D classical moment functional from a Pearson-type differential equation satisfied by a linear functional u, as Marcellán, Branquinho and Petronilho do in the univariate case; see [12].

Definition 3.1. A regular moment functional u is said to be classical if it satisfies the Pearson-type equation

\operatorname{div}(\phi u) = \psi^T u,    (4)

where

\phi = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \qquad \psi = \begin{pmatrix} d \\ e \end{pmatrix},    (5)

are polynomial matrices such that a = a(x, y), b = b(x, y) and c = c(x, y) are polynomials of total degree at most 2, d = d(x, y) and e = e(x, y) are polynomials of total degree at most 1, and

det⟨u, φ⟩ ≠ 0.    (6)
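When u is given by a smooth weight W with rapidly decaying (or vanishing) boundary behaviour, integration by parts turns the distributional equation (4) into the pointwise identities ∂_x(aW) + ∂_y(bW) = dW and ∂_x(bW) + ∂_y(cW) = eW. The following sketch (an assumption-laden example: u represented by the Hermite weight W = e^{−x²−y²}, with φ = I₂) checks these identities symbolically:

```python
# Sketch (assumption: u represented by the weight W = exp(-x^2 - y^2),
# so (4) becomes the pointwise identities d_x(aW) + d_y(bW) = dW and
# d_x(bW) + d_y(cW) = eW).
from sympy import symbols, exp, diff, simplify

x, y = symbols('x y')
W = exp(-x**2 - y**2)
a, b, c = 1, 0, 1           # phi = identity matrix (degree <= 2)
d, e = -2*x, -2*y           # psi: polynomials of degree 1

assert simplify(diff(a*W, x) + diff(b*W, y) - d*W) == 0
assert simplify(diff(b*W, x) + diff(c*W, y) - e*W) == 0
print("Hermite weight satisfies the Pearson-type equation (4)")
```

Here ⟨u, φ⟩ = πI₂, so condition (6) also holds.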
At this point, we need to define the left multiplication of φ∇ and ψ times a polynomial matrix. Let M ∈ M_{h×k}, and let φ, ψ be as given in (5). We observe

(\phi \otimes I_h)\nabla M = \begin{pmatrix} a\,\partial_x M + b\,\partial_y M \\ b\,\partial_x M + c\,\partial_y M \end{pmatrix} \in M_{2h \times k}, \qquad
(\psi \otimes I_h) M = \begin{pmatrix} d M \\ e M \end{pmatrix} \in M_{2h \times k}, \qquad
(\psi^T \otimes I_h)\nabla M = d\,\partial_x M + e\,\partial_y M \in M_{h \times k},    (7)
where ⊗ is the matrix Kronecker product (see [4]) and I_h is the (h × h) identity matrix. From the above observations, it is easy to prove the following.

Lemma 3.1. Let M ∈ M_{h×k} and N ∈ M_{k×m} be two polynomial matrices. Then we have

(i) (\phi \otimes I_h)\nabla(MN) = \big((\phi \otimes I_h)\nabla M\big) N + \operatorname{diag}(M, M)\,(\phi \otimes I_k)\nabla N \in M_{2h \times m},

(ii) \operatorname{div}\big(\operatorname{diag}(M, M)\,(\phi \otimes I_k)\nabla N\big) = M \operatorname{div}\big((\phi \otimes I_k)\nabla N\big) + (\nabla M^T)^T (\phi \otimes I_k)\nabla N \in M_{h \times m},

where, as usual, diag(M, M) denotes the block diagonal matrix

\operatorname{diag}(M, M) = \begin{pmatrix} M & 0 \\ 0 & M \end{pmatrix} \in M_{2h \times 2k}.

Observe that, if u is a classical moment functional, then we can write (4) as

\phi \nabla u = \tilde{\psi} u,    (8)

where

\tilde{\psi} = \psi - (\operatorname{div}\phi)^T = \begin{pmatrix} d - a_x - b_y \\ e - b_x - c_y \end{pmatrix}.    (9)
A WOPS with respect to a classical moment functional u is called a classical WOPS. From now on, we will denote by {P_n}_n the classical monic monomial WOPS with respect to u. Using the classical character of u, we can deduce some interesting properties. In fact, for n = 0 we have P_0 = 1, and therefore

0 = −⟨φu, ∇P_0^T⟩ = ⟨div(φu), P_0^T⟩ = ⟨ψ^T u, P_0^T⟩ = ⟨u, ψ⟩,

that is, ⟨u, d⟩ = 0 and ⟨u, e⟩ = 0. From the regularity condition for u, we deduce that deg d = deg e = 1. Finally, we can deduce that

ψ = A P_1,    (10)

where A is a regular matrix.

Now we have the necessary tools to prove the first characterization of the classical polynomials in two variables, the so-called structure relation.

Proposition 3.2. Let u be a regular moment functional and {P_n}_{n≥0} the monic monomial WOPS with respect to u. Then u is classical if and only if {P_n}_{n≥0} satisfies

φ∇P_n^T = diag(P_{n+1}^T, P_{n+1}^T) F_{n+1}^n + diag(P_n^T, P_n^T) F_n^n + diag(P_{n−1}^T, P_{n−1}^T) F_{n−1}^n,    (11)

for n ≥ 1, where F_m^n = \begin{pmatrix} F_{m,1}^n \\ F_{m,2}^n \end{pmatrix} is a column block matrix, with F_{m,i}^n ∈ M_{(m+1)×(n+1)}(R), such that F_0^1 is regular.
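Before the proof, a quick sanity check of relation (11) in the simplest nontrivial case (a sketch, assuming sympy, for the monic Hermite products with φ = I₂ and n = 2; for this family F_3^2 = F_2^2 = 0 and only the block F_1^2 survives):

```python
# Sketch: structure relation (11) for monic Hermite products, n = 2,
# phi = I_2; here only the F_1^2 block is nonzero.
from sympy import symbols, Rational, Matrix, diff

x, y = symbols('x y')
p2 = lambda t: t**2 - Rational(1, 2)      # monic Hermite of degree 2

P2 = Matrix([[p2(x), x*y, p2(y)]])        # P_2^T, a 1 x 3 row
gradP2 = Matrix([[diff(q, x) for q in P2],
                 [diff(q, y) for q in P2]])   # phi * grad(P_2^T), 2 x 3

diagP1 = Matrix([[x, y, 0, 0],
                 [0, 0, x, y]])           # diag(P_1^T, P_1^T)
F12 = Matrix([[2, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 2]])                 # F_1^2, column block (4 x 3)

assert gradP2 == diagP1 * F12
print("structure relation holds for n = 2 with the F_1^2 above")
```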
Proof. Suppose that u is classical, that is, u satisfies the Pearson-type equation (4). The expression φ∇P_n^T is a matrix of dimension 2 × (n + 1) whose entries are polynomials of degree at most n + 1. Then we can write

\phi \nabla P_n^T = \sum_{m=0}^{n+1} \operatorname{diag}(P_m^T, P_m^T)\, F_m^n,

where F_m^n can be obtained as follows:

⟨u, diag(P_m, P_m) φ∇P_n^T⟩ = diag(H_m, H_m) F_m^n.
On the other hand, using lemma 3.1 and equation (4), we get

⟨u, diag(P_m, P_m) φ∇P_n^T⟩
= ⟨u, (φ ⊗ I_m)∇(P_m P_n^T)⟩ − ⟨u, ((φ ⊗ I_m)∇P_m) P_n^T⟩
= −⟨div((φ ⊗ I_m)u), P_m P_n^T⟩ − ⟨u, ((φ ⊗ I_m)∇P_m) P_n^T⟩
= −⟨(ψ^T ⊗ I_m)u, P_m P_n^T⟩ − ⟨u, ((φ ⊗ I_m)∇P_m) P_n^T⟩
= −⟨u, ((ψ ⊗ I_m)P_m) P_n^T⟩ − ⟨u, ((φ ⊗ I_m)∇P_m) P_n^T⟩
= −⟨u, ((ψ ⊗ I_m)P_m + (φ ⊗ I_m)∇P_m) P_n^T⟩.

Observe that (ψ ⊗ I_m)P_m + (φ ⊗ I_m)∇P_m is a polynomial matrix of degree at most m + 1. From the orthogonality we deduce F_m^n = 0 for m < n − 1, and relation (11) holds. Moreover, for n = 1 and m = 0, using (10), we obtain

diag(H_0, H_0) F_0^1 = −⟨u, (ψ P_0 + φ∇P_0) P_1^T⟩ = −⟨u, ψ P_1^T⟩ = −A H_1.

In this way, since A, H_1 and diag(H_0, H_0) are regular matrices, we conclude that F_0^1 is also a regular matrix.

Conversely, let us assume that {P_n}_{n≥0} satisfies (11). Then we have

⟨div(φu), P_n^T⟩ = −⟨u, φ∇P_n^T⟩
= −⟨u, diag(P_{n+1}^T, P_{n+1}^T) F_{n+1}^n⟩ − ⟨u, diag(P_n^T, P_n^T) F_n^n⟩ − ⟨u, diag(P_{n−1}^T, P_{n−1}^T) F_{n−1}^n⟩.

Therefore,

⟨div(φu), P_n^T⟩ = 0,  n ≥ 2.

Obviously, this equality also holds for n = 0, and for n = 1 we have

⟨div(φu), P_1^T⟩ = −⟨u, φ∇P_1^T⟩ = −diag(H_0, H_0) F_0^1.

So, we conclude that ⟨div(φu), P_n^T⟩ = 0 for n ≠ 1, and ⟨div(φu), P_1^T⟩ = −diag(H_0, H_0) F_0^1.

Now, let v be the (1 × 2) moment functional matrix given by

v = −\big(P_1^T H_1^{-1} (F_0^1)^T \operatorname{diag}(H_0, H_0)\big) u.
This moment functional matrix satisfies

⟨v, P_n^T⟩ = ⟨u, −diag(H_0, H_0) F_0^1 H_1^{-1} P_1 P_n^T⟩ = −diag(H_0, H_0) F_0^1 H_1^{-1} ⟨u, P_1 P_n^T⟩.    (12)

Then, from the orthogonality relation (3), it follows that ⟨div(φu), P_n^T⟩ = ⟨v, P_n^T⟩ for n ≥ 0, that is,

\operatorname{div}(\phi u) = v = -\big(P_1^T H_1^{-1} (F_0^1)^T \operatorname{diag}(H_0, H_0)\big) u = \psi^T u.
From the regularity of the matrices H_1^{-1}, (F_0^1)^T and diag(H_0, H_0), we deduce that ψ^T = −P_1^T H_1^{-1} (F_0^1)^T diag(H_0, H_0) is a vector of polynomials of degree 1, and the result follows. Condition (6) follows from relation (11) for n = 1. □

Remark 1. In the above proposition, the monic character of the polynomials {P_n}_{n≥0} is unnecessary. If {Q_n}_{n≥0} is another WOPS associated with the classical moment functional u, then there exists a sequence of regular matrices {A_n}_{n≥0} such that Q_n^T = P_n^T A_n. In this way,

φ∇Q_n^T = φ∇(P_n^T A_n) = (φ∇P_n^T) A_n
= diag(P_{n+1}^T, P_{n+1}^T) F_{n+1}^n A_n + diag(P_n^T, P_n^T) F_n^n A_n + diag(P_{n−1}^T, P_{n−1}^T) F_{n−1}^n A_n
= diag(Q_{n+1}^T, Q_{n+1}^T) diag(A_{n+1}^{-1}, A_{n+1}^{-1}) F_{n+1}^n A_n
+ diag(Q_n^T, Q_n^T) diag(A_n^{-1}, A_n^{-1}) F_n^n A_n
+ diag(Q_{n−1}^T, Q_{n−1}^T) diag(A_{n−1}^{-1}, A_{n−1}^{-1}) F_{n−1}^n A_n,

and therefore {Q_n}_{n≥0} satisfies the same kind of relation.

Next, we are going to characterize the two-variable classical orthogonal polynomials as those orthogonal polynomials satisfying the Bochner-type second-order partial differential equation

\operatorname{div}(\phi \nabla P_n^T) + \tilde{\psi}^T \nabla P_n^T = P_n^T \Lambda_n^T,    (13)

where φ and ψ̃ are given by (5) and (9), and Λ_n ∈ M_{n+1}(R). To this end, we define the operator L by means of

L[p] = \operatorname{div}(\phi \nabla p) + \tilde{\psi}^T \nabla p,

and denote by L* the formal Lagrange adjoint of L,

L^*[u] = \operatorname{div}(\phi \nabla u) - \operatorname{div}(\tilde{\psi} u);
With this definition, L[P_n^T] is a polynomial vector whose individual components are obtained by applying the operator L to the polynomials in P_n. Next, we prove a useful lemma.

Lemma 3.3. Let u be a regular moment functional, and {P_n}_{n≥0} the corresponding monic monomial WOPS. If there exist matrices Λ_n ∈ M_{n+1}(R) such that

L[P_n^T] = P_n^T Λ_n^T,  n ≥ 0,    (14)

then L*[u] = 0.

Proof. Let us assume that {P_n}_{n≥0} satisfies (14). Then, for n ≥ 1, we have

⟨div(φ∇u), P_n^T⟩ = ⟨u, div(φ∇P_n^T)⟩ = ⟨u, −ψ̃^T ∇P_n^T + P_n^T Λ_n^T⟩
= −⟨u, ψ̃^T ∇P_n^T⟩ + ⟨u, P_n^T Λ_n^T⟩ = ⟨div(ψ̃u), P_n^T⟩,

and the case n = 0 is trivial. In this way, div(φ∇u) = div(ψ̃u), and therefore L*[u] = 0. □

The action of the operator L can be extended to a polynomial matrix M, using (7), in the following way:

L[M] = \operatorname{div}\big((\phi \otimes I)\nabla M\big) + (\tilde{\psi}^T \otimes I)\nabla M,

where I is the identity matrix of the adequate size. From lemma 3.1 we obtain

Lemma 3.4. Let {P_n}_{n≥0} be a PS. Then

L[P_n P_m^T] = L[P_n] P_m^T + P_n L[P_m^T] + 2 (\nabla P_n^T)^T \phi \nabla P_m^T.

From lemmas 3.3 and 3.4, we can easily deduce the characterization of the two-variable classical orthogonal polynomials as the solutions of a Bochner-type partial differential equation.

Proposition 3.5. Let u be a regular moment functional and {P_n}_{n≥0} the corresponding monic monomial WOPS. Then u is classical if and only if there exist matrices Λ_n ∈ M_{n+1}(R) such that

L[P_n^T] = P_n^T Λ_n^T,    (15)

where H_1 Λ_1^T is a symmetric matrix.

Proof. Let us assume that u satisfies relation (8), that is, u is a classical functional. Then it is clear that L*[u] = 0, and hence

⟨u, L[P_m P_n^T]⟩ = ⟨L*[u], P_m P_n^T⟩ = 0.
On the other hand, from lemma 3.4,

⟨u, L[P_m P_n^T]⟩ = ⟨u, L[P_m] P_n^T⟩ + ⟨u, P_m L[P_n^T]⟩ + 2⟨u, (∇P_m^T)^T φ∇P_n^T⟩,

and using the structure relation (11), ⟨u, (∇P_m^T)^T φ∇P_n^T⟩ = 0 for m < n. Moreover, by the orthogonality condition, ⟨u, L[P_m] P_n^T⟩ = 0 for m < n, and therefore

⟨u, P_m L[P_n^T]⟩ = 0,  m < n.

Consequently, L[P_n^T] is orthogonal to P_m for m < n, its total degree is not greater than n, and its dimension is 1 × (n + 1). Adjusting the leading coefficients, we conclude that

L[P_n^T] = P_n^T Λ_n^T,

where Λ_n ∈ M_{n+1}(R). Furthermore, using relation (4),

⟨div(φu), P_1^T⟩ = ⟨ψ^T u, P_1^T⟩,

or equivalently,

−⟨u, φ∇P_1^T⟩ = ⟨u, ψ P_1^T⟩.

From L[P_1] = ψ = Λ_1 P_1 and ∇P_1^T = I_2, where I_2 is the 2 × 2 identity matrix, we deduce that −⟨u, φ⟩ = Λ_1 H_1, and therefore Λ_1 H_1 is symmetric.

To prove the converse, we observe that u satisfies (4) if and only if

⟨div(φu), P_n^T⟩ = ⟨ψ^T u, P_n^T⟩,  n ≥ 0,

that is,

−⟨u, φ∇P_n^T⟩ = ⟨u, ψ P_n^T⟩,  n ≥ 0.

Taking into account that ∇P_1^T = I_2, the previous condition is equivalent to

(a) ⟨u, ψ⟩ = 0, for n = 0,
(b) −⟨u, φ⟩ = ⟨u, ψ P_1^T⟩, for n = 1,
(c) ⟨u, φ∇P_n^T⟩ = 0, for n > 1.

If we assume that {P_n}_{n≥0} satisfies (15), then, from lemma 3.3, we get L*[u] = 0, and then

0 = ⟨L*[u], P_1^T⟩ = ⟨u, L[P_1^T]⟩ = ⟨u, div(φ∇P_1^T) + ψ̃^T ∇P_1^T⟩ = ⟨u, div φ + ψ̃^T⟩ = ⟨u, ψ^T⟩,
so we obtain (a). By using lemma 3.4 and ∇P_1^T = I_2, we get

0 = ⟨L*[u], P_1 P_n^T⟩ = ⟨u, L[P_1 P_n^T]⟩
= ⟨u, L[P_1] P_n^T⟩ + ⟨u, P_1 L[P_n^T]⟩ + 2⟨u, (∇P_1^T)^T φ∇P_n^T⟩
= Λ_1 ⟨u, P_1 P_n^T⟩ + ⟨u, P_1 P_n^T⟩ Λ_n^T + 2⟨u, φ∇P_n^T⟩.

Therefore, for n > 1, we get ⟨u, φ∇P_n^T⟩ = 0, and so (c) is proved. For n = 1, we have

Λ_1 ⟨u, P_1 P_1^T⟩ + ⟨u, P_1 P_1^T⟩ Λ_1^T + 2⟨u, φ⟩ = 0,

that is,

Λ_1 H_1 + H_1 Λ_1^T + 2⟨u, φ⟩ = 0.

Then, using the symmetry of the matrix H_1 Λ_1^T, we conclude that

−⟨u, φ⟩ = H_1 Λ_1^T = ⟨u, P_1 ψ^T⟩,

and condition (b) holds. □
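The eigenvalue relation (15) can be verified symbolically for a concrete classical family. The sketch below (assuming sympy; the Hermite products are an illustrative example with φ = I₂, ψ̃ = (−2x, −2y)^T, for which Λ_n = −2n I_{n+1}) checks that every component of P_n is an eigenfunction of L:

```python
# Sketch: Bochner-type equation (15) for Hermite products, where
# phi = I_2, psi~ = (-2x, -2y)^T and Lambda_n = -2n * I_{n+1}.
from sympy import symbols, hermite, diff, expand

x, y = symbols('x y')

def L(p):  # L[p] = div(phi grad p) + psi~^T grad p
    return (diff(p, x, 2) + diff(p, y, 2)
            - 2*x*diff(p, x) - 2*y*diff(p, y))

for n in range(1, 5):
    for j in range(n + 1):
        p = hermite(n - j, x) * hermite(j, y)   # component P_{n-j,j}
        assert expand(L(p) + 2*n*p) == 0        # eigenvalue -2n
print("L[P_n^T] = P_n^T (-2n I) for the Hermite products")
```

Since Λ_n is scalar here, this family also falls within the Krall–Sheffer definition recalled in remark 3 below; the point of proposition 3.5 is that (15) still makes sense when Λ_n is a general, non-scalar matrix.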
Remark 2. Observe that, in the above proposition, the monic character of the sequence {P_n}_{n≥0} is superfluous. In fact, if {Q_n}_{n≥0} is a different WOPS, then we have Q_n^T = P_n^T A_n for A_n a regular matrix. In this way, we obtain

L[Q_n^T] = L[P_n^T] A_n = P_n^T Λ_n^T A_n = Q_n^T (A_n^{-1} Λ_n^T A_n),

and therefore {Q_n}_{n≥0} satisfies

L[Q_n^T] = Q_n^T Λ̃_n^T,

where Λ̃_n^T = A_n^{-1} Λ_n^T A_n. That is, Λ_n^T and Λ̃_n^T are similar matrices.
Remark 3. Moreover, we must remark that, if Λ_n is a scalar matrix for every value of n, that is, Λ_n = λ_n I_{n+1}, where λ_n is a real number and I_{n+1} is the identity matrix of order n + 1, then all the orthogonal polynomials of total degree n satisfy the same second-order partial differential equation (given by (2)), and we recover the definition of classical 2D orthogonal polynomials given by Krall and Sheffer in [10].

Remark 4. Finally, observe that the tensor product of Jacobi polynomials in one variable,

P_{h,k}^{(α,β,α̂,β̂)}(x, y) = P_h^{(α,β)}(x) P_k^{(α̂,β̂)}(y),

orthogonal on [−1, 1] × [−1, 1] with respect to the weight function

ρ(x, y) = (1 − x)^α (1 + x)^β (1 − y)^α̂ (1 + y)^β̂,  α, β, α̂, β̂ > −1,
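These tensor products are joint eigenfunctions of a second-order partial differential operator whose eigenvalue depends on the partial degrees h and k separately. A quick symbolic check (a sketch using sympy's built-in jacobi polynomials, for sample parameter values):

```python
# Sketch: tensor Jacobi products are eigenfunctions of
# (1-x^2) d_xx + (1-y^2) d_yy + (b-a-(a+b+2)x) d_x + (b'-a'-(a'+b'+2)y) d_y
# with eigenvalue -h(h+a+b+1) - k(k+a'+b'+1), for sample parameters.
from sympy import Rational, symbols, jacobi, diff, simplify

x, y = symbols('x y')
a, b = Rational(1, 2), Rational(3, 2)     # alpha, beta (illustrative)
ah, bh = 2, 0                             # alpha-hat, beta-hat (illustrative)

for h in range(3):
    for k in range(3):
        w = jacobi(h, a, b, x) * jacobi(k, ah, bh, y)
        lam = -h*(h + a + b + 1) - k*(k + ah + bh + 1)
        lhs = ((1 - x**2)*diff(w, x, 2) + (1 - y**2)*diff(w, y, 2)
               + (b - a - (a + b + 2)*x)*diff(w, x)
               + (bh - ah - (ah + bh + 2)*y)*diff(w, y))
        assert simplify(lhs - lam*w) == 0
print("tensor Jacobi products satisfy the PDE with lambda_{h,k}")
```

Because the eigenvalue is not a function of the total degree h + k alone, no single equation of the form (2) can be satisfied by all polynomials of total degree n, which is the point of the remark.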
is not a classical family according to the classification given in [10]. In fact, the tensor Jacobi polynomials satisfy the second-order partial differential equation

(1 − x²) w_xx + (1 − y²) w_yy + (β − α − (α + β + 2)x) w_x + (β̂ − α̂ − (α̂ + β̂ + 2)y) w_y = λ_{h,k} w,

where λ_{h,k} = −h(h + α + β + 1) − k(k + α̂ + β̂ + 1) depends on the partial degrees of the polynomial solution. However, the vector polynomials

P_n = (P_{n,0}^{(α,β,α̂,β̂)}, P_{n−1,1}^{(α,β,α̂,β̂)}, . . . , P_{0,n}^{(α,β,α̂,β̂)})^T

satisfy a second-order partial differential equation of type (14), where Λ_n is a diagonal matrix whose elements are

λ_{i,i} = −(n − i + 1)(n − i + 2 + α + β) − (i − 1)(i + α̂ + β̂),

and therefore they are classical according to the definition given in this paper.

References

[1] S. Bochner, Über Sturm–Liouvillesche Polynomsysteme, Math. Zeit. 29 (1929) 730–736.
[2] T.S. Chihara, An Introduction to Orthogonal Polynomials (Gordon and Breach, London, 1978).
[3] C.F. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and its Applications, Vol. 81 (Cambridge Univ. Press, Cambridge, 2001).
[4] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis (Cambridge Univ. Press, Cambridge, 1991).
[5] Y.J. Kim, K.H. Kwon and J.K. Lee, Orthogonal polynomials in two variables and second-order partial differential equations, J. Comput. Appl. Math. 82 (1997) 239–260.
[6] Y.J. Kim, K.H. Kwon and J.K. Lee, Partial differential equations having orthogonal polynomial solutions, J. Comput. Appl. Math. 99 (1998) 239–253.
[7] Y.J. Kim, K.H. Kwon and J.K. Lee, Centrally symmetric orthogonal polynomials and second order partial differential equations, Methods Appl. Anal. 7(1) (2000) 57–64.
[8] M.A. Kowalski, The recursion formulas for orthogonal polynomials in n variables, SIAM J. Math. Anal. 13 (1982) 309–315.
[9] M.A. Kowalski, Orthogonality and recursion formulas for polynomials in n variables, SIAM J. Math. Anal. 13 (1982) 316–323.
[10] H.L. Krall and I.M. Sheffer, Orthogonal polynomials in two variables, Ann. Mat. Pura Appl. Ser. 4 76 (1967) 325–376.
[11] L.L. Littlejohn, Orthogonal polynomial solutions to ordinary and partial differential equations, in: Orthogonal Polynomials and Their Applications, Proceedings, Segovia, Spain (1986), Lecture Notes in Mathematics, Vol. 1329 (Springer, Berlin, 1988) pp. 98–124.
[12] F. Marcellán, A. Branquinho and J. Petronilho, Classical orthogonal polynomials: A functional approach, Acta Appl. Math. 34 (1994) 283–303.
[13] P. Maroni, Une théorie algébrique des polynômes orthogonaux, IMACS Ann. Comput. Appl. Math. 9 (1991) 95–130.
[14] P.K. Suetin, Orthogonal Polynomials in Two Variables (Gordon and Breach, Amsterdam, 1999).
[15] Y. Xu, On multivariate orthogonal polynomials, SIAM J. Math. Anal. 24 (1993) 783–794.