MATHEMATICS OF COMPUTATION
Volume 66, Number 219, July 1997, Pages 1133–1145
S 0025-5718(97)00861-2

CALCULATION OF GAUSS-KRONROD QUADRATURE RULES

DIRK P. LAURIE
Abstract. The Jacobi matrix of the (2n+1)-point Gauss-Kronrod quadrature rule for a given measure is calculated efficiently by a five-term recurrence relation. The algorithm uses only rational operations and is therefore also useful for obtaining the Jacobi-Kronrod matrix analytically. The nodes and weights can then be computed directly by standard software for Gaussian quadrature formulas.
1. Introduction

A (2n + 1)-point Gauss-Kronrod integration rule for the integral

(1)  If = ∫_a^b f(x) ds(x),

where s is a nonnegative measure on the interval [a, b], is a formula of the form

(2)  K^{(2n+1)} f = ∑_{i=1}^{2n+1} w_i f(x_i)
with the following two properties:
• n of the nodes of K^{(2n+1)} coincide with those of the n-point Gaussian quadrature rule G^{(n)} for the same measure;
• K^{(2n+1)} f = If whenever f is a polynomial of degree less than or equal to 3n + 1.

A thorough survey of the history, existence and other theoretical properties, and computational aspects of Gauss-Kronrod rules and their generalizations is given by Gautschi [9]. In this paper we are concerned with the efficient calculation of the nodes x_i and weights w_i of Gauss-Kronrod rules. Several methods for computing these formulas have been suggested [2, 3, 4, 6, 14, 15, 21, 22] but most of them, as Gautschi puts it, compute the Gauss-Kronrod formula “piecemeal.” That is to say, the new points and their weights are found by one method, and the new weights for the old points by another. The present author is aware of only two methods for computing the entire formula K^{(2n+1)} in a single algorithm. One of them [2, 6] is based on solving by Newton’s method the 3n + 2 equations that express the exactness of the quadrature rule. In [6] the authors remark about this method: “. . . by careful choice of initial approximations and continued monitoring of the iteration process, the method could be made to work for rules with up to 81 nodes . . . .” The other [4] is a very general algorithm for embedded quadrature formulas that uses among other tools the full eigenvector matrix of a tridiagonal matrix and the singular value decomposition. Both these methods therefore require O(n^3) operations.

Received by the editor June 28, 1995 and, in revised form, November 2, 1995.
1991 Mathematics Subject Classification. Primary 65D30, 65D32.
© 1997 American Mathematical Society
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use

The main contribution of the present paper is an O(n^2) procedure for computing specifically the Kronrod extension. It is not applicable to more general embedded quadratures. The procedure is derived in Section 4 and given in pseudocode in the Appendix. We do not claim that this procedure is more accurate than the “piecemeal” methods, or even that it is computationally more efficient — such issues will need a thorough investigation — but only that it is an efficient way of reducing the computation of a Kronrod rule to the well-studied problem of computing a Gaussian rule from recurrence coefficients.

For the calculation of Gaussian quadrature rules, the definitive algorithm (included in the recent software package of Gautschi [10]) is that of Golub and Welsch [11], which is based on the recurrence relation

(3)  p_{−1}(x) = 0;
(4)  p_0(x) = 1;
(5)  p_{j+1}(x) = (x − a_j) p_j(x) − b_j p_{j−1}(x),  j = 0, 1, . . . ,
satisfied by the polynomials p_j orthogonal with respect to the weight s. Since b_0 only appears as a multiplier for p_{−1} (which is zero), any finite value will do: we follow Gautschi [7] in putting b_0 = Ip_0^2, which leads to the useful property that

(6)  Ip_j^2 = b_0 b_1 · · · b_j.
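To make the recurrence concrete, here is a small Python sketch (an illustration, not code from the paper). The monic Legendre coefficients used below (a_j = 0, b_j = j²/(4j² − 1), with b_0 = Ip_0^2 = 2 on [−1, 1]) are standard, but they are an assumption of this sketch rather than data from the text.

```python
def eval_monic(x, a, b):
    """Evaluate the monic orthogonal polynomials p_0, ..., p_n at x via
    p_{j+1}(x) = (x - a_j) p_j(x) - b_j p_{j-1}(x),  p_{-1} = 0, p_0 = 1.
    Note that b[0] multiplies p_{-1} = 0, so its value never matters;
    here it is set to I p_0^2 following the convention in the text."""
    p_prev, p = 0.0, 1.0
    values = [p]
    for j in range(len(a)):
        p_prev, p = p, (x - a[j]) * p - b[j] * p_prev
        values.append(p)
    return values

# Monic Legendre on [-1, 1] (assumed example data): a_j = 0, b_0 = 2,
# b_j = j^2 / (4 j^2 - 1), giving p_2(x) = x^2 - 1/3, p_3(x) = x^3 - 3x/5.
a = [0.0] * 4
b = [2.0] + [j * j / (4.0 * j * j - 1.0) for j in range(1, 4)]
vals = eval_monic(0.5, a, b)
```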
Golub and Welsch show that for all integers m ≥ 1 the nodes of the Gaussian formula G^{(m)} are the eigenvalues, and the weights are proportional to the squares of the first components of the normalized eigenvectors, of the symmetric tridiagonal matrix (known as the Jacobi matrix associated with the Gaussian quadrature formula)

(7)  T_m = \begin{bmatrix} a_0 & √b_1 & & \\ √b_1 & a_1 & √b_2 & \\ & √b_2 & ⋱ & √b_{m−1} \\ & & √b_{m−1} & a_{m−1} \end{bmatrix}.
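The Golub-Welsch construction can be sketched in a few lines of Python with NumPy (an illustration, not the paper's code; in practice a specialized symmetric tridiagonal eigensolver would be preferred, and the Legendre coefficients below are assumed only for the demonstration):

```python
import numpy as np

def gauss_rule(a, b):
    """Gaussian rule from recurrence coefficients a_0..a_{m-1}, b_0..b_{m-1}:
    the nodes are the eigenvalues of the Jacobi matrix T_m, and the weights
    are b_0 times the squared first components of the normalized eigenvectors."""
    m = len(a)
    off = np.sqrt(b[1:m])                       # sqrt(b_1), ..., sqrt(b_{m-1})
    T = np.diag(a) + np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(T)                # eigenvalues in ascending order
    weights = b[0] * V[0, :] ** 2
    return nodes, weights

# Assumed example data: monic Legendre on [-1, 1], a_j = 0, b_0 = 2,
# b_j = j^2/(4 j^2 - 1).  The 3-point rule has nodes -sqrt(3/5), 0, sqrt(3/5)
# and weights 5/9, 8/9, 5/9.
m = 3
a = np.zeros(m)
b = np.array([2.0] + [j * j / (4.0 * j * j - 1.0) for j in range(1, m)])
x, w = gauss_rule(a, b)
```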
Remark. Although the routine in [10] works explicitly with the quantities in T_m (which are formed by taking square roots of the b_j) there exist square-root free eigensolvers for tridiagonal matrices that work directly with the b_j (see [19], Section 8-15, and [20]). An investigation of which eigensolver is most accurate for the special purpose of computing Gaussian quadrature formulas would be worthwhile, but is outside the scope of the present article. The claim in [20] that small components of eigenvectors are computed with high relative accuracy is of particular interest here.

Let us now suppose that the 2n + 1 nodes and weights of the Gauss-Kronrod formula K^{(2n+1)} can be found in a similar way from the symmetric tridiagonal matrix

(8)  T̂_{2n+1} = \begin{bmatrix} â_0 & √b̂_1 & & \\ √b̂_1 & â_1 & √b̂_2 & \\ & √b̂_2 & ⋱ & √b̂_{2n} \\ & & √b̂_{2n} & â_{2n} \end{bmatrix},
which we shall call the Jacobi-Kronrod matrix associated with the Gauss-Kronrod formula K^{(2n+1)}. We do not know T̂_{2n+1}, but we do know a lot about it: some theoretical questions are discussed in Section 2. Our main result in Section 4 is a rational algorithm that computes T̂_{2n+1} efficiently in O(n^2) arithmetic operations.

A related problem is considered by Boley and Golub [1], where the double-dimension Jacobi matrix T_{2n} is to be found when all its eigenvalues are known and T_n is specified. They use the available information to compute the weights of G^{(2n)} in O(n^2) arithmetic operations, after which any algorithm (three such are cited in [1]) for recovering a Jacobi matrix from its Gaussian quadrature formula may be used to compute T_{2n}. Since we do not know all the eigenvalues of T̂_{2n+1}, a similar algorithm is not possible here.

The main tool that we require is the theory of mixed moments, which is implicit in the work of Salzer [25] and appears more explicitly in the work of Sack and Donovan [24] and Wheeler [26]. An accessible exposition is given by Gautschi [8]. For the sake of clarity, we give the essentials of the theory in Section 3.

2. Properties of the Jacobi-Kronrod matrix

In the literature on Kronrod formulas and Stieltjes polynomials (see the surveys [9, 18]) some non-existence theorems on these formulas are given. It is therefore of interest to relate existence questions to the Jacobi-Kronrod matrix. We use the following terminology [16]:
• A quadrature formula exists if its defining equations have a (possibly complex) solution.
• The formula is real if the points and weights are all real.
• A real formula is internal if all the points belong to the (closed) interval of integration. A node not belonging to the interval is called an exterior node.
• The formula is positive if all the weights are positive.

It is well known (see e.g. [7]) that there is a one-to-one correspondence between Jacobi matrices and quadrature formulae with positive weights: if we knew the Kronrod formula itself, we could in principle find the Jacobi-Kronrod matrix, even though the computation may be delicate [12]. So we have:

Fact 1. The Jacobi-Kronrod matrix exists and is real if and only if the corresponding Kronrod formula exists and is real and positive.

Note that this fact does not imply that the Kronrod formula contains no exterior nodes. In view of Monegato’s result [17] that positivity of the weights of the new points is equivalent to the interlacing property (i.e. that there is one Gaussian node between any consecutive pair of Kronrod nodes), all that can be said is that at most two nodes, one at each end point, are exterior. It is, however, easy to diagnose whether the formula is internal once the Jacobi-Kronrod matrix is available:
one simply factorizes T̂_{2n+1} − aI = LDL^T, where L is lower triangular with unit diagonal, and D = diag{d_0, d_1, . . . , d_{2n}}. It is well known (see any standard text on linear algebra) that there is one eigenvalue of T̂_{2n+1} less than the end point a if and only if there is one sign change on the diagonal of D. This algorithm is also rational, since

d_0 = â_0 − a,  d_{j+1} = â_{j+1} − a − b̂_{j+1}/d_j.

A similar test can be applied at the other end point b.

As pointed out in [16], the fact that the formula K^{(2n+1)} is exact for polynomials of degree less than or equal to 3n + 1 implies that the first 3n + 1 coefficients in the sequence {â_0, b̂_1, â_1, b̂_2, . . . } equal the corresponding coefficients in the known sequence {a_0, b_1, a_1, b_2, . . . }. Only the remaining n coefficients are unknown. They are determined by the condition that n of the eigenvalues of T̂_{2n+1} are fixed to be equal to the eigenvalues of T_n. At first sight this seems to be a partial inverse eigenvalue problem, subject to all the difficulties that surround the solution of such problems, but we shall show in Section 4 that the n unknown coefficients can be determined efficiently in O(n^2) arithmetic operations.

The algorithm given in Section 4 is rational and the only divisions are by quantities that can only be zero if some b̂_j = 0. We therefore have:

Fact 2. The algorithm of Section 4 cannot break down if the corresponding Kronrod formula exists and is real and positive.

This does not imply that the algorithm will break down if the Gauss-Kronrod formula is not real and positive: it may very well still succeed. But in that case, by Fact 1, the Jacobi-Kronrod matrix cannot be real, so at least one of the computed recurrence coefficients b̂_j must be negative. This means, of course, that the Golub-Welsch algorithm, which always produces real non-negative formulas, can no longer be used to compute the Gauss-Kronrod formula.
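The sign-change diagnosis can be sketched in Python (an illustration, not the paper's code; the function name and argument layout are assumptions of this sketch). One would call it once with endpoint a and once with endpoint b; the count of eigenvalues above b is the matrix order minus the count below b, provided b is not itself an eigenvalue.

```python
def count_eigenvalues_below(a_hat, b_hat, endpoint):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal a_hat[0..m-1] and squared off-diagonal entries b_hat[1..m-1]
    that lie below `endpoint`: the number of sign changes on the diagonal
    of the rational LDL^T factorization of T - endpoint*I (a Sturm-type
    count; no square roots are needed).  Assumes no pivot d is exactly
    zero; if one is, perturb `endpoint` slightly."""
    d = a_hat[0] - endpoint
    count = 1 if d < 0 else 0
    for j in range(1, len(a_hat)):
        d = a_hat[j] - endpoint - b_hat[j] / d
        if d < 0:
            count += 1
    return count

# Assumed toy example: the 2x2 matrix [[0, 1], [1, 0]] has eigenvalues -1, 1,
# so no eigenvalue lies below the endpoint -1.5 (b_hat[0] is never used).
exterior_low = count_eigenvalues_below([0.0, 0.0], [0.0, 1.0], -1.5)
```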
There does not seem to be an easy way of distinguishing between the cases of negative weights and non-real nodes. For example, when trying to construct a 7-point extension of the 3-point Gauss-Hermite formula (proved to be impossible in [13]), we obtain b̂_6 = −1, and indeed the corresponding Kronrod formula has complex nodes. When trying to construct a 9-point extension of the 4-point Gauss-Hermite formula (not proved to be impossible in [13]¹) we obtain b̂_7 = −1/4, b̂_8 = 1/4, and the formula indeed turns out to have real nodes but negative weights at two of the old points.

It is also possible to use the Jacobi-Kronrod matrix as a theoretical tool. In [5] the authors investigate whether the Gauss-Kronrod formula associated with the Jacobi weight function

(9)  ds(x) = (1 − x)^α (1 + x)^β dx

is real, positive and internal. In the case n = 1 they obtain analytic results, and thereafter give graphs based on numerical calculations. It is easy to code the algorithm of Section 4 in an algebraic language and obtain the numbers b̂_j and d_j analytically. To illustrate this point we have computed the critical coefficients in

¹ The results of [13] are often misquoted as implying that a positive extension of the 4-point Hermite formula exists, but the abstract only claims “. . . do not exist with positive weights when n > 0 in the Laguerre case and n = 3 or n > 4 in the Hermite case.” The corollary on p. 985 of [13] does state “. . . only exist for n = 1, 2, 4” but the proof is a non-existence proof of the other cases, so the quoted phrase is clearly a misprint for “. . . can only exist for n = 1, 2, 4.”
the case n = 2, namely

b̂_4 = 2 (α + β + 5) P_{3,3}(α, β) / [(4 + β)(α + β + 3)(α + β + 4)(α + β + 6)^2],

d_4 = P_{2,4}(α, β) / [(α + β + 3)^2 (α + β + 6)^2 (α + β + 8)],

where

P_{2,4}(α, β) = [1  α  α^2] \begin{bmatrix} 72 & 213 & 137 & 21 & 1 \\ −3 & 4 & 21 & 2 & 0 \\ 11 & 0 & 1 & 0 & 0 \end{bmatrix} [1  β  β^2  β^3  β^4]^T

and

P_{3,3}(α, β) = [1  α  α^2  α^3] \begin{bmatrix} 576 & 336 & −71 & −23 \\ 336 & 434 & 51 & 1 \\ −71 & 51 & 2 & 0 \\ −23 & 1 & 0 & 0 \end{bmatrix} [1  β  β^2  β^3]^T.
One can therefore conclude that the 5-point Gauss-Kronrod formula is real and positive when P_{3,3}(α, β) > 0, and is internal when P_{2,4}(α, β) ≥ 0 (there is no node less than −1) and P_{2,4}(β, α) ≥ 0 (there is no node greater than 1). The lines marked b and c in the first graph on p. 241 of [5] are thereby obtained analytically.

The following lemma gives an essential property of the Jacobi-Kronrod matrix T̂_{2n+1}, which will later be the key to its efficient computation.

Lemma 1. The characteristic polynomial of the trailing principal n × n submatrix of the Jacobi-Kronrod matrix T̂_{2n+1} is the same as that of its leading principal n × n submatrix.

Proof. Denote by φ_k and ψ_k respectively the characteristic polynomials of the leading and trailing k × k principal submatrices of T̂_{2n+1}. Expand φ_{2n+1}(λ) = det(T̂_{2n+1} − λI) along the (n + 1)-st row. Then (suppressing the argument (λ) for the sake of readability)

(10)  φ_{2n+1} = −φ_{n−1} b̂_n ψ_n + (a_n − λ) φ_n ψ_n − φ_n b̂_{n+1} ψ_{n−1}.

Clearly any common zero of φ_n and ψ_n is a zero of φ_{2n+1}. Conversely, if φ_{2n+1} is to have φ_n as a factor, then φ_{n−1} ψ_n must be divisible by φ_n since b̂_n = b_n, which is nonzero by (6). But φ_{n−1} and φ_n are consecutive terms in a sequence of orthogonal polynomials and therefore mutually prime. It follows that ψ_n is divisible by φ_n. Since the two polynomials have the same leading coefficient, they must be identical.

Remark. Once we know that φ_n = ψ_n, it is of course possible to divide equation (10) by ψ_n to obtain

(11)  φ_{2n+1} / ψ_n = −φ_{n−1} b̂_n + (a_n − λ) φ_n − b̂_{n+1} ψ_{n−1},

which gives an explicit expression for the polynomial with zeros at the Kronrod nodes.
3. Mixed moments

Suppose that two different systems of monic orthogonal polynomials are being considered: the system defined in (5), which we shall sometimes call the ‘old’ system, and a similar system (the ‘new’ system) defined by

(12)  π_{−1}(x) = 0;
(13)  π_0(x) = 1;
(14)  π_{j+1}(x) = (x − α_j) π_j(x) − β_j π_{j−1}(x),  j = 0, 1, . . . .

The new system (12)–(14) is assumed to be orthogonal with respect to some inner product (·, ·)_σ with the property that for any three polynomials f, g, h,

(15)  (f, gh)_σ = (fg, h)_σ.
As before, it is convenient to put β_0 = (π_0, π_0)_σ. Salzer [25] considered the problem of converting a finite expansion in terms of the old polynomials p_j into one in terms of the new polynomials π_j. Expanding each p_l in terms of the π_k, one finds

(16)  p_l = ∑_{k=0}^{l} (σ_{k,l} / σ_{k,k}) π_k,

where the mixed moments σ_{k,l} are given by

(17)  σ_{k,l} = (π_k, p_l)_σ.

The crucial observation is that a five-term recurrence for the mixed moments can be derived by putting j = l in (5), j = k in (14), taking the new inner product with π_k and p_l respectively, and subtracting. Thanks to the property (15), the terms that do not immediately reduce to mixed moments cancel, and we are left with

(18)  σ_{k,l+1} + a_l σ_{k,l} + b_l σ_{k,l−1} − (σ_{k+1,l} + α_k σ_{k,l} + β_k σ_{k−1,l}) = 0.
The equation (18) can be written in a visually appealing manner, familiar to practitioners of finite difference methods, as the five-point stencil

(19)  \begin{bmatrix} & −β_k & \\ b_l & a_l − α_k & 1 \\ & −1 & \end{bmatrix} σ_{k,l} = 0.

Here k is the row index and l is the column index, and the rows are numbered from top to bottom as is usual with matrices. Values of σ_{k,l} that are known in advance are

(20)  σ_{0,0} = β_0;
(21)  σ_{−1,l} = 0,  l = 0, 1, . . . , n;
(22)  σ_{k,−1} = 0,  k = 0, 1, . . . , n;
(23)  σ_{k,l} = 0,  l = 0, 1, . . . , k − 1.
Equation (23) holds because π_k is orthogonal in the new inner product to all polynomials of lower degree.

In the application considered by Salzer [25], all the new recurrence coefficients are known. Then σ_{k,l} can be computed for l increasing, with the ‘East’ moment in (19) the unknown, as in Figure 1. The solution to the problem is obtained by substituting (16) into the given orthogonal expansion. (This explanation of
Figure 1. Build-up of mixed moments σ_{k,l} in Salzer’s algorithm. The stars denote moments that have already been computed, and each ν denotes a new moment that is being computed at that stage.
Figure 2. Build-up of mixed moments in the Sack-Donovan-Wheeler algorithm. The stars denote moments that have already been computed, and each ν denotes a new moment that is being computed at that stage.
Salzer’s algorithm, simplified to emphasize its kinship with the other algorithms considered here, does not do full justice to its ingenuity. In fact, it employs a five-term recurrence involving the expansion coefficients directly, rather than doing the last-mentioned substitution.)

In the application considered by Sack and Donovan [24] and Wheeler [26], the new recurrence coefficients α_k and β_k are unknown, but the modified moments

(24)
μ_l = σ_{0,l},  l = 0, 1, . . . , 2n − 1,
are assumed to be known. In that case, the recurrence can be computed for k increasing, with the ‘South’ moment in (19) the unknown, as in Figure 2. The essential idea is that when row k has been computed, one can obtain β_k and α_k by considering (19) at position (k, k − 1) to yield (25)
β_k = σ_{k,k} / σ_{k−1,k−1}
and at position (k, k) to yield
(26)  α_k = a_k + (σ_{k,k+1} − β_k σ_{k−1,k}) / σ_{k,k}
(27)      = a_k + σ_{k,k+1}/σ_{k,k} − σ_{k−1,k}/σ_{k−1,k−1}.
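For illustration (this is not code from the paper), the Sack-Donovan-Wheeler scheme, i.e. recurrence (18) traversed with the ‘South’ moment unknown together with (25) and (27), can be sketched in Python. The test case, recovering the monic Legendre coefficients from ordinary moments (old system = monomials, so a_l = b_l = 0), is an assumption chosen because those moments are easy to write down; with ordinary moments the map is ill-conditioned, so this is sensible only for small n.

```python
def sdw(n, a, b, mu):
    """Recover alpha_0..alpha_{n-1} and beta_0..beta_{n-1} of the 'new'
    orthogonal system from the modified moments mu[l] = sigma_{0,l},
    l = 0..2n-1, taken against the 'old' system with coefficients a, b.
    Straightforward O(n^2) version of the Sack-Donovan-Wheeler algorithm."""
    alpha = [a[0] + mu[1] / mu[0]]      # (27) with k = 0 and sigma_{-1,.} = 0
    beta = [mu[0]]                      # beta_0 = sigma_{0,0}
    sig_prev = [0.0] * (2 * n + 1)      # row k-2: sigma_{-1,l} = 0
    sig = list(mu) + [0.0]              # row k-1: sigma_{0,l} = mu_l
    for k in range(1, n):
        sig_new = [0.0] * (2 * n + 1)
        for l in range(k, 2 * n - k):
            # South-unknown form of (18), shifted to give row k from rows
            # k-1 and k-2:
            # sigma_{k,l} = sigma_{k-1,l+1} + (a_l - alpha_{k-1}) sigma_{k-1,l}
            #               - beta_{k-1} sigma_{k-2,l} + b_l sigma_{k-1,l-1}
            sig_new[l] = (sig[l + 1] + (a[l] - alpha[k - 1]) * sig[l]
                          - beta[k - 1] * sig_prev[l] + b[l] * sig[l - 1])
        beta.append(sig_new[k] / sig[k - 1])                       # (25)
        alpha.append(a[k] + sig_new[k + 1] / sig_new[k]
                     - sig[k] / sig[k - 1])                        # (27)
        sig_prev, sig = sig, sig_new
    return alpha, beta

# Assumed test data: ordinary moments of the Legendre weight on [-1, 1],
# mu_l = 2/(l+1) for even l, 0 for odd l; the recovered coefficients should
# be alpha_k = 0 and beta_k = k^2/(4 k^2 - 1).
n = 4
a = [0.0] * (2 * n)
b = [0.0] * (2 * n)
mu = [2.0 / (l + 1) if l % 2 == 0 else 0.0 for l in range(2 * n)]
alpha, beta = sdw(n, a, b, mu)
```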
4. Computation of the Jacobi-Kronrod matrix

The lemma in Section 2 reduces the partial inverse eigenvalue problem of finding a (2n + 1) × (2n + 1) matrix with n prescribed eigenvalues to a standard inverse eigenvalue problem in which the number of prescribed eigenvalues is the same as the order of the matrix. Moreover, this problem can be solved in a finite number of steps.

Let the old polynomials p_l of §3 be the same as those of §1, and let the new polynomials π_k of §3 be given by (12)–(14) with

(28)  α_k = â_{n+k+1},  β_k = b̂_{n+k+1},  k = 0, 1, . . . , n − 1.

That is, the polynomials π_k are those associated with the trailing n × n submatrix of the Jacobi-Kronrod matrix. Which mixed moments can we now compute? The first n − 1 entries in the sequence {α_0, β_1, α_1, β_2, . . . } are known: they are just a_{n+1}, b_{n+2}, a_{n+2}, b_{n+3}, . . . . So we can compute a number of mixed moments with the East moment as the unknown, as in the case of Salzer’s algorithm. To progress further, we use the fact that π_n = p_n, and therefore

(29)  σ_{k,n} = 0,  k = 0, 1, . . . , n − 1.

This allows us to complete the process in the same way as the Sack-Donovan-Wheeler algorithm, with the South moment as the unknown, computing new values of α_k and β_k as soon as possible, up to σ_{n−1,n−1}, which together with the known σ_{n−1,n} = 0 is all that we need. The whole scheme is illustrated for n = 7 in Figure 3. Here we know α_0, β_1, α_1, β_2, α_2 and β_3. The pattern is typical for odd n; when n is even, a diagonal of asterisked entries starting at σ_{k,k+1} where k = n/2 − 1 can also be computed, because α_k is then known.
Figure 3. Computation of mixed moments for the Jacobi-Kronrod matrix when n = 7. The stars denote quantities computable from the known recurrence coefficients, with the recurrence going ‘Eastwards’, and the ν’s quantities computable with the recurrence going ‘Southwards’, by using the fact that p_n = π_n gives a column of zeros, here indicated by 0.
Building the table diagonal by diagonal, we obtain the following algorithm.

Initialization:
  σ_{0,0} = 1;  σ_{−1,l} = 0, l = 0 : n;  σ_{k,n} = 0, k = 1 : n − 1;
  σ_{k,l} = 0, l = 1 : n − 2, k = l + 1, l + 2.

Eastward phase: For m = 0 : n − 2,
  σ_{k,l+1} = σ_{k+1,l} + (α_k − a_l) σ_{k,l} + β_k σ_{k−1,l} − b_l σ_{k,l−1},
  k = ⌈m/2⌉ : −1 : 0,  l = m − k.

Southward phase: For m = n − 1 : 2n − 3,
  σ_{k+1,l} = σ_{k,l+1} − (α_k − a_l) σ_{k,l} − β_k σ_{k−1,l} + b_l σ_{k,l−1},
  k = m + 1 − n : ⌈m/2⌉,  l = m − k.
  If m is even,  α_k = a_k + (σ_{k,k+1} − β_k σ_{k−1,k}) / σ_{k,k}.
  If m is odd,  β_k = σ_{k,k} / σ_{k−1,k−1}.

Termination:
  α_{n−1} = a_{n−1} − β_{n−1} σ_{n−2,n−1} / σ_{n−1,n−1}.

The algorithm uses approximately (3/2)n^2 multiplications, and can be implemented with two temporary vectors of length ⌊n/2⌋ + 2. The details are given in the Appendix.

Remark. A variation may be obtained by reversing the rows and columns of the leading n × n submatrix of T̂_{2n+1} before entering the algorithm. Since the characteristic polynomial is unchanged, the values of α_k and β_k computed in this way are theoretically the same, even though the mixed moments σ_{k,l} differ. Our numerical experiments do not indicate a preference for either version.

5. Numerical example

We illustrate the algorithm by computing the Kronrod extension of the n-point Gauss-Jacobi formula for the weight function (9) over the interval (−1, 1). In the numerical example shown here, we took α = 0.3 and β = −0.6, and n = 3, 4, . . . , 199. The computations were performed in double precision on a machine conforming to the IEEE floating point standard, and for both the normal and ‘reversed’ versions of the algorithm. Actually, as recommended by Reichel [23], the coefficients were scaled to the interval (−2, 2) before performing the computation, and the resulting points and weights (obtained by Gautschi’s routine gauss from [10]) were scaled back to (−1, 1). On a machine with a small exponent range, this subterfuge is necessary to avoid underflow of the modified moments.
Figure 4. Mean absolute difference (units of machine epsilon) in common points of computed n-point Gauss-Jacobi and (2n + 1)point Kronrod-Jacobi formulas (α = 0.3, β = −0.6).
In this example the coefficients b̂_j all turn out to be positive, and therefore the old and new nodes interlace. There is one node less than −1. An a posteriori indication of the accuracy of the computed formula can be obtained by comparing every second computed node of the Kronrod extension with the corresponding node ξ_i of the original Gaussian formula, computed using gauss in the same precision. Since there is no reason to think that the eigenvalue solver can discriminate in accuracy between the original and the new nodes, one may suppose that the new nodes are computed to an accuracy similar to that observed for the old nodes. Unfortunately this check says nothing about the accuracy of the weights. In Figure 4 we show the average difference between these zeros as a multiple of the machine roundoff level τ, i.e. the quantity

η_n = (1/n) ∑_{i=1}^{n} |x_{2i} − ξ_i| / τ.
Most of the values lie in the range 1 < η_n < 2, although there is a slight tendency
for η_n to increase with n. The results from the ‘reversed’ version show essentially the same behaviour and are therefore not shown.

Thanks. The suggestions of an anonymous referee have greatly improved this paper.
Appendix A. Pseudocode for the algorithm

Input: Vectors a = {a_0, a_1, . . . , a_{2n}} and b = {b_0, b_1, . . . , b_{2n}}, in which the elements a_0, a_1, . . . , a_{⌊3n/2⌋} and b_0, b_1, . . . , b_{⌈3n/2⌉} have been initialized to the recurrence coefficients of the given weight.

Working storage: Vectors s = {s_{−1}, s_0, . . . , s_{⌊n/2⌋}} and t = {t_{−1}, t_0, . . . , t_{⌊n/2⌋}}, initialized to zeros.

Output: The vectors a and b now contain the recurrence coefficients from which the (2n + 1)-point Gauss-Kronrod formula is obtained by Gautschi’s routine gauss from [10].

We use the notation s ↔ t to indicate that the contents of the vectors s and t should be swapped. In an actual implementation, only the pointers to the vectors would be swapped: e.g. the vectors could be rows i_s and i_t of a two-row matrix, and one would swap the indices i_s and i_t.

begin generate Jacobi-Kronrod matrix
  t_0 ← b_{n+1}
  for m from 0 to n − 2
    u ← 0
    for k from ⌊(m+1)/2⌋ downto 0
      l ← m − k
      u ← u + (a_{k+n+1} − a_l) t_k + b_{k+n+1} s_{k−1} − b_l s_k
      s_k ← u
    end for k
    s ↔ t
  end for m
  for j from ⌊n/2⌋ downto 0
    s_j ← s_{j−1}
  end for j
  for m from n − 1 to 2n − 3
    u ← 0
    for k from m + 1 − n to ⌊(m−1)/2⌋
      l ← m − k
      j ← n − 1 − l
      u ← u − (a_{k+n+1} − a_l) t_j − b_{k+n+1} s_j + b_l s_{j+1}
      s_j ← u
    end for k
    if m mod 2 = 0
      k ← m/2
      a_{k+n+1} ← a_k + (s_j − b_{k+n+1} s_{j+1}) / t_{j+1}
    else
      k ← (m+1)/2
      b_{k+n+1} ← s_j / s_{j+1}
    endif
    s ↔ t
  end for m
  a_{2n} ← a_{n−1} − b_{2n} s_0 / t_0
end generate Jacobi-Kronrod matrix
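As an illustration (not part of the original paper), the pseudocode above transcribes almost line for line into Python. The arrays s and t are padded by one slot so that index j + 1 holds the pseudocode's s_j, with s[0] playing the role of s_{−1}; the Legendre recurrence coefficients in the usage example are an assumption of this sketch.

```python
import numpy as np

def kronrod_jacobi(n, a0, b0):
    """Jacobi-Kronrod matrix coefficients by the pseudocode above.

    a0[k], b0[k] are the recurrence coefficients of the given measure for
    k = 0..floor(3n/2) and k = 0..ceil(3n/2) respectively (b0[0] = I p_0^2).
    Returns arrays a, b of length 2n+1 defining the (2n+1) x (2n+1)
    Jacobi-Kronrod matrix."""
    a = np.zeros(2 * n + 1)
    b = np.zeros(2 * n + 1)
    a[: 3 * n // 2 + 1] = a0[: 3 * n // 2 + 1]
    b[: (3 * n + 1) // 2 + 1] = b0[: (3 * n + 1) // 2 + 1]
    # s[j + 1] holds the pseudocode's s_j; s[0] is s_{-1} = 0.
    s = np.zeros(n // 2 + 2)
    t = np.zeros(n // 2 + 2)
    t[1] = b[n + 1]                          # t_0 <- b_{n+1}
    for m in range(n - 1):                   # Eastward phase
        u = 0.0
        for k in range((m + 1) // 2, -1, -1):
            l = m - k
            u += (a[k + n + 1] - a[l]) * t[k + 1] + b[k + n + 1] * s[k] - b[l] * s[k + 1]
            s[k + 1] = u
        s, t = t, s
    for j in range(n // 2, -1, -1):          # shift s_j <- s_{j-1}
        s[j + 1] = s[j]
    for m in range(n - 1, 2 * n - 2):        # Southward phase
        u = 0.0
        for k in range(m + 1 - n, (m - 1) // 2 + 1):
            l = m - k
            j = n - 1 - l
            u -= (a[k + n + 1] - a[l]) * t[j + 1] + b[k + n + 1] * s[j + 1] - b[l] * s[j + 2]
            s[j + 1] = u
        if m % 2 == 0:
            k = m // 2
            a[k + n + 1] = a[k] + (s[j + 1] - b[k + n + 1] * s[j + 2]) / t[j + 2]
        else:
            k = (m + 1) // 2
            b[k + n + 1] = s[j + 1] / s[j + 2]
        s, t = t, s
    a[2 * n] = a[n - 1] - b[2 * n] * s[1] / t[1]
    return a, b

# Usage sketch (assumed data): Kronrod extension of the 3-point
# Gauss-Legendre rule, using the monic Legendre coefficients
# a_k = 0, b_0 = 2, b_k = k^2/(4 k^2 - 1).
n = 3
k = np.arange(1, 2 * n + 1)
a0 = np.zeros(2 * n + 1)
b0 = np.concatenate(([2.0], k**2 / (4.0 * k**2 - 1.0)))
a, b = kronrod_jacobi(n, a0, b0)
# Nodes and weights then follow from the Golub-Welsch construction of
# Section 1 applied to the (2n+1) x (2n+1) Jacobi-Kronrod matrix:
J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
x, V = np.linalg.eigh(J)
w = b[0] * V[0, :] ** 2
```

The resulting (2n + 1)-point rule should reproduce the n Gaussian nodes among its nodes and integrate polynomials up to degree 3n + 1 exactly, which gives a convenient self-check of any transcription.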
References

1. D. Boley and G. H. Golub. A survey of matrix inverse eigenvalue problems. Inverse Problems, 3:595–622, 1987. MR 89m:65036
2. F. Caliò, W. Gautschi, and E. Marchetti. On computing Gauss-Kronrod quadrature formulae. Math. Comput., 47:639–650, 1986. MR 88a:65028
3. F. Caliò and E. Marchetti. Derivation and implementation of an algorithm for singular integrals. Computing, 38:235–245, 1987. MR 88h:65048
4. S. Elhay and J. Kautsky. Generalized Kronrod Patterson type embedded quadratures. Aplikace Matematiky, 37:81–103, 1992. MR 92j:65026
5. W. Gautschi and S. Notaris. An algebraic study of Gauss-Kronrod quadrature formulae for Jacobi weight functions. Math. Comput., 51:231–248, 1988. MR 89f:65031
6. W. Gautschi and S. Notaris. Newton’s method and Gauss-Kronrod quadrature. In H. Braß and G. Hämmerlin, editors, Numerical Integration III, pages 60–71, Basel, 1988. Birkhäuser. MR 90m:65104
7. Walter Gautschi. On generating orthogonal polynomials. SIAM J. Sci. Statist. Comput., 3:289–317, 1982. MR 84e:65022
8. Walter Gautschi. Questions of numerical condition related to polynomials. In Gene H. Golub, editor, Studies in Numerical Analysis, pages 140–177. Math. Assoc. Amer., 1984. CMP 20:07
9. Walter Gautschi. Gauss-Kronrod quadrature — a survey. In G. V. Milovanović, editor, Numerical Methods and Approximation Theory III, pages 39–66. University of Niš, 1987. MR 89k:41035
10. Walter Gautschi. Algorithm 726: ORTHPOL – a package of routines for generating orthogonal polynomials and Gauss-type quadrature rules. ACM Trans. Math. Software, 20:21–62, 1994.
11. G. H. Golub and J. H. Welsch. Calculation of Gauss quadrature rules. Math. Comput., 23:221–230, 1969. MR 39:6513
12. W. B. Gragg and W. J. Harrod. The numerically stable reconstruction of Jacobi matrices from spectral data. Numer. Math., 44:317–335, 1984. MR 85i:65052
13. D. K. Kahaner and G. Monegato. Nonexistence of extended Gauss-Laguerre and Gauss-Hermite quadrature rules with positive weights. Z. Angew. Math. Phys., 29:983–986, 1978. MR 80d:65034
14. J. Kautsky and S. Elhay. Gauss quadratures and Jacobi matrices for weight functions not of one sign. Math. Comput., 43:543–550, 1984. MR 86f:65049
15. A. S. Kronrod. Nodes and Weights of Quadrature Formulas. Consultants Bureau, New York, 1965. MR 32:598
16. D. P. Laurie. Anti-Gaussian quadrature formulas. Math. Comput., 65:739–747, 1996. MR 96m:65026
17. G. Monegato. A note on extended Gaussian quadrature rules. Math. Comput., 30:812–817, 1976. MR 55:13746
18. G. Monegato. Stieltjes polynomials and related quadrature rules. SIAM Review, 24:137–158, 1982. MR 83d:65067
19. Beresford N. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, Englewood Cliffs, 1980. MR 81j:65063
20. Beresford N. Parlett. The new qd algorithms. In Acta Numerica 1995, pages 459–491. Cambridge University Press, 1995. MR 96g:65041
21. T. N. L. Patterson. The optimum addition of points to quadrature formulae. Math. Comput., 22:847–856, 1968. MR 39:3701
22. R. Piessens and M. Branders. A note on the optimal addition of abscissas to quadrature formulas of Gauss and Lobatto type. Math. Comput., 28:135–139 and 344–347, 1974. MR 49:8293
23. L. Reichel. Newton interpolation at Leja points. BIT, 30:332–346, 1990. MR 91e:65020
24. R. A. Sack and A. F. Donovan. An algorithm for Gaussian quadrature given modified moments. Numer. Math., 18:465–478, 1972. MR 46:2829
25. H. E. Salzer. A recurrence scheme for converting from one orthogonal expansion into another. Comm. ACM, 16:705–707, 1973. MR 52:15956
26. J. C. Wheeler. Modified moments and Gaussian quadratures. Rocky Mountain J. Math., 4:287–296, 1974. MR 48:12785

Potchefstroom University for Christian Higher Education, P. O. Box 1174, Vanderbijlpark, 1900, South Africa
E-mail address: [email protected]