Two-Grid Analysis of Minimal Residual Smoothing as a Multigrid Acceleration Technique

Jun Zhang

Department of Computer Science and Engineering, University of Minnesota, 4-192 EE/CS Building, 200 Union Street S.E., Minneapolis, MN 55455, USA

July 8, 1996; revised July 31, 1997

(This paper has been published in Applied Mathematics and Computation, 96 (1), 27-45, 1998.)

Abstract

We analyze the two-level method accelerated by a minimal residual smoothing (MRS) technique. The two-grid analysis is sufficient for our purpose because our MRS acceleration scheme is only applied on the finest level of the multigrid method. We prove that the MRS acceleration scheme is a semi-iterative method with respect to the underlying two-level iteration and that the MRS accelerated two-level method is a polynomial acceleration of first order. We explain why MRS may not effectively accelerate the standard multigrid method for solving Poisson-like problems. The iteration matrices for the MRS accelerated coarse-grid-correction operator and the MRS accelerated two-level operator are obtained. We give bounds for the residual reduction rates of the accelerated two-level method. Numerical experiments are employed to support the analytical results.

Key words: Minimal residual smoothing, multigrid method, multigrid acceleration techniques, two-grid analysis. AMS subject classifications: 65F10, 65N06.

1 Introduction

In this paper, we analyze the multigrid method accelerated by the minimal residual smoothing (MRS) technique to solve the linear system

$Au = f. \qquad (1)$

In many problems of practical interest, system (1) results from discretized partial differential equations. In this case, we use h to denote the uniform meshsize associated with the grid space $\Omega^h$ and H to denote the meshsize of a coarse-level grid space $\Omega^H$. In multigrid, H is usually chosen as 2h for convenience. In practical applications, A is usually very large and sparse, but not necessarily symmetric positive definite (SPD). For example, if system (1) is obtained from a discretized convection-diffusion equation with a high Reynolds number, A is nonsymmetric.

Iterative solution of system (1) is of great interest in scientific computing because direct methods usually cannot handle such a large system. Classical relaxation methods (e.g., Jacobi, Gauss-Seidel, and SOR) for solving system (1) begin with an initial guess at a solution and quickly damp (smooth) the high frequency error components, whose short wavelengths are comparable to the meshsize, but leave the low frequency error components with long wavelengths almost unchanged. Hence, classical iterative methods work very well for the first several iterations. Inevitably, however, the convergence slows down and the entire iterative scheme appears to stall. One effective technique to remove the low frequency errors is to project them onto a coarser grid, on which the errors become more oscillatory and thus more amenable to relaxation methods. This leads to the two-level method (TLM). The two-level method is an important theoretical and debugging tool in developing multi-level or multigrid methods. For this reason, and because our MRS acceleration scheme is only applied on the finest grid (see Zhang [1]), most of our analysis in this paper is carried out on the two-level method accelerated by the MRS technique, which we refer to as TLM-MRS. The idea of treating smooth errors on the coarse grid can be generalized recursively to the multilevel method, in which the coarse grid sub-problem is not solved exactly on the coarse grid; instead, high frequency errors are smoothed out and the remaining low frequency errors are projected onto a yet coarser grid to be smoothed out there.

The standard multigrid method is extremely efficient for solving elliptic problems such as Poisson-type equations [2]. But convergence deteriorates when it is used to solve many non-elliptic problems or problems containing non-elliptic components, which usually result in nonsymmetric linear systems. In these cases, acceleration schemes are usually needed to obtain a satisfactory convergence rate. Various acceleration schemes have been proposed by many investigators to accelerate different procedures of the multigrid method in different situations [3, 4, 5, 6, 7, 8]. These acceleration techniques have been categorized and compared by Zhang [9]. Some of these acceleration techniques require that the coefficient matrix A be SPD, which limits their application.

In a recent paper [1], we proposed to employ the minimal residual smoothing (MRS) technique to accelerate multigrid convergence. The numerical experiments conducted in [1] showed that multigrid-MRS converges almost 45% faster than the standard multigrid method for solving the convection-diffusion equations when the convective terms dominate. The numerical results of [1] demonstrated that MRS is an efficient multigrid acceleration technique. In this paper, we analyze the MRS accelerated two-level method and try to shed some light on how this technique works.

The rest of the paper is organized as follows. In Section 2 we formulate the minimal residual smoothing technique and the TLM-MRS method. In Section 3 we prove that the MRS procedure is a semi-iterative method with respect to TLM and that TLM-MRS is a polynomial acceleration of first order. We explain why TLM-MRS may not work very well for diffusion-dominated (Poisson-like) problems. The iteration matrices for the MRS accelerated coarse-grid-correction operator and for the two-level operator are obtained in Section 4. In Section 5 we formulate some assumptions and give bounds on the residual reduction rate of TLM-MRS. Numerical examples are employed in Section 6 to support our analytical results. Conclusions are given in Section 7.

We assume that the readers are familiar with the basic concepts of the multigrid method as a fast solver. An introduction to multigrid can be found in [2]. We remark that different choices of the multigrid components, such as the relaxation scheme and the inter-grid transfer operators, greatly affect the convergence of the multigrid method. We assume that such choices have been made and we employ MRS as a general purpose multigrid acceleration technique.


2 MRS Acceleration Scheme

Let $(\cdot,\cdot)$ denote the usual inner product on $\Omega^h$, and let $\|\cdot\|_2 = (\cdot,\cdot)^{1/2}$ be the Euclidean norm. Suppose that we also have a coarse grid operator $A^H$ defined on $\Omega^H$. $A^H$ must be nonsingular, but its exact nature is not important in the following discussion. It may be an $\Omega^H$ version of A (the same differential equation is discretized on $\Omega^H$) or it may be constructed by using the Galerkin technique [2].

Definition 2.1 The energetic inner product with respect to an SPD matrix Z on $\Omega^h$ is defined as $\langle \cdot,\cdot \rangle_Z = (Z\cdot,\cdot)$. The energy norm with respect to Z is defined as

$\|\cdot\|_Z = \langle \cdot,\cdot \rangle_Z^{1/2}. \qquad (2)$

Note that when Z = I, the energy norm (2) reduces to the Euclidean norm. Here I is the identity matrix on $\Omega^h$.

Definition 2.2 Let Q be an operator (matrix) on $\Omega^h$. We define the operator norm of Q with respect to an SPD matrix Z as

$\|Q\|_Z = \sup_{0 \neq v \in \Omega^h} \frac{\|Qv\|_Z}{\|v\|_Z},$

where $v \neq 0$ is any non-zero vector on $\Omega^h$.

Suppose that a sequence $\{u_k\}$ is generated by some iterative method with associated residual sequence $\{r_k\}$. Since it is generally not possible to measure the convergence of the error directly, the quality of the iteration is usually judged by the behavior of the residual norm sequence $\{\|r_k\|\}$, where $\|\cdot\|$ is some norm. Usually, it is desirable that $\{\|r_k\|\}$ converges "smoothly" to zero. In the widely used generalized minimal residual (GMRES) method [10], each $u_k$ is characterized by

$\|f - Au_k\|_2 = \min_{u \in u_0 + K_k(r_0,A)} \|f - Au\|_2,$

where $\|\cdot\|_2$ is the Euclidean norm and the Krylov subspace $K_k(r_0,A)$ is defined by

$K_k(r_0,A) = \mathrm{span}\{r_0, Ar_0, \ldots, A^{k-1}r_0\}.$

For GMRES, $\{\|r_k\|_2\}$ converges to zero optimally among all Krylov subspace methods, for which $u_k \in u_0 + K_k(r_0,A)$. Other comparable methods, such as the biconjugate gradient (BCG) method [11] and the conjugate gradient squared (CGS) method [12], have certain advantages over GMRES, but often exhibit very irregular residual-norm behavior [13]. This irregular residual-norm behavior has provided an incentive for the development of methods that have similar advantages but produce better behaved residual norms, such as the biconjugate gradient stabilized (Bi-CGSTAB) methods [14] and methods based on the quasi-minimal residual (QMR) approach [15, 16]. Another approach to generating well-behaved residual norms has been proposed by Schonauer [17] and investigated extensively by Weiss [18]. We formulate the minimal residual smoothing technique of Schonauer as follows:

Algorithm 2.3 Minimal Residual Smoothing (MRS) [17, 18]

Initialize $s_0 = r_0$ and $v_0 = u_0$.
For $k = 1, 2, \ldots$, do:
    Compute $u_k$ and $r_k$.
    Compute $\eta_k = -\langle s_{k-1}, r_k - s_{k-1} \rangle_Z / \|r_k - s_{k-1}\|_Z^2$.
    Set $s_k = s_{k-1} + \eta_k (r_k - s_{k-1})$ and $v_k = v_{k-1} + \eta_k (u_k - v_{k-1})$.
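As an illustration of the smoothing step in Algorithm 2.3 (this sketch is not part of the original paper), the update can be written in a few lines of Python/NumPy. The function name `mrs_update` and the choice Z = I (so that the energy norm is the Euclidean norm) are assumptions made only for this example.

```python
import numpy as np

def mrs_update(s_prev, v_prev, r_k, u_k):
    """One MRS step of Algorithm 2.3 with Z = I (Euclidean inner product).

    Given the previous smoothed residual s_{k-1} and smoothed iterate v_{k-1},
    and the new underlying iterate u_k with residual r_k, return (s_k, v_k),
    where eta_k minimizes ||(1 - eta) * s_{k-1} + eta * r_k||_2.
    """
    d = r_k - s_prev
    # eta_k = -<s_{k-1}, r_k - s_{k-1}> / ||r_k - s_{k-1}||^2 (undefined if r_k == s_{k-1})
    eta = -np.dot(s_prev, d) / np.dot(d, d)
    s_k = s_prev + eta * d                    # smoothed residual
    v_k = v_prev + eta * (u_k - v_prev)       # smoothed iterate
    return s_k, v_k
```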

In Algorithm 2.3 each $\eta_k$ is chosen to minimize $\|f - A[(1 - \eta)v_{k-1} + \eta u_k]\|_Z$ over all $\eta \in \mathbb{R}$, where $\mathbb{R}$ is the set of all real numbers. The new sequence $\{v_k\}$ obviously has a non-increasing residual norm sequence $\{\|s_k\|_Z\}$, i.e., $\|s_k\|_Z \le \|s_{k-1}\|_Z$ and $\|s_k\|_Z \le \|r_k\|_Z$ for each k. Weiss [18] explored and analyzed residual smoothing techniques of the form of Algorithm 2.3 extensively; Zhou and Walker [13] referred to this technique as minimal residual smoothing (MRS). Weiss showed that applying MRS to an orthogonal residual method results in a minimal residual method. More general forms of residual smoothing techniques have been discussed by Brezinski and Redivo-Zaglia [19], but in this paper we restrict our attention to analyzing the MRS accelerated two-level (multigrid) method as discussed in Zhang [1].

To accelerate the two-level (multigrid) method, we insert the MRS procedure just after the residuals on the finest grid are computed and before they are projected onto the coarse grid. At each major iteration, we replace the original TLM iterate $u_k$ and its residual $r_k$ by the MRS iterate $v_k$ and the associated residual $s_k$. We then project the smoothed residual $s_k$ onto the coarse grid to form a coarse grid sub-problem. (Note that we must replace both the TLM iterate $u_k$ and its residual $r_k$ at the same time; otherwise the coarse grid sub-problem would provide a wrong correction to the fine grid.) In this way, we supply the coarse grid with smoothed residuals, which are essential for the coarse grid to provide a good coarse-grid-correction to the fine grid [2]. For motivation and various implementations of the MRS accelerated multigrid method, readers are referred to Zhang [1]. We give the algorithm of the MRS accelerated two-level method as follows:

Algorithm 2.4 Two-Level Method with MRS Acceleration (TLM-MRS) [1]

Given any initial guess $u_0$. For $k = 0, 1, 2, \ldots$, do:
    Relax $\nu_1$ times on $Au_k = f$ with the given initial guess $u_k$.
    Compute $r_k = f - Au_k$.
    If $k = 0$, then
        Set $v_0 = u_0$ and $s_0 = r_0$.
    Else
        Compute $\eta_k = -\langle s_{k-1}, r_k - s_{k-1} \rangle_Z / \|r_k - s_{k-1}\|_Z^2$.
        Set $s_k = s_{k-1} + \eta_k (r_k - s_{k-1})$ and $v_k = v_{k-1} + \eta_k (u_k - v_{k-1})$.
        Set $u_k = v_k$ and $r_k = s_k$.
    End if.
    Restrict $r_k^H = R r_k$.
    Solve $e_k^H = (A^H)^{-1} r_k^H$.
    Correct $u_{k+1} = u_k + P e_k^H$.
    Relax $\nu_2$ times on $Au_{k+1} = f$ with the initial guess $u_{k+1}$.
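The following Python sketch mirrors the structure of Algorithm 2.4 for the case Z = I. It is an illustration rather than the paper's implementation; the helper `smooth(A, f, u, nu)` (any relaxation routine performing nu sweeps), the matrices R and P, and the coarse operator A_H are assumed inputs chosen for this example.

```python
import numpy as np

def tlm_mrs(A, f, u0, A_H, R, P, smooth, nu1=1, nu2=1, num_cycles=10):
    """Sketch of Algorithm 2.4 (TLM-MRS) with Z = I.

    `smooth(A, f, u, nu)` is assumed to perform nu relaxation sweeps and return
    the updated iterate; R and P are the restriction and interpolation matrices,
    and A_H is the coarse grid operator (for instance the Galerkin operator R @ A @ P)."""
    u = u0.copy()
    for k in range(num_cycles):
        u = smooth(A, f, u, nu1)                 # nu_1 pre-smoothing sweeps
        r = f - A @ u                            # fine grid residual
        if k == 0:
            v, s = u.copy(), r.copy()            # initialize the MRS sequences
        else:
            d = r - s
            eta = -np.dot(s, d) / np.dot(d, d)   # MRS parameter eta_k
            s = s + eta * d                      # smoothed residual s_k
            v = v + eta * (u - v)                # smoothed iterate v_k
            u, r = v.copy(), s.copy()            # replace the TLM iterate and residual
        r_H = R @ r                              # restrict the (smoothed) residual
        e_H = np.linalg.solve(A_H, r_H)          # solve the coarse grid problem exactly
        u = u + P @ e_H                          # coarse-grid correction
        u = smooth(A, f, u, nu2)                 # nu_2 post-smoothing sweeps
    return u
```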

Algorithm 2.4 is the version of TLM-MRS proposed by Zhang [1], although a special (Euclidean) norm was used in [1] instead of the more general energy norm in Algorithm 2.4. $\nu_1$ and $\nu_2$ are the numbers of pre-smoothing and post-smoothing sweeps.


R and P are the restriction and interpolation operators, respectively. If the coarse grid operator $A^H$ is generated by the Galerkin technique, we have $A^H = RAP$. In Algorithm 2.4 the residual equation on the coarse grid is assumed to be solved exactly by a direct solver. If the coarse grid direct solver is recursively approximated by two-level methods, the resulting algorithm is the multigrid method.
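As a concrete illustration of the Galerkin construction $A^H = RAP$ (added here, not taken from the paper), the snippet below assembles linear interpolation, full-weighting restriction, and the resulting coarse grid operator for a 1D model Poisson matrix; these particular transfer operators are standard textbook choices and are not prescribed by the paper.

```python
import numpy as np

def linear_interpolation(n_coarse):
    """1D linear interpolation from n_coarse interior coarse points to
    2*n_coarse + 1 interior fine points (homogeneous Dirichlet boundaries)."""
    n_fine = 2 * n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2 * j + 1            # fine grid index of the j-th coarse point
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    return P

n_coarse = 7
n_fine = 2 * n_coarse + 1
P = linear_interpolation(n_coarse)
R = 0.5 * P.T                                    # full-weighting restriction
h = 1.0 / (n_fine + 1)
A = (2 * np.eye(n_fine) - np.eye(n_fine, k=1) - np.eye(n_fine, k=-1)) / h**2  # 1D Poisson matrix
A_H = R @ A @ P                                  # Galerkin coarse grid operator A^H = R A P
```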

3 MRS as a Semi-Iterative Method

In this section, we give some insight into how the MRS acceleration scheme works. We first prove the following theorem:

Theorem 3.1 The MRS technique in Algorithm 2.4 is a semi-iterative method with respect to the two-level method. The TLM-MRS Algorithm 2.4 is a polynomial acceleration of first order.

Proof. Let

$\{u_0, u_1, u_2, \ldots, u_k, \ldots\} \qquad (3)$

be the sequence generated by the two-level iteration process after the pre-smoothing sweeps. Let

$\{v_0, v_1, v_2, \ldots, v_k, \ldots\}$

be the sequence generated by the MRS scheme from the TLM sequence (3). Hence, at the kth iteration, we have $v_k = v_{k-1} + \eta_k (u_k - v_{k-1})$ by the definition of the MRS acceleration. We define a new sequence $\{z_k\}$ by

$z_{2k} = u_k, \quad z_{2k+1} = v_k, \qquad k = 0, 1, \ldots.$

It is obvious that the sequence $\{z_k\}$ is formed as

$\{u_0, v_0, u_1, v_1, u_2, v_2, \ldots, u_k, v_k, \ldots\}.$

It is easy to see that the sequence $\{z_k\}$ consists of the iterates of Algorithm 2.4, and that each new iterate is generated by the procedure

$z_{2k+1} = z_{2k-1} + \eta_k (z_{2k} - z_{2k-1}) \qquad (4)$

with $z_{2k-1} = v_{k-1}$, $z_{2k} = u_k$ and $z_{2k+1} = v_k$. Iteration procedure (4) is what Varga [20] calls a semi-iterative method with respect to the underlying two-level method. The combined TLM-MRS is therefore the so-called polynomial acceleration of first order due to Hageman and Young [21, p. 40]. □

Iteration procedure (4) is reminiscent of an SOR acceleration step with respect to the underlying iterative method. The only difference is that $z_{2k-1}$ is not the value of the previous underlying iterate, but the value of the previous TLM-MRS iterate. If TLM-MRS (or multigrid-MRS) is considered as TLM with an SOR-type acceleration, TLM-MRS (or multigrid-MRS) with Gauss-Seidel relaxation may not be efficient in solving diffusion-dominated (Poisson-like) problems when the grid points are ordered in a red-black fashion and the discretization is the five-point second-order central difference scheme.


Remark 3.2 If the Poisson equation is discretized by the standard five-point second-order central difference scheme and if the red-black SOR relaxation is used in the multigrid method, two different relaxation parameters are necessary to achieve efficient acceleration. An over-relaxation parameter ($\omega > 1$) should be used in the projection half cycle and an under-relaxation parameter ($\omega < 1$) should be used in the interpolation half cycle.

Proof. See Observations 1 and 2 in Zhang [7]. □

The following remarks follow immediately from Remark 3.2.

Remark 3.3 No single relaxation parameter may be efficiently employed in the red-black SOR relaxation in multigrid for solving the Poisson equation discretized by the five-point second-order central difference scheme.

Remark 3.4 TLM-MRS as defined in Algorithm 2.4 may not be effective when it is used to solve the Poisson equation with the five-point red-black Gauss-Seidel relaxation.

Although the above remarks are made with respect to the Poisson equation, they are applicable to diffusion-dominated (Poisson-like) problems. It has long been observed that SOR acceleration is not effective in accelerating the nine-point Gauss-Seidel multigrid method for solving the Poisson equation [22]. Hence, by the above remarks, TLM-MRS with a nine-point discretization may also be inefficient for solving Poisson-like equations. This offers a heuristic explanation of the numerical results of Zhang [1], where the efficiency of the MRS acceleration scheme deteriorated when it was used to accelerate a multigrid method with a nine-point formula for solving diffusion-dominated problems.

4 Convergence Analysis

4.1 MRS with Coarse-Grid-Correction Operator

Let us first assume that there is no smoothing, i.e., $\nu_1 = \nu_2 = 0$, in order to analyze the effect of the MRS acceleration on the coarse-grid-correction operator. The coarse-grid-correction operator with respect to the residual is given by [2, p. 90]

$C = I - AP(A^H)^{-1}R. \qquad (5)$

At the kth iteration, suppose the residual is $r_k$; then after the kth coarse-grid-correction the residual changes to

$r_{k+1} = C r_k. \qquad (6)$

If TLM is accelerated by MRS (Algorithm 2.4), the residual after the kth MRS accelerated iteration reads

$s_k = (1 - \eta_k)s_{k-1} + \eta_k r_k, \qquad (7)$

where $\eta_k$ is given by Algorithm 2.4. Hence, after the kth coarse-grid-correction, the new residual is

$r_{k+1} = C s_k = (1 - \eta_k)C s_{k-1} + \eta_k C r_k. \qquad (8)$

Since we have replaced the TLM residual $r_{k-1}$ by the MRS residual $s_{k-1}$ at the (k-1)th iteration, we have

$r_k = C s_{k-1} \qquad (9)$

by virtue of (6). Substituting (9) into (8), we obtain the residual after the kth MRS accelerated coarse-grid-correction,

$r_{k+1} = (1 - \eta_k)r_k + \eta_k C r_k = [(1 - \eta_k)I + \eta_k C]r_k = [I - \eta_k AP(A^H)^{-1}R]r_k, \qquad (10)$

by using (5) and (9). Therefore, the error after the kth MRS accelerated coarse-grid-correction is

$e_{k+1} = A^{-1}[I - \eta_k AP(A^H)^{-1}R]r_k. \qquad (11)$

Theorem 4.1 At the kth TLM-MRS iteration without smoothing, the error iteration matrix is given by

$E_k = I - \eta_k P(A^H)^{-1}RA \qquad (12)$

and the residual iteration matrix by

$T_k = I - \eta_k AP(A^H)^{-1}R, \qquad (13)$

with $\eta_k$ given by Algorithm 2.4.

Proof. The residual iteration matrix (13) is obtained directly from (10). At the kth iteration, the residual $r_k$ and the corresponding error $e_k$ satisfy the error (residual) equation

$A e_k = r_k. \qquad (14)$

Substituting (14) into (11), we obtain (12) as the error iteration matrix of Algorithm 2.4 at the kth iteration without smoothing sweeps. □

Comparing the MRS accelerated coarse-grid-correction residual operator (13) with the standard coarse-grid-correction residual operator (5), we have the following corollary:

Corollary 4.2 The acceleration rate of the MRS acceleration scheme with the coarse-grid-correction operator at the kth iteration is given by

$\alpha_k = \frac{\|I - \eta_k P(A^H)^{-1}RA\|_Z}{\|I - P(A^H)^{-1}RA\|_Z}, \qquad (15)$

with $\eta_k$ being the MRS parameter given by Algorithm 2.4.

The MRS acceleration scheme may be heuristically viewed as scaling the operator $P(A^H)^{-1}RA$ by the MRS parameter $\eta_k$ so that $\|I - \eta_k P(A^H)^{-1}RA\|_Z$ is (hopefully) smaller than 1. In this case, we obtain a convergent method without any relaxation scheme on the finest grid. This property has been verified numerically. From (15) it seems that the optimal scaling factor $\eta_k$ for accelerating the convergence is the one that fulfills the following minimization problem:

$\|I - \eta_k P(A^H)^{-1}RA\|_Z = \min_{\eta \in \mathbb{R}} \left\{ \sup_{0 \neq v \in \Omega^h} \frac{\|[I - \eta P(A^H)^{-1}RA]v\|_Z}{\|v\|_Z} \right\}. \qquad (16)$

However, solving minimization problem (16) is by no means realistic. In [5, 6], Reusken and Vanek discussed the so-called over-correction technique, which solves a minimization problem similar to (16) to optimize the computed correction (after the coarse-grid-correction procedure) under the assumption that $Z = A$ (and positive definite) and that the number of post-smoothing sweeps is nonzero.
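A quick numerical check of identity (10) can be carried out with a few lines of NumPy (an illustration added here, not taken from the paper; the fine grid operator and transfer operators are generated randomly for the test): applying the coarse-grid correction C to the smoothed residual $s_k$ reproduces the action of the residual iteration matrix $T_k$ of (13) on $r_k$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 6
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned fine grid operator
P = rng.standard_normal((n, m))                     # interpolation
R = rng.standard_normal((m, n))                     # restriction
A_H = R @ A @ P                                     # Galerkin coarse grid operator
C = np.eye(n) - A @ P @ np.linalg.solve(A_H, R)     # coarse-grid-correction residual operator (5)

eta = 0.7                                           # some MRS parameter eta_k
s_prev = rng.standard_normal(n)                     # smoothed residual s_{k-1}
r_k = C @ s_prev                                    # (9): r_k = C s_{k-1}
s_k = (1 - eta) * s_prev + eta * r_k                # (7): MRS residual update
T_k = np.eye(n) - eta * A @ P @ np.linalg.solve(A_H, R)   # (13): residual iteration matrix

print(np.allclose(C @ s_k, T_k @ r_k))              # verifies (10): C s_k = [I - eta A P (A^H)^{-1} R] r_k
```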


4.2 MRS with Two-Level Operator

Now we consider the case in which the number of smoothing sweeps is nonzero, i.e., $\nu_1 + \nu_2 > 0$. Let there be a regular splitting of A and let M and N be nonsingular square matrices on $\Omega^h$ satisfying the consistency condition (see [21, p. 19])

$M + NA = I. \qquad (17)$

Definition 4.3 The smoothing iterative method is defined to be of the form

$S(v) = Mv + Nf, \qquad (18)$

where $v \in \Omega^h$. For any integer $\nu \ge 1$, we recursively define

$S^\nu(v) = S(S^{\nu-1}(v)).$

For notational convenience, we denote

$S^0(v) = I(v) = v.$

M is sometimes called the (error) iteration matrix of the smoothing iterative method (18).
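For concreteness (an added sketch, not taken from the paper), damped Jacobi is one smoother of the form (18): with $N = \omega D^{-1}$ and $M = I - NA$, the consistency condition (17) holds. A routine of this shape could also serve as the `smooth` helper assumed in the earlier TLM-MRS sketch.

```python
import numpy as np

def jacobi_smoother(A, f, v, nu, omega=0.8):
    """nu sweeps of damped Jacobi written in the form (18), S(v) = M v + N f,
    with N = omega * D^{-1} and M = I - N A, so that M + N A = I as in (17).
    (Damped Jacobi is just one admissible choice; the paper only assumes (17).)"""
    D_inv = 1.0 / np.diag(A)
    for _ in range(nu):
        v = v + omega * D_inv * (f - A @ v)   # equivalent to v <- M v + N f
    return v
```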

Definition 4.4 Denote the residual iteration matrix of the smoothing iterative method (18) by (see [2, p. 20])

$\tilde{M} = AMA^{-1} = I - AN.$

For any integer $\nu \ge 1$, we recursively define $\tilde{M}^\nu = \tilde{M}\tilde{M}^{\nu-1}$.

Lemma 4.5 For any integer $\nu > 0$, the following identities are valid:

$\tilde{M}^\nu = AM^\nu A^{-1}, \qquad (19)$
$A^{-1}\tilde{M}^\nu A = M^\nu, \qquad (20)$
$A^{-1}\tilde{M}^\nu = M^\nu A^{-1}, \qquad (21)$
$\tilde{M}^\nu A = AM^\nu. \qquad (22)$

Proof. (19) is proved by induction on $\nu$. (20) follows immediately from (19) and Definition 4.4. (21) and (22) are special cases of (20). □

The two-level residual iteration operator with $\nu_1$ pre-smoothing and $\nu_2$ post-smoothing sweeps is given by [2, p. 90]

$\tilde{C} = \tilde{M}^{\nu_2} C \tilde{M}^{\nu_1}, \qquad (23)$

where C is the coarse-grid-correction residual operator (5). The residual after the kth TLM-MRS iteration is

$r_{k+1} = \tilde{C}s_k = \tilde{M}^{\nu_2} C \tilde{M}^{\nu_1}[(1 - \eta_k)s_{k-1} + \eta_k r_k] = (1 - \eta_k)\tilde{M}^{\nu_2} C \tilde{M}^{\nu_1} s_{k-1} + \eta_k \tilde{M}^{\nu_2} C \tilde{M}^{\nu_1} r_k. \qquad (24)$

By the definitions of the MRS acceleration and of the standard TLM residual iteration operator (23), we have

$r_k = \tilde{C}s_{k-1}. \qquad (25)$


Substituting (25) into (24), we obtain the new residual after the kth TLM-MRS iteration:

$r_{k+1} = [(1 - \eta_k)I + \eta_k \tilde{M}^{\nu_2} C \tilde{M}^{\nu_1}]r_k = \{(1 - \eta_k)I + \eta_k \tilde{M}^{\nu_2}[I - AP(A^H)^{-1}R]\tilde{M}^{\nu_1}\}r_k. \qquad (26)$

Theorem 4.6 At the kth TLM-MRS iteration, the error iteration matrix is given by

$\tilde{E}_k = I - \eta_k[I - M^{\nu_1+\nu_2} + M^{\nu_2}P(A^H)^{-1}RAM^{\nu_1}] \qquad (27)$

and the residual iteration matrix by

$\tilde{T}_k = (1 - \eta_k)I + \eta_k \tilde{C} \qquad (28)$
$\quad\;\; = I - \eta_k[I - AM^{\nu_1+\nu_2}A^{-1} + AM^{\nu_2}P(A^H)^{-1}RAM^{\nu_1}A^{-1}]. \qquad (29)$

Proof. The residual iteration matrices (28) and (29) are obtained directly from (26) and Lemma 4.5. The proof of (27) follows from (14), (26) and Lemma 4.5:

$\tilde{E}_k = (1 - \eta_k)I + \eta_k A^{-1}\tilde{M}^{\nu_2}[I - AP(A^H)^{-1}R]\tilde{M}^{\nu_1}A = (1 - \eta_k)I + \eta_k M^{\nu_2}A^{-1}[I - AP(A^H)^{-1}R]AM^{\nu_1} = I - \eta_k[I - M^{\nu_1+\nu_2} + M^{\nu_2}P(A^H)^{-1}RAM^{\nu_1}]. \qquad \Box$

This finishes the proof of Theorem 4.6.
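The equivalence of (28) and (29) can be sanity-checked numerically (an illustration added here, not from the paper; a damped Jacobi splitting and randomly generated transfer operators are assumed for the test):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, nu1, nu2 = 10, 5, 2, 1
A = rng.standard_normal((n, n)) + n * np.eye(n)
P = rng.standard_normal((n, m))
R = rng.standard_normal((m, n))
A_H = R @ A @ P
Ainv = np.linalg.inv(A)

# Smoother splitting: damped Jacobi, N = omega * D^{-1}, M = I - N A, so M + N A = I as in (17).
omega = 0.6
N = omega * np.diag(1.0 / np.diag(A))
M = np.eye(n) - N @ A                               # error iteration matrix
M_t = A @ M @ Ainv                                  # residual iteration matrix (Definition 4.4)

C = np.eye(n) - A @ P @ np.linalg.solve(A_H, R)     # coarse-grid-correction residual operator (5)
C_t = np.linalg.matrix_power(M_t, nu2) @ C @ np.linalg.matrix_power(M_t, nu1)   # (23)

eta = 0.8
T_t = (1 - eta) * np.eye(n) + eta * C_t             # (28)
Mp = lambda p: np.linalg.matrix_power(M, p)
T_explicit = np.eye(n) - eta * (np.eye(n) - A @ Mp(nu1 + nu2) @ Ainv
                                + A @ Mp(nu2) @ P @ np.linalg.solve(A_H, R @ A) @ Mp(nu1) @ Ainv)  # (29)
print(np.allclose(T_t, T_explicit))
```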

5 Bounds of Residual Reduction Rates

The convergence analyses given above do not allow an easy and quantitative understanding of how MRS accelerates the multigrid iteration process. The following assumption, which is motivated by the work of Brandt and Mikulinsky [4], is aimed at simplifying the analysis and at obtaining quantitative insight into the performance of the TLM-MRS algorithm.

Assumption 5.1 Let $\tilde{C}$ be the residual iteration operator of the two-level method. Let any initial residual $r_0$ be decomposed into one possibly slow component $r_0^{(s)}$ and a remainder $r_0^{(f)}$ made up of fast components, i.e., $r_0 = r_0^{(s)} + r_0^{(f)}$, where

$\tilde{C}r_0^{(s)} = \varepsilon_s r_0^{(s)}$ and $\|\tilde{C}r_0^{(f)}\|_Z < \varepsilon_f \|r_0^{(f)}\|_Z.$

$\varepsilon_s$ and $\varepsilon_f$ measure the convergence rates of the slow and fast components, respectively. We also assume that the following condition holds [4]:

$0 < \varepsilon_f < 1/2 < |\varepsilon_s| \le 1.$

Remark 5.2 Since high frequency errors are damped quickly by the smoothing sweeps, $r_0^{(f)}$ is mostly made up of the high frequency (rough) components of the residual. Similarly, $r_0^{(s)}$ is mostly made up of the low frequency (slow) components of the residual.


The residual after the kth TLM cycle will be

$r_{k+1} = \tilde{C}^{k+1}(r_0^{(s)} + r_0^{(f)}) = \varepsilon_s^{k+1} r_0^{(s)} + r_{k+1}^{(f)},$

where

$\|r_{k+1}^{(f)}\|_Z \le \varepsilon_f^{k+1}\|r_0^{(f)}\|_Z.$

Lemma 5.3 Let the initial residual $r_0$ satisfy all the conditions of Assumption 5.1. Then after the kth TLM-MRS iteration, the slow and fast components of the residual satisfy

$r_{k+1}^{(s)} = \left[\prod_{i=1}^{k}(1 - \eta_i + \eta_i\varepsilon_s)\right]\varepsilon_s r_0^{(s)}, \qquad (30)$

$\|r_{k+1}^{(f)}\|_Z \le \left[\prod_{i=1}^{k}(|1 - \eta_i| + |\eta_i|\varepsilon_f)\right]\varepsilon_f \|r_0^{(f)}\|_Z. \qquad (31)$

Proof. We only give a simplified proof of (30); the complete proof of this lemma is lengthy and readers are referred to Zhang [23]. By Theorem 4.6, after the kth TLM-MRS iteration, the slow component of the residual becomes

$r_{k+1}^{(s)} = \tilde{T}_k \tilde{T}_{k-1} \cdots \tilde{T}_1 \tilde{T}_0 r_0^{(s)} = [(1 - \eta_k)I + \eta_k\tilde{C}][(1 - \eta_{k-1})I + \eta_{k-1}\tilde{C}] \cdots [(1 - \eta_1)I + \eta_1\tilde{C}][(1 - \eta_0)I + \eta_0\tilde{C}]r_0^{(s)} = \left\{\prod_{i=0}^{k}[(1 - \eta_i)I + \eta_i\tilde{C}]\right\}r_0^{(s)} = \left[\prod_{i=0}^{k}(1 - \eta_i + \eta_i\varepsilon_s)\right]r_0^{(s)} = \left[\prod_{i=1}^{k}(1 - \eta_i + \eta_i\varepsilon_s)\right]\varepsilon_s r_0^{(s)}.$

The last equality is valid because $\eta_0 = 1$ by the definition of Algorithm 2.4. $\tilde{T}_k$ is the TLM-MRS residual iteration matrix at the kth iteration given by (28). □
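To see what the product in (30) buys, here is a small numerical illustration (added here, not from the paper): for a sign-alternating slow eigenvalue $\varepsilon_s = -0.8$, the unaccelerated TLM reduces the slow component only by $|\varepsilon_s|^{k+1}$, whereas factors $1 - \eta_i + \eta_i\varepsilon_s$ driven toward zero by the MRS parameters reduce it much faster. The particular values of $\eta_i$ used below are arbitrary sample choices, not the adaptively computed parameters of Algorithm 2.4.

```python
import numpy as np

def slow_reduction(etas, eps_s):
    """Right-hand side of (30): factor multiplying r_0^(s) after k TLM-MRS cycles."""
    return np.prod([1.0 - eta + eta * eps_s for eta in etas]) * eps_s

eps_s, k = -0.8, 5
plain_tlm = eps_s ** (k + 1)                # unaccelerated TLM after k+1 cycles
eta_best = 1.0 / (1.0 - eps_s)              # makes every factor 1 - eta + eta*eps_s vanish
print(abs(plain_tlm))                             # ~0.26
print(abs(slow_reduction([0.7] * k, eps_s)))      # ~1e-3 with a fixed, non-optimal eta
print(abs(slow_reduction([eta_best] * k, eps_s))) # 0.0
```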

Lemma 5.4 Under the conditions of Lemma 5.3, after the kth TLM-MRS iteration, the residual satisfies

$\|r_{k+1}\|_Z = \left\|\prod_{i=0}^{k}\tilde{T}_i r_0\right\|_Z \le |\psi_k(\varepsilon_s)|\,\|r_0^{(s)}\|_Z + \phi_k(\varepsilon_f)\,\|r_0^{(f)}\|_Z, \qquad (32)$

where

$\psi_k(\varepsilon_s) = \left[\prod_{i=1}^{k}(1 - \eta_i + \eta_i\varepsilon_s)\right]\varepsilon_s \qquad (33)$

and

$\phi_k(\varepsilon_f) = \left[\prod_{i=1}^{k}(|1 - \eta_i| + |\eta_i|\varepsilon_f)\right]\varepsilon_f. \qquad (34)$

Proof. See Zhang [23]. □

$|\psi_k(\varepsilon_s)|$ and $\phi_k(\varepsilon_f)$ measure the reduction rates of the slow and fast components of the residual under TLM-MRS, respectively. $|\psi_k(\varepsilon_s)|^{1/(k+1)}$ and $\phi_k(\varepsilon_f)^{1/(k+1)}$ are the average reduction factors of the slow and fast residual components over the first $k+1$ iterations. Acceleration is achieved by speeding up the convergence of the slow component, which leads to the following assumption:

Assumption 5.5 Let there exist some $0 < \hat{\varepsilon}_s < 1$ such that $\hat{\varepsilon}_s < |\varepsilon_s|$ and

$|1 - \eta_k + \eta_k\varepsilon_s| \le \hat{\varepsilon}_s \quad \text{for all } k \ge 1. \qquad (35)$

$\hat{\varepsilon}_s$ is an upper bound of the residual reduction rate of TLM-MRS.

Lemma 5.6 If Assumption 5.5 holds, then the MRS parameter $\eta_k$ satisfies

$\eta_k \ge \tilde{\eta} = \frac{1 - \hat{\varepsilon}_s}{1 - \varepsilon_s} \qquad (36)$

and

$\eta_k \le \bar{\eta} = \frac{1 + \hat{\varepsilon}_s}{1 - \varepsilon_s} \qquad (37)$

for each $k \ge 1$.

Proof. Inequalities (36) and (37) are the solutions of inequality (35). □

Lemma 5.7 Let Assumptions 5.1 and 5.5 hold. Then

$|\psi_k(\varepsilon_s)| \le |\varepsilon_s|\,\hat{\varepsilon}_s^{\,k} \qquad (38)$

holds for any $k \ge 1$.

Proof. Inequality (38) follows from (33) and (35). □

Lemma 5.8 Let Assumptions 5.1 and 5.5 and the inequality

$-1 \le \varepsilon_s < -1/2 \qquad (39)$

hold. We define

$\bar{\varepsilon}_f(\varepsilon_s, \hat{\varepsilon}_s, \varepsilon_f)_k \stackrel{\mathrm{def}}{=} |1 - \eta_k| + |\eta_k|\varepsilon_f. \qquad (40)$

If

$\frac{\varepsilon_f - \varepsilon_s}{2 - \varepsilon_s - \varepsilon_f} \le \hat{\varepsilon}_s \qquad (41)$

holds, then

$\bar{\varepsilon}_f(\varepsilon_s, \hat{\varepsilon}_s, \varepsilon_f)_k \le \hat{\varepsilon}_s.$

Proof. By assumption (39) and Lemma 5.6 (37), we have

$0 < |\eta_k| \le \bar{\eta} = \frac{1 + \hat{\varepsilon}_s}{1 - \varepsilon_s},$

because $\hat{\varepsilon}_s < |\varepsilon_s|$ leads to $\hat{\varepsilon}_s < -\varepsilon_s$.