On the Modularity of Termination of Term Rewriting Systems

Enno Ohlebusch
Universität Bielefeld, Technische Fakultät, Postfach 100131, 33501 Bielefeld, Germany
e-mail:
[email protected]

Abstract

It is well-known that termination is not a modular property of term rewriting systems, i.e., it is not preserved under disjoint union. The objective of this paper is to provide a "uniform framework" for sufficient conditions which ensure the modularity of termination. We will prove the following result. Whenever the disjoint union of two terminating term rewriting systems is non-terminating, then one of the systems is not CE-terminating (i.e., it loses its termination property when extended with the rules Cons(x, y) → x and Cons(x, y) → y) and the other is collapsing. This result has already been achieved by Gramlich [7] for finitely branching term rewriting systems. A more sophisticated approach is necessary, however, to prove it in full generality. Most of the known sufficient criteria for the preservation of termination [24, 15, 13, 7] follow as corollaries from our result, and new criteria are derived. This paper particularly settles the open question whether simple termination is modular in general. We will moreover shed some light on modular properties of combined systems with shared constructors. For instance, it will be shown that the above result does not carry over to constructor-sharing systems.
1 Introduction

During the past decade, term rewriting has gained enormous importance in fields of computer science concerned with symbolic manipulation. Within the subject of term rewriting, modular aspects have recently been receiving increasing attention. As is well-known from software engineering, programmers are encouraged to write their programs in a modular way in order to handle large systems. Thus, from a practical point of view, it is worth knowing under what conditions the combined program inherits properties from its constituent modules. The simplest way to combine two term rewriting systems (TRSs) is their disjoint union. This means that the signatures F1 and F2 of two TRSs (F1, R1) and (F2, R2) have to be disjoint; then their disjoint union is (F, R) = (F1 ⊎ F2, R1 ⊎ R2). A property P of TRSs is called modular if for all disjoint TRSs R1 and R2 their disjoint union R1 ⊎ R2 has the property P if and only if both R1 and R2 have the property P. In his pioneering paper [27], Toyama showed that confluence is modular. In contrast to this, termination lacks a modular
behavior. This is demonstrated by the following famous example (cf. [26]).
Example 1.1 The term rewriting systems (F1, R1) = ({0, 1, F}, {F(0, 1, x) → F(x, x, x)}) and (F2, R2) = ({g}, {g(x, y) → x, g(x, y) → y}) are terminating, but their disjoint union is not terminating, for there is the cyclic rewrite derivation

t = F(0, 1, g(0, 1)) →R1 F(g(0, 1), g(0, 1), g(0, 1)) →R2 F(0, g(0, 1), g(0, 1)) →R2 t.

In [22] an example is given which shows that R1 ⊎ R2 may be non-terminating even if R1 and R2 are terminating, confluent, irreducible, and variable-preserving. Naturally the question arises what restrictions have to be imposed on the constituent TRSs so that their disjoint union is again terminating. Toyama, Klop, and Barendregt showed in [28] that termination is modular for confluent left-linear TRSs. But the first results were obtained by investigating the distribution of collapsing rules (a rewrite rule is collapsing if its right-hand side is a variable) and duplicating rules (a rewrite rule l → r is duplicating if r contains more occurrences of some variable than l) among the TRSs. Note that in the above example R1 consists of a duplicating rule, whereas R2 contains two collapsing rules. These results are stated in the next theorem.
Theorem 1.2 Let R1 and R2 be two disjoint terminating TRSs. Their disjoint union is terminating provided that one of the following conditions is satisfied:
1. Neither R1 nor R2 contains collapsing rules.
2. Neither R1 nor R2 contains duplicating rules.
3. One of the systems contains neither collapsing nor duplicating rules.

Statements 1 and 2 were first proved by Rusinowitch in [24] (Drosten obtained parts of these results independently; in 2 he required right-linearity, cf. [5]). The proof of the last statement is due to Middeldorp [15]. A very simple intuitive proof of Theorem 1.2 can be found in [20] (see also the proof of Theorem 4.13). An equivalent formulation of Theorem 1.2 reads as follows: if R1 and R2 are two disjoint terminating TRSs such that their disjoint union R1 ⊎ R2 is non-terminating, then R1 is duplicating and R2 is collapsing, or vice versa. Kurihara and Ohuchi were the first to observe that in each of the known counterexamples one of the systems was not simplifying. They proved in [13] that this is essential:
Theorem 1.3 To be simplifying is a modular property of TRSs.

We call a TRS simplifying if its rewrite relation is contained in some simplification ordering (as a matter of fact, Kurihara and Ohuchi used the term "simple termination" for this case; however, "simple termination" in their sense does not imply termination in general), and we say that a TRS is simply terminating if its rewrite relation is contained in some well-founded simplification ordering. It is well-known that the simplifying property and simple termination are equivalent for finite TRSs, hence the above theorem implies:
Corollary 1.4 Simple termination is a modular property of finite TRSs.
The above corollary is very important from a practical point of view because many termination proofs are done semi-mechanically with the aid of theorem provers by means of some implemented well-founded simplification ordering (like RPO, LPO, RDO, KNS, cf. [25]). Gramlich proved in [7] the following abstract result for finitely branching TRSs, i.e., those TRSs R which satisfy the property: for every rule l → r ∈ R, there are only finitely many different rules in R with the same left-hand side l.
Theorem 1.5 Let R1 and R2 be disjoint finitely branching terminating TRSs such that their disjoint union R1 ⊎ R2 is non-terminating. Then R1 is not CE-terminating and R2 is collapsing, or vice versa.
A TRS R is called CE-terminating if the TRS R ⊎ {Cons(x, y) → x, Cons(x, y) → y} is terminating, where Cons is some binary function symbol that does not occur in the signature of R. As above, this theorem can be paraphrased as follows. Let R1 and R2 be two disjoint finitely branching terminating TRSs. Their disjoint union R1 ⊎ R2 is terminating provided that one of the following conditions is satisfied:

1. Neither R1 nor R2 contains collapsing rules.
2. Both R1 and R2 are CE-terminating.
3. One of the systems is CE-terminating and does not contain collapsing rules.

In [7] it is shown that if a terminating TRS is not CE-terminating, then it is duplicating and not simply terminating. Hence for the class of finitely branching TRSs, Theorem 1.5 contains Theorem 1.2 as a special case. Moreover, it extends Corollary 1.4 to the class of finitely branching TRSs. In this paper, we generalize the above results to arbitrary (i.e., possibly infinitely branching) TRSs. More precisely, we prove Theorem 1.5 for arbitrary disjoint TRSs, solving a recent conjecture of Gramlich [7]. It should be pointed out that a more sophisticated approach than in [7] is necessary to do this. From our result, Theorem 1.2 follows as a corollary and Corollary 1.4 can be proved for arbitrary TRSs. This settles in particular the open question whether simple termination is a modular property and bridges the existing theoretical gap. One way of weakening the disjointness requirement is to allow shared constructors. Constructors are symbols that do not occur at the root position of the left-hand side of any rewrite rule. We say that two TRSs (F1, R1) and (F2, R2) share constructors if F1 ∩ F2 ∩ {root(l) | l → r ∈ R1 ∪ R2} = ∅ and call the TRS (F, R) = (F1 ∪ F2, R1 ∪ R2) their combined system with shared constructors. A property P is called modular for TRSs with shared constructors if every combined system has the property P if and only if both constituent TRSs have the property P.
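The effect of the Cons extension can be seen concretely on the duplicating system R1 of Example 1.1: adding the two projection rules recreates the cyclic derivation, so that R1 is terminating but not CE-terminating. The following is a minimal sketch (the term encoding and helper names are mine, not the paper's): terms are nested tuples, variables are strings starting with "?".

```python
# Sketch (assumed encoding, not from the paper): terms are nested tuples,
# variables are strings starting with "?".  We check that R1 from Example 1.1,
# although terminating on its own, admits a cycle once the projection rules
# Cons(x,y) -> x and Cons(x,y) -> y are added, i.e. R1 is not CE-terminating.

def match(pat, term, sub):
    """Extend substitution `sub` so that pat instantiated by sub equals term."""
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in sub:
            return sub if sub[pat] == term else None
        return {**sub, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) \
            and len(pat) == len(term) and pat[0] == term[0]:
        for p, t in zip(pat[1:], term[1:]):
            sub = match(p, t, sub)
            if sub is None:
                return None
        return sub
    return sub if pat == term else None

def apply_sub(term, sub):
    if isinstance(term, str):
        return sub.get(term, term)
    return (term[0],) + tuple(apply_sub(a, sub) for a in term[1:])

def successors(term, rules):
    """All one-step reducts of `term` (rules applied at the root or below)."""
    out = []
    for lhs, rhs in rules:
        sub = match(lhs, term, {})
        if sub is not None:
            out.append(apply_sub(rhs, sub))
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            for red in successors(term[i], rules):
                out.append(term[:i] + (red,) + term[i + 1:])
    return out

R1 = [(("F", "0", "1", "?x"), ("F", "?x", "?x", "?x"))]
CE = [(("Cons", "?x", "?y"), "?x"), (("Cons", "?x", "?y"), "?y")]

c = ("Cons", "0", "1")
t = ("F", "0", "1", c)
s1 = ("F", c, c, c)        # by the R1 rule
s2 = ("F", "0", c, c)      # by Cons(x,y) -> x
assert s1 in successors(t, R1 + CE)
assert s2 in successors(s1, R1 + CE)
assert t in successors(s2, R1 + CE)   # back at t: a cyclic derivation
```

The cycle mirrors the derivation of Example 1.1, with Cons(0, 1) playing the role of g(0, 1).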
Our next goal is to elucidate which properties are also modular for constructor-sharing TRSs. It is known that confluence lacks modular behavior, whereas local confluence is modular (cf. [14]). It turns out that normalization is also a modular property of constructor-sharing TRSs. Before we turn to the question of whether the above theorems on the modularity of termination also hold in the presence of shared constructors, we first collect some known results. Kurihara and Ohuchi extended their result of [13] to combined systems with shared constructors (cf. [14]):
Theorem 1.6 To be simplifying is a modular property of constructor-sharing TRSs.

Corollary 1.7 Simple termination is a modular property of finite constructor-sharing TRSs.

Gramlich showed in [7] the following extension of Theorem 1.5 to constructor-sharing systems (a rule l → r is called constructor-lifting if root(r) is a shared constructor).
Theorem 1.8 Let R1 and R2 be finitely branching terminating TRSs with shared constructors such that their combined system R1 ∪ R2 is non-terminating. Then R1 is collapsing or constructor-lifting and R2 is not CE-terminating, or vice versa.

Since non-duplicating TRSs are CE-terminating, Theorem 1.8 in particular implies:

Corollary 1.9 Let R1 and R2 be terminating finitely branching TRSs with shared constructors. Their combined system R1 ∪ R2 is terminating provided that one of the following conditions is satisfied:

1. Neither R1 nor R2 contains collapsing or constructor-lifting rules.
2. Neither R1 nor R2 contains duplicating rules.
3. One of the systems contains neither collapsing, constructor-lifting, nor duplicating rules.
Constructor-lifting rules have to be excluded (cf. [7]) because they may have the same impact as collapsing ones (note that it is easy to obtain an example in which both TRSs are also confluent by modifying known counterexamples to the modularity of completeness).
Example 1.10 The term rewriting systems (F1, R1) = ({0, 1, F}, {F(0, 1, x) → F(x, x, x)}) and (F2, R2) = ({a, 0, 1}, {a → 0, a → 1}) share the constructors 0 and 1. They are terminating, but their combined system is not terminating, for there is the cyclic rewrite derivation

F(0, 1, a) →R1 F(a, a, a) →R2 F(0, a, a) →R2 F(0, 1, a).
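The sharing condition itself is purely syntactic and easy to test mechanically. A small sketch (the function names and the term encoding are my own, not the paper's): rules are (lhs, rhs) pairs over nested tuples, constants are bare strings, and two systems share only constructors when no symbol in F1 ∩ F2 occurs at the root of a left-hand side.

```python
# Hypothetical helper (names are mine): check the shared-constructor condition
# F1 ∩ F2 ∩ {root(l) | l -> r in R1 ∪ R2} = ∅ for the systems of Example 1.10.

def root(term):
    return term[0] if isinstance(term, tuple) else term

def defined_symbols(rules):
    """Symbols occurring at the root of a left-hand side."""
    return {root(l) for l, r in rules}

def share_only_constructors(F1, R1, F2, R2):
    return not (F1 & F2 & defined_symbols(R1 + R2))

F1 = {"F", "0", "1"}
R1 = [(("F", "0", "1", "?x"), ("F", "?x", "?x", "?x"))]
F2 = {"a", "0", "1"}
R2 = [("a", "0"), ("a", "1")]

# 0 and 1 are shared, but neither is a defined symbol: only constructors shared.
assert share_only_constructors(F1, R1, F2, R2)
# Sharing a defined symbol violates the condition:
assert not share_only_constructors({"g"}, [(("g", "?x", "?y"), "?x")], {"g"}, [])
```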
We will show by counterexamples that Theorem 1.8 and Corollary 1.7 do not extend to arbitrary TRSs with shared constructors. In particular, CE-termination and simple termination are not modular properties of TRSs with shared constructors. Consequently, we are interested in the restrictions under which these results do hold. We will show that Corollary 1.7 can be generalized to the class of TRSs which introduce only finitely many function symbols. In [7], it is shown that this corollary can also be extended to the class of finitely branching TRSs. Moreover, by a simple proof similar to the one of Theorem 1.2 given in [20], we show that Corollary 1.9 can be generalized to arbitrary TRSs. The paper is organized as follows: Section 2 briefly recalls the basic notions of term rewriting and also deals with simple termination. In Section 3, we prove the generalization of Theorem 1.5 for arbitrary TRSs, from which Theorem 1.2 follows as a corollary and which also entails the modularity of CE-termination and simple termination. In Section 4, we first introduce the required notions for combined systems with shared constructors and then analyse in which cases theorems for the disjoint union also hold in the presence of shared constructors. The last section is dedicated to concluding remarks.
2 Preliminaries

In this section, we briefly recall the basic notions of term rewriting as surveyed in, e.g., Dershowitz and Jouannaud [3] and Klop [10]. A signature is a countable set F of function symbols or operators, where every f ∈ F is associated with a natural number denoting its arity. F_n denotes the set of all function symbols of arity n; hence F = ⋃_{n≥0} F_n. Elements of F_0 are called constants. The set T(F, V) of terms built from a signature F and a countable set of variables V with F ∩ V = ∅ is the smallest set such that V ⊆ T(F, V) and, if f ∈ F has arity n and t1, …, tn ∈ T(F, V), then f(t1, …, tn) ∈ T(F, V). We write f instead of f() whenever f is a constant. The set of function symbols appearing in a term t ∈ T(F, V) is denoted by Fun(t), and the set of variables occurring in t is denoted by Var(t). Terms without variables are called ground terms. The set of all ground terms is denoted by T(F). For t ∈ T(F, V) we define root(t) by: root(t) = t if t ∈ V, and root(t) = f if t = f(t1, …, tn). A substitution σ is a mapping from V to T(F, V) such that {x ∈ V | σ(x) ≠ x} is finite. This set is called the domain of σ and will be denoted by Dom(σ). Occasionally we present a substitution σ as {x ↦ σ(x) | x ∈ Dom(σ)}. The substitution with empty domain will be denoted by ε. Substitutions are extended to morphisms from T(F, V) to T(F, V), i.e., σ(f(t1, …, tn)) = f(σ(t1), …, σ(tn)) for every n-ary function symbol f and terms t1, …, tn. We call σ(t) an instance of t. We also write tσ instead of σ(t). In order to describe subterm occurrences of a term, we introduce the notationally convenient notion "context" instead of the more precise notion "position" (cf. [16]). Let □ be a special constant symbol. A context C[, …, ] is a term in T(F ∪ {□}, V).
If C[, …, ] is a context with n occurrences of □ and t1, …, tn are terms, then C[t1, …, tn] is the result of replacing from left to right the occurrences of □ with t1, …, tn. A context containing precisely one occurrence of □ is denoted by C[ ]. A term t is a subterm of a term s if there exists a context C[ ] such that s = C[t]. A subterm t of s is proper if s ≠ t. By abuse of notation we write T(F, V) for T(F ∪ {□}, V), interpreting □ as a special constant which is always available but used only for the aforementioned purpose. Let → be a binary relation on terms, i.e., → ⊆ T(F, V) × T(F, V). The reflexive transitive closure of → is denoted by →*. If s →* t, we say that s reduces to t and we call t a reduct of s. We write s ← t if t → s; likewise s ←* t. The transitive closure of → is denoted by →+, and ↔ denotes the symmetric closure of → (i.e., ↔ = → ∪ ←). The reflexive transitive closure of ↔ is called conversion and denoted by ↔*. If s ↔* t, then s and t are convertible. Two terms t1, t2 are joinable, denoted by t1 ↓ t2, if there exists a term t3 such that t1 →* t3 ←* t2. Such a term t3 is called a common reduct of t1 and t2. The relation ↓ is called joinability. A term s is a normal form w.r.t. → if there is no term t such that s → t. A term s has a normal form if s →* t for some normal form t. The set of all normal forms of → is denoted by NF(→). The relation → is normalizing if every term has a normal form; it is terminating if there is no infinite reduction sequence t1 → t2 → t3 → ⋯. The relation → is confluent if for all terms s, t1, t2 with t1 ←* s →* t2 we have t1 ↓ t2. It is well-known that → is confluent if and only if every pair of convertible terms is joinable. The relation → is locally confluent if for all terms s, t1, t2 with t1 ← s → t2 we have t1 ↓ t2. If → is confluent and terminating, it is called complete or convergent. The famous Newman's Lemma states that termination and local confluence of → imply confluence.
A term rewriting system (TRS for short) is a pair (F, R) consisting of a signature F and a set R ⊆ T(F, V) × T(F, V) of rewrite rules or reduction rules. Every rewrite rule (l, r) must satisfy the following two constraints: (i) the left-hand side l is not a variable, and (ii) variables occurring in the right-hand side r also occur in l. Rewrite rules (l, r) will be denoted by l → r. The rewrite rules of a TRS (F, R) define a rewrite relation →R on T(F, V) as follows: s →R t if there exist a rewrite rule l → r in R, a substitution σ, and a context C[ ] such that s = C[lσ] and t = C[rσ]. We call s →R t a rewrite step or reduction step. An instance of a left-hand side of a rewrite rule is a redex (reducible expression). A TRS (F, R) has one of the above properties (e.g. termination) if its rewrite relation has the respective property. Moreover, it is called finite if the sets F and R are finite. A TRS (F, R) is called finitely branching if for every rule l → r ∈ R there are only finitely many different rules in R with the same left-hand side l. We often simply write R instead of (F, R) if there is no ambiguity about the underlying signature F. A rewrite rule l → r is left-linear if l does not contain multiple occurrences of the same variable. A left-linear TRS only contains left-linear rewrite rules. A rewrite rule l → r is collapsing if r is a variable, and l → r is duplicating if r contains more occurrences of some variable than l. A non-collapsing (non-duplicating) TRS does not contain collapsing (duplicating) rules. A reduction step is called duplicating if the rewrite rule applied is duplicating. A partial ordering (A, >) is a pair consisting of a set A and a binary irreflexive and transitive relation > on A. A partial ordering is called well-founded if there are no infinite sequences a1 > a2 > a3 > ⋯ of elements from A. We also need the notion of multiset ordering. A multiset is a collection in which elements are allowed to occur more than once.
If A is a set, then the set of all finite multisets over A is denoted by M(A). The multiset extension of a partial ordering (A, >) is the partial ordering (M(A), >mul) defined as follows: M1 >mul M2 if M2 = (M1 − X) ⊎ Y for some multisets X, Y ∈ M(A) that satisfy (i) ∅ ≠ X ⊆ M1 and (ii) for all y ∈ Y there exists an x ∈ X such that x > y. Dershowitz and Manna proved in [4] that the multiset extension of a well-founded ordering is a well-founded ordering. A rewrite ordering is a partial ordering (T(F, V), >) which is closed under contexts (i.e., if s > t, then C[s] > C[t] for all contexts C[ ]) and closed under substitutions (i.e., if s > t, then sσ > tσ for all substitutions σ). A simplification ordering > is a rewrite ordering possessing the subterm property, i.e., C[t] > t for all contexts C[ ] ≠ □.
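The Dershowitz-Manna definition above can be coded directly in its standard equivalent elementwise form: M1 >mul M2 iff the multisets differ and every element of M2 − M1 is dominated by some element of M1 − M2 (with X = M1 − M2 and Y = M2 − M1). A small sketch, with function names of my own choosing:

```python
# Sketch of the multiset extension >mul of a strict order `gt`, using the
# elementwise characterization equivalent to the (M1 - X) + Y definition.
from collections import Counter

def mul_ext(gt, m1, m2):
    """True iff m1 >mul m2 for the multiset extension of `gt`."""
    d1 = Counter(m1) - Counter(m2)   # multiset difference m1 - m2
    d2 = Counter(m2) - Counter(m1)   # multiset difference m2 - m1
    if not d1 and not d2:            # m1 and m2 are equal as multisets
        return False
    return all(any(gt(x, y) for x in d1) for y in d2)

gt = lambda x, y: x > y              # base order: the usual order on integers
assert mul_ext(gt, [5], [4, 4, 4])       # {5} >mul {4,4,4}
assert mul_ext(gt, [3, 2], [3, 1, 1])    # a 2 replaced by smaller elements
assert not mul_ext(gt, [3, 1], [3, 2])   # 2 is not dominated by 1
assert not mul_ext(gt, [1, 2], [2, 1])   # equal multisets are incomparable
```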
Definition 2.1 A TRS (F, R) is called simplifying if there exists a simplification ordering > such that →R ⊆ >. If > is also well-founded, then (F, R) is called simply terminating.

Evidently, a simply terminating TRS is both simplifying and terminating. The converse is not true (see Example 2.4). In the recent paper of Middeldorp and Gramlich [18] it is shown that simple termination is an undecidable property, even for one-rule systems. The next lemma states useful characterizations of the notions "simplifying" and "simply terminating" (see [14, 29]).
Definition 2.2 Let F be a signature. The TRS Emb(F) consists of all rewrite rules f(x1, …, xn) → xj where f ∈ F is a function symbol of arity n ≥ 1 and j ∈ {1, …, n}.
Lemma 2.3 Let (F, R) be a TRS.
1. (F, R) is simplifying if and only if →+R∪Emb(F) is irreflexive.
2. (F, R) is simply terminating if and only if (F, R ∪ Emb(F)) is terminating.

Example 2.4 Let F = {a, g} ∪ {fj | j ∈ ℕ} and let R = {fj(a) → fj+1(g(a)) | j ∈ ℕ}. Then we have Emb(F) = {g(x) → x} ∪ {fj(x) → x | j ∈ ℕ}. Clearly, R is terminating. Furthermore, →+R∪Emb(F) is irreflexive, that is, R is simplifying. But R is not simply terminating because there is the infinite reduction sequence

f1(a) →R∪Emb(F) f2(g(a)) →R∪Emb(F) f2(a) →R∪Emb(F) f3(g(a)) →R∪Emb(F) ⋯
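The infinite sequence of Example 2.4 can be stepped through mechanically. A small sketch (the ad hoc term encoding is mine, not the paper's): a term fj(t) is encoded as ("f", j, t), and the R-step fj(a) → fj+1(g(a)) alternates with the embedding step g(x) → x applied below fj, so the index j grows without bound.

```python
# Sketch of the infinite R ∪ Emb(F) derivation of Example 2.4; the encoding
# ("f", j, t) for f_j(t) is an assumption of this sketch.

def step(term):
    """One rewrite step of R ∪ Emb(F) at the position used in the text."""
    if term[0] == "f" and term[2] == "a":           # R: f_j(a) -> f_{j+1}(g(a))
        return ("f", term[1] + 1, ("g", "a"))
    if term[0] == "f" and term[2] == ("g", "a"):    # Emb: g(x) -> x, below f_j
        return ("f", term[1], "a")
    raise ValueError("no step defined in this sketch")

t = ("f", 1, "a")
seen = []
for _ in range(6):
    seen.append(t)
    t = step(t)
assert seen[:4] == [("f", 1, "a"), ("f", 2, ("g", "a")),
                    ("f", 2, "a"), ("f", 3, ("g", "a"))]
assert t[1] > seen[0][1]   # the index grows strictly: the derivation never ends
```

Because each round trip strictly increases j, no term ever repeats, which is also why →+R∪Emb(F) stays irreflexive even though the derivation is infinite.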
We next present a class of TRSs for which the notions "simplifying" and "simply terminating" coincide.
Definition 2.5 Let (F, R) be a TRS. Set

F′ = ⋃_{l→r∈R} (Fun(r) \ Fun(l)),

i.e., F′ consists of all those function symbols which occur in the right-hand side r but not in the left-hand side l of some rule l → r ∈ R. We say that (F, R) introduces only finitely many function symbols if the set F′ is finite. We stress that if the signature F or the set of rules R of a TRS (F, R) is finite, then (F, R) belongs to the class of TRSs which introduce only finitely many function symbols.
Proposition 2.6 Let (F, R) be a TRS introducing only finitely many function symbols. Then (F, R) is simply terminating if and only if (F, R) is simplifying.

Proof: This follows from Kruskal's Tree Theorem (see e.g. [6, 1]) because in every infinite reduction sequence only finitely many different function symbols and variables can occur. □
3 On the Modularity of Termination

3.1 Basic Notions of the Disjoint Union of TRSs

First we give a brief overview of the basic notions of disjoint unions of TRSs (see [27, 16]). Let (F1, R1) and (F2, R2) be two disjoint TRSs, i.e., F1 ∩ F2 = ∅. The TRS (F1 ⊎ F2, R1 ⊎ R2) is called their disjoint union; we will simply write R1 ⊎ R2. Other authors use the notation R1 ⊕ R2 and call it the direct sum of R1 and R2. In the sequel let t ∈ T(F1 ⊎ F2, V). Let t = C[t1, …, tn] with C[, …, ] ≠ □. We write t = C⟦t1, …, tn⟧ if C[, …, ] ∈ T(Fd, V) and root(t1), …, root(tn) ∈ Fd̄ for some d, d̄ ∈ {1, 2} with d ≠ d̄. The tj are the principal subterms of t. Moreover, we define for all t:

rank(t) = 1, if t ∈ T(F1, V) ∪ T(F2, V);
rank(t) = 1 + max{rank(tj) | 1 ≤ j ≤ n}, if t = C⟦t1, …, tn⟧.
The multiset S(t) of special subterms of a term t is defined by S(t) = ⋃_{j≥1} S^j(t), where S^1(t) = [t] (in order to distinguish multisets from sets, we use brackets instead of braces for the former) and

S^{j+1}(t) = [], if rank(t) = 1;
S^{j+1}(t) = S^j(t1) ∪ ⋯ ∪ S^j(tn), if t = C⟦t1, …, tn⟧.

Furthermore, we define for d ∈ {1, 2}: Sd(t) = [s | s ∈ S(t), root(s) ∈ Fd]. The topmost homogeneous part of t, denoted by top(t), is obtained from t by replacing all principal subterms with □, i.e.,

top(t) = t, if rank(t) = 1;
top(t) = C[, …, ], if t = C⟦t1, …, tn⟧.

We further define, for a term s with rank(s) = k and root(s) ∈ Fd, the lists L^d_n(s) and L^d̄_n(s) (⟨…⟩ denotes Cons-lists; we abbreviate L^d_n(s) by L^d_n):

L^d_n = Sort([C[L^d̄_{n−1}, …, L^d̄_{n−1}] | t = C⟦t1, …, tm⟧ ∈ Sd(s), rank(t) = n]);
L^d̄_1 = ⟨⟩, and for n > 1: L^d̄_n = ⟨L^d_{n−1}, L^d̄_{n−1}⟩ if n + 1 ≤ k, and ⟨⟩ otherwise;

where Sort({t1, …, tn}) = ⟨t_π(1), …, t_π(n)⟩ such that t_π(j) ⪯ t_π(j+1) for 1 ≤ j < n, for some fixed total ordering ⪯ on terms. Note that the sets to be sorted are finite and thus the lists L^d_n and L^d̄_n are well-defined. As we shall see later, the sorting process is necessary in order to cope with non-left-linear rules: the succession of the listed elements of a set has to be uniquely determined. Again, we suppress the argument s whenever it is clear from the context.
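The layered structure behind rank and top can be made concrete. A minimal sketch (the dictionary `sig_of`, the marker "HOLE" for □, and all function names are my assumptions, not the paper's notation; ground terms only): `sig_of` assigns each function symbol to its system, principal subterms are the maximal alien-rooted subterms, and top replaces them by □.

```python
# Sketch of rank(t) and top(t) for a term over a disjoint union; the
# signature split is passed in explicitly and "HOLE" stands for the hole □.

def root(t):
    return t[0] if isinstance(t, tuple) else t

def principal_subterms(t, sig_of):
    """Maximal proper subterms of t whose root lies in the other system."""
    d = sig_of[root(t)]
    out = []
    def walk(s):
        if sig_of[root(s)] != d:
            out.append(s)                  # an alien (principal) subterm
        elif isinstance(s, tuple):
            for a in s[1:]:
                walk(a)
    if isinstance(t, tuple):
        for a in t[1:]:
            walk(a)
    return out

def rank(t, sig_of):
    ps = principal_subterms(t, sig_of)
    return 1 if not ps else 1 + max(rank(p, sig_of) for p in ps)

def top(t, sig_of):
    """Topmost homogeneous part: every principal subterm replaced by □."""
    d = sig_of[root(t)]
    def walk(s):
        if sig_of[root(s)] != d:
            return "HOLE"
        if not isinstance(s, tuple):
            return s
        return (s[0],) + tuple(walk(a) for a in s[1:])
    return walk(t)

sig_of = {"F": 1, "0": 1, "1": 1, "g": 2}   # the signature split of Example 1.1
t = ("F", "0", "1", ("g", "0", "1"))        # the term F(0, 1, g(0, 1))
assert rank(t, sig_of) == 3                 # layers: F-layer, g-layer, 0/1-layer
assert top(t, sig_of) == ("F", "0", "1", "HOLE")
```

Note that the 0 and 1 occurring below g belong to F1 again, which is why the mixed term F(0, 1, g(0, 1)) has rank 3 rather than 2.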
Example 3.8 Let R1 = {F(x, x, B2) → x, G(x) → x, H(x) → A, A → Bj | j ∈ ℕ} and let R2 = {f(x) → x}. For the term s = F(f(G(f(B1))), f(A), f(H(a))) we have (note d = 1):

L^1_1 = ⟨A, B1⟩                    L^2_1 = ⟨⟩
L^1_2 = ⟨H(L^2_1)⟩                 L^2_2 = ⟨L^1_1, L^2_1⟩
L^1_3 = ⟨G(L^2_2)⟩                 L^2_3 = ⟨L^1_2, L^2_2⟩
L^1_4 = ⟨⟩                         L^2_4 = ⟨L^1_3, L^2_3⟩
L^1_5 = ⟨F(L^2_4, L^2_4, L^2_4)⟩   L^2_5 = ⟨⟩

where ⪯ is some total ordering on T(F ⊎ {Cons}, {z}) such that Sort({A, B1}) = ⟨A, B1⟩.
Lemma 3.9 Let s ∈ T(F, {z}) with rank(s) = k and root(s) ∈ Fd. Then we have for all j, i with 1 ≤ j < i < k:

L^d̄_i →+CE L^d_j and L^d̄_i →+CE L^d̄_j.
Proof: Straightforward. □

With the aid of Definition 3.7 we can define the required transformation function Φd.
Definition 3.10 Let s ∈ T(F, {z}) with root(s) ∈ Fd. Define Φ^s_d : {t ∈ T(F1 ⊎ F2, {z}) | δ(s, t) is defined} → T(Fd ⊎ {Cons}, {z}) by

Φ^s_d(t) = t, if t ∈ T(Fd, {z});
Φ^s_d(t) = ⟨⟩, if t ∈ T(Fd̄, {z});
Φ^s_d(t) = C[Φ^s_d(t1), …, Φ^s_d(tn)], if root(t) ∈ Fd and t = C⟦t1, …, tn⟧;
Φ^s_d(t) = L^d̄_{δ(s,t)}(s), if root(t) ∈ Fd̄ and t = C⟦t1, …, tn⟧.

Notice that Φ^s_d is well-defined (cf. Lemma 3.6). Again we will suppress the superscript s in Φ^s_d if it is clear from the context.
Lemma 3.11 Let s ∈ T(F, {z}) with root(s) ∈ Fd. If u ∈ Sd(s), then L^d_{rank(u)} →+CE Φd(u).

Proof: Let rank(u) = m. Clearly, m = δ(u) ≤ rank(s). We will show the lemma for m > 1; the case m = 1 is obtained by similar arguments. Since m > 1, we may write u = C′⟦u1, …, un⟧. Clearly, Φd(u) = C′[Φd(u1), …, Φd(un)] = C′[L^d̄_{δ(u1)}, …, L^d̄_{δ(un)}]. Moreover, L^d_{rank(u)} = L^d_m = Sort([C[L^d̄_{m−1}, …, L^d̄_{m−1}] | t = C⟦t1, …, tn⟧ ∈ Sd(s), rank(t) = m]). Since u ∈ Sd(s) and rank(u) = m, it follows that C′[L^d̄_{m−1}, …, L^d̄_{m−1}] occurs in L^d_m. Therefore, L^d_m →+CE C′[L^d̄_{m−1}, …, L^d̄_{m−1}]. Note that m − 1 ≥ δ(uj) for all j ∈ {1, …, n} because m > rank(uj) = δ(uj). Thus C′[L^d̄_{m−1}, …, L^d̄_{m−1}] →*CE C′[L^d̄_{δ(u1)}, …, L^d̄_{δ(un)}] according to Lemma 3.9. All in all, L^d_{rank(u)} →+CE C′[L^d̄_{m−1}, …, L^d̄_{m−1}] →*CE C′[L^d̄_{δ(u1)}, …, L^d̄_{δ(un)}] = Φd(u). □
Example 3.12 (Example 3.8 continued) Consider the reduction sequence

D : s = s1 →R1 s2 →R2 s3 →R1 s4 →R1 s5 →R1 s6 →R2 s7 →R1 s8 →R2 s9

where s, R1, and R2 are as in Example 3.8 and

s2 = F(f(G(f(B1))), f(A), f(A))
s3 = F(f(G(B1)), f(A), f(A))
s4 = F(f(B1), f(A), f(A))
s5 = F(f(B1), f(B1), f(A))
s6 = F(f(B1), f(B1), f(B2))
s7 = F(f(B1), f(B1), B2)
s8 = f(B1)
s9 = B1

Applying Φ1 to D we obtain

Φ1(D) : Φ1(s) = Φ1(s1) →+CE Φ1(s2) = Φ1(s3) →+CE Φ1(s4) = Φ1(s5) = Φ1(s6) →+R1⊎CE Φ1(s7) →R1 Φ1(s8) →+R1⊎CE Φ1(s9)

where Φ1(s) = F(L^2_4, L^2_2, L^2_3) and

Φ1(s2) = F(L^2_4, L^2_2, L^2_2), Φ1(s4) = F(L^2_2, L^2_2, L^2_2), Φ1(s7) = F(L^2_2, L^2_2, B2), Φ1(s8) = L^2_2, Φ1(s9) = B1.
The next theorem shows that an R1 ⊎ R2 derivation starting from some term s with root(s) ∈ Fd can always be transformed into an Rd ⊎ CE derivation starting from Φd(s). In fact, it expresses a more general statement.
Theorem 3.13 Let s, t ∈ T(F, {z}) such that root(s) ∈ Fd and δ(t) is defined. Then for all t′ ∈ T(F, {z}) such that t →*R1⊎R2 t′ it follows that Φd(t) →*Rd⊎CE Φd(t′). Moreover, t →o_Rd t′ implies Φd(t) →Rd Φd(t′).

Proof: We prove the theorem by (finite) induction on δ(t), where rank(t) ≤ δ(t) ≤ rank(s). The base case δ(t) = 1 is straightforward because δ(t) = 1 implies rank(t) = 1. So let δ(t) = k > 1 and suppose that the theorem holds for all v ∈ T(F, {z}) with δ(v) < k. We prove the induction step by induction on the length l of the derivation t →*R1⊎R2 t′. The case l = 0 is trivial, so consider t →^{l−1}_R1⊎R2 t″ →R1⊎R2 t′. By the inner induction hypothesis (on l), Φd(t) →*Rd⊎CE Φd(t″). It remains to be shown that Φd(t″) →*Rd⊎CE Φd(t′). The case rank(t″) = 1 is straightforward, so let rank(t″) > 1. We distinguish between the following cases:

(i) root(t″) ∈ Fd.

If t″ →o_Rd t′, then we may write t″ = C⟦t1, …, tn⟧ and t′ = C′⟨⟨t_{i1}, …, t_{im}⟩⟩ for some contexts C[, …, ] and C′[, …, ], indices i1, …, im ∈ {1, …, n}, and terms t1, …, tn. Applying Φd, we obtain Φd(t″) = C[Φd(t1), …, Φd(tn)] and Φd(t′) = C′⟨⟨Φd(t_{i1}), …, Φd(t_{im})⟩⟩. Thus Φd(t″) →Rd Φd(t′) by the same rule because ti = tj implies Φd(ti) = Φd(tj) (i.e., even non-left-linear rules do not cause trouble). If t″ →i_R1⊎R2 t′, then we have t″ = C⟦t1, …, tj, …, tn⟧ and t′ = C[t1, …, t′j, …, tn], where tj →R1⊎R2 t′j. Since tj ∈ S(t″) and tj ≠ t″, it follows from Lemma 3.6 that δ(tj) < δ(t″) and thus δ(tj) < δ(t) (recall that δ(t″) ≤ δ(t) by Lemma 3.5). The outer induction hypothesis yields Φd(tj) →*Rd⊎CE Φd(t′j). Therefore, Φd(t″) = C[Φd(t1), …, Φd(tj), …, Φd(tn)] →*Rd⊎CE C[Φd(t1), …, Φd(t′j), …, Φd(tn)]. It remains to show the equality Φd(t′) = C[Φd(t1), …, Φd(t′j), …, Φd(tn)]. It is easy to verify its validity if root(t′j) ∈ Fd̄ or t′j ∈ T(Fd, V). So suppose root(t′j) ∈ Fd and t′j = C′⟦u1, …, um⟧. Set C″[, …, ] = C[, …, C′[, …, ], …, ]. It follows that

Φd(t′) = C″[Φd(t1), …, Φd(tj−1), Φd(u1), …, Φd(um), Φd(tj+1), …, Φd(tn)]
       = C[Φd(t1), …, Φd(tj−1), C′[Φd(u1), …, Φd(um)], Φd(tj+1), …, Φd(tn)]
       = C[Φd(t1), …, Φd(t′j), …, Φd(tn)].

(ii) root(t″) ∈ Fd̄.

If t′ = z, then the assertion follows easily. If also root(t′) ∈ Fd̄, then by Lemma 3.9, Φd(t″) = L^d̄_{δ(t″)}(s) →*CE L^d̄_{δ(t′)}(s) = Φd(t′) because δ(t′) ≤ δ(t″). If otherwise root(t′) ∈ Fd, then t″ = C⟦t1, …, tn⟧ →Rd̄ t′ where t′ = tj for some j ∈ {1, …, n}. Since δ(t″) ≤ δ(t) ≤ rank(s), there is an s′ ∈ Sd̄(s) with rank(s′) = δ(t″) such that s′ →*R1⊎R2 t″. Now Φd(s′) = L^d̄_{rank(s′)}(s) = L^d̄_{δ(t″)}(s) = Φd(t″). Hence it remains to show Φd(s′) →*Rd⊎CE Φd(t′). Since s′ →*R1⊎R2 t′, root(t′) ∈ Fd, and root(s′) ∈ Fd̄, there exists a u ∈ Sd(s′) ⊆ Sd(s) such that u →*R1⊎R2 t′. Obviously, δ(u) = rank(u) < rank(s′) = δ(t″) ≤ δ(t). Consequently, it follows from the outer induction hypothesis that Φd(u) →*Rd⊎CE Φd(t′). Eventually, we have (cf. Lemmata 3.9 and 3.11) Φd(t″) = Φd(s′) = L^d̄_{rank(s′)}(s) →+CE L^d_{rank(u)}(s) →+CE Φd(u) →*Rd⊎CE Φd(t′). □

The above theorem paves the way for our main result, Theorem 3.16. But first we need another definition. The notion defined in the next definition was called "termination preserving under non-deterministic collapses" in [7]. We will use a shorter phrase.
Definition 3.14 A TRS R is called CE-terminating if the collapsing extended term rewriting system R ⊎ {Cons(x, y) → x, Cons(x, y) → y} is terminating, where Cons is some binary function symbol that does not occur in the signature of R.

Clearly, a CE-terminating TRS is terminating. The next lemma (which will be used later) acquaints the reader with this notion.
Lemma 3.15 If a TRS (F, R) is CE-terminating, then the same is true for the system (F ⊎ {Cons}, R ⊎ {Cons(x, y) → x, Cons(x, y) → y}).

Proof: Let Cons and Cons′ be two distinct function symbols that do not occur in F. Let CE denote the TRS ({Cons}, {Cons(x, y) → x, Cons(x, y) → y}) and let C′E denote the TRS ({Cons′}, {Cons′(x, y) → x, Cons′(x, y) → y}). Since R is CE-terminating, the TRS R ⊎ CE is terminating. Suppose that there is an infinite rewrite derivation

D : s1 →(R⊎CE)⊎C′E s2 →(R⊎CE)⊎C′E s3 →(R⊎CE)⊎C′E ⋯

In every sj replace each Cons′(t1, t2) with Cons(t1, t2) and denote the resulting term by s′j. Note that Cons′ does not occur in any rule of R ⊎ CE. Then

D′ : s′1 →R⊎CE s′2 →R⊎CE s′3 →R⊎CE ⋯

is an infinite rewrite derivation of terms s′j ∈ T(F ∪ {Cons}, V), where s′j is rewritten to s′j+1 by Cons(x, y) → x (resp. Cons(x, y) → y) if sj is reduced to sj+1 using the rule Cons′(x, y) → x (resp. Cons′(x, y) → y). This contradicts the termination of R ⊎ CE. □
Theorem 3.16 Let R1 and R2 be two disjoint terminating TRSs such that their disjoint union R1 ⊎ R2 is non-terminating. Then R1 is not CE-terminating and R2 is collapsing, or vice versa.

Proof: Let

D : s = s1 → s2 → s3 → ⋯

be an infinite R1 ⊎ R2-derivation of minimal rank, i.e., any R1 ⊎ R2-derivation of smaller rank is finite. Let rank(D) = k. Hence rank(sj) = rank(D) for all indices j. We have root(s1) ∈ Fd for some d ∈ {1, 2}. It follows that root(sj) ∈ Fd for every j. In particular, there is no reduction step which is destructive at level 1. From the minimality assumption on rank(D) we conclude that there is no index l ∈ ℕ such that the subderivation of D beginning at sl consists only of inner rewrite steps. Thus there must be infinitely many →o_Rd steps in D. W.l.o.g. we may assume that z is the only variable occurring in D. Now we apply the function Φd to D and obtain an Rd ⊎ CE rewrite derivation (note that Φd is well-defined on D)

Φd(D) : Φd(s) = Φd(s1) →*Rd⊎CE Φd(s2) →*Rd⊎CE Φd(s3) →*Rd⊎CE ⋯

By Theorem 3.13 it follows that a reduction step of the form sj →o_Rd sj+1 in D corresponds to a reduction step Φd(sj) →Rd Φd(sj+1) in Φd(D). Since there are infinitely many outer reduction steps in D, the derivation Φd(D) consists of infinitely many reduction steps. Hence Rd is not CE-terminating. Let d̄ ∈ {1, 2} \ {d}. Suppose that Rd̄ is non-collapsing. Then for any sj →R1⊎R2 sj+1 we have:

sj →o_Rd sj+1 implies top(sj) →Rd top(sj+1), and
sj →i_R1⊎R2 sj+1 implies top(sj) = top(sj+1).

Since there are infinitely many outer reduction steps in D, this yields an infinite Rd-derivation, contradicting the termination of Rd. □

This theorem can be paraphrased as follows: if R1 and R2 are disjoint terminating TRSs, then their disjoint union R1 ⊎ R2 is terminating provided that one of the following conditions is satisfied:

1. Both R1 and R2 are CE-terminating.
2. Both R1 and R2 are non-collapsing.
3. One of the systems is CE-terminating and non-collapsing.

Theorem 3.16 is a rather abstract result. How can we check whether or not a TRS is CE-terminating? The next proposition states some sufficient conditions that are easier to check. It is due to Gramlich; the proof can be found in [7].
Definition 3.17 A TRS (F, R) is said to be non-deterministically collapsing if there is a term that can be reduced to two distinct variables. More precisely, there must be a term s ∈ T(F, V) with s = C[x, y] for some context C[◻, ..., ◻] and distinct variables x and y such that s →^+_R x and s →^+_R y.

Proposition 3.18 Let (F, R) be a terminating TRS.
1. If (F, R) is non-duplicating, then it is C_E-terminating.
2. If (F, R) is non-deterministically collapsing, then it is C_E-terminating.
3. If (F, R) is simply terminating, then it is C_E-terminating.

Definition 3.19 Let
A0 = {R | R is C_E-terminating}
A1 = {R | R is non-duplicating and terminating}
A2 = {R | R is simply terminating}
A3 = {R | R is non-deterministically collapsing and terminating}

Proposition 3.18 states that A_j ⊆ A0 for j ∈ {1, 2, 3}. Thus this proposition in conjunction with the next examples shows that we have the situation depicted in Figure 1.
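Condition 1 of Proposition 3.18 is purely syntactic and easy to mechanize. The following sketch is not part of the paper; it uses a hypothetical encoding of terms as nested tuples with variables as strings, and checks the non-duplicating property by comparing variable multiplicities:

```python
from collections import Counter

def variables(term):
    """Multiset of variable occurrences; a variable is a string,
    an application f(t1, ..., tn) is the tuple ('f', t1, ..., tn)."""
    if isinstance(term, str):
        return Counter([term])
    c = Counter()
    for arg in term[1:]:
        c += variables(arg)
    return c

def non_duplicating(rules):
    """True iff no variable occurs more often in a right-hand side
    than in the corresponding left-hand side."""
    for lhs, rhs in rules:
        lv, rv = variables(lhs), variables(rhs)
        if any(rv[v] > lv[v] for v in rv):
            return False
    return True

# h(x, y) -> x and h(x, y) -> y: each rule drops a variable, so non-duplicating
R = [(("h", "x", "y"), "x"), (("h", "x", "y"), "y")]
# f1(x) -> f2(x, x): duplicates x
R6 = [(("f1", "x"), ("f2", "x", "x"))]
```

On the systems of Example 3.20 below, this distinguishes the h-rules (in A1) from R6 (not in A1).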
[Figure 1: Venn diagram — A1, A2, and A3 are mutually overlapping subclasses of A0.]
Example 3.20
R1 = {h(x, y) → x, h(x, y) → y}                          ∈ A1 ∩ A2 ∩ A3
R2 = {a → b}                                             ∈ (A1 ∩ A2) \ A3
R3 = {f(f(x)) → f(g(f(x))), h(x, y) → x, h(x, y) → y}    ∈ (A1 ∩ A3) \ A2
R4 = {f1(x) → f2(x, x), h(x, y) → x, h(x, y) → y}        ∈ (A2 ∩ A3) \ A1
R5 = {f(f(x)) → f(g(f(x)))}                              ∈ A1 \ (A2 ∪ A3)
R6 = {f1(x) → f2(x, x)}                                  ∈ A2 \ (A1 ∪ A3)
R7 = R4 ∪ R5                                             ∈ A3 \ (A1 ∪ A2)
R8 = {f(f(x)) → f(g(f(x))), f1(x) → f2(x, x)}            ∈ A0 \ (A1 ∪ A2 ∪ A3)
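The fact that R5 is not simply terminating can be made concrete: together with the embedding rule g(x) → x, its single rule admits a cycle, so no simplification ordering can orient it. A small sketch, not from the paper, with ground terms encoded as nested tuples:

```python
def apply_r5(t):
    # f(f(x)) -> f(g(f(x))) at the root, if the left-hand side matches
    if isinstance(t, tuple) and t[0] == "f" and isinstance(t[1], tuple) and t[1][0] == "f":
        return ("f", ("g", ("f", t[1][1])))
    return None

def apply_emb_g(t):
    # embedding rule g(x) -> x, applied directly below the root
    if isinstance(t, tuple) and isinstance(t[1], tuple) and t[1][0] == "g":
        return (t[0], t[1][1])
    return None

s = ("f", ("f", "x"))
s1 = apply_r5(s)      # f(g(f(x)))
s2 = apply_emb_g(s1)  # back to f(f(x)): a cycle in R5 extended with embedding
```

R5 alone is terminating; the cycle only arises in R5 ∪ Emb, which is exactly why R5 ∉ A2.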
The combination of Theorem 3.16 and Proposition 3.18 has interesting consequences.
Theorem 3.21
1. Termination is a modular property of non-collapsing TRSs (cf. [24] and Theorem 1.2).
2. C_E-termination is a modular property of TRSs.
3. Termination is a modular property of non-duplicating TRSs (cf. [24] and Theorem 1.2).
4. Termination is a modular property of non-deterministically collapsing TRSs.
5. Simple termination is a modular property of TRSs.
Proof: Let R1 and R2 be disjoint TRSs. We have to show that R = R1 ⊎ R2 has one of the properties 1-5 if and only if R1 and R2 have the respective property. The only-if direction is in all cases easy to prove. It remains to prove the if direction.
1. The combination of two non-collapsing terminating TRSs yields a non-collapsing TRS. Its termination follows from Theorem 3.16.
2. We know from Theorem 3.16 that the disjoint union of two C_E-terminating TRSs R1 and R2 is again terminating. But in addition we have to prove that it is also C_E-terminating. That this is in fact the case can be seen as follows: Let Cons be a binary operator that does not occur in F1 ⊎ F2. Since R2 is C_E-terminating, the same holds, by Lemma 3.15, for R2 ⊎ {Cons(x, y) → x, Cons(x, y) → y}. Hence we may conclude by Theorem 3.16 that R1 ⊎ (R2 ⊎ {Cons(x, y) → x, Cons(x, y) → y}) = (R1 ⊎ R2) ⊎ {Cons(x, y) → x, Cons(x, y) → y} is terminating. But this amounts to C_E-termination of R1 ⊎ R2.
3. Clearly, the union of two non-duplicating terminating TRSs is again non-duplicating. That it is also terminating follows from Theorem 3.16 in conjunction with Proposition 3.18.
4. The combination of a non-deterministically collapsing TRS and an arbitrary other TRS yields a non-deterministically collapsing TRS. Hence the assertion follows as in 3.
5. Since R_i, i ∈ {1, 2}, is simply terminating, the same holds for R_i ∪ Emb(F_i). By Proposition 3.18, R_i ∪ Emb(F_i) is C_E-terminating. The application of Theorem 3.16 to R1 ∪ Emb(F1) and R2 ∪ Emb(F2) yields the termination of (R1 ∪ Emb(F1)) ⊎ (R2 ∪ Emb(F2)), or equivalently, the simple termination of R1 ⊎ R2. □

We emphasize that it suffices to show that each terminating constituent TRS is either simply terminating, non-duplicating, non-deterministically collapsing, or C_E-terminating to infer that their combination is again terminating. Finally, the next corollary states sufficient conditions for the preservation of termination under disjoint union.
Corollary 3.22 Let R1 and R2 be disjoint terminating TRSs. Their union R1 ⊎ R2 is terminating provided that:
1. One of the systems is non-duplicating and non-collapsing (cf. [15] and Theorem 1.2).
2. One of the systems is simply terminating and non-collapsing.
Proof: Immediate consequence of Theorem 3.16 and Proposition 3.18. □
4 Constructor-Sharing Term Rewriting Systems

4.1 Basic Notions of TRSs with Shared Constructors

In this section we weaken the disjointness requirement, i.e., the TRSs are allowed to share special function symbols, so-called constructors. Constructors are function symbols that do not occur at the root position of the left-hand side of any rewrite rule; the others are called defined symbols. The union (F, R) = (F1 ∪ F2, R1 ∪ R2) of two TRSs (F1, R1) and (F2, R2), where

C = F1 ∩ F2 ⊆ (F1 ∪ F2) \ {root(l) | l → r ∈ R1 ∪ R2},

is called the combined TRS of (F1, R1) and (F2, R2) with shared constructors C. In this case we define D1 = F1 \ C, D2 = F2 \ C, and D = D1 ⊎ D2. To be able to distinguish between symbols from different sets, we use capitals F, G, ... for function symbols from D1, small letters f, g, ... for those from D2, and small capitals C, D, ... for shared constructors. As usual, x, y, z, x1, y1, z1, ... will denote variables. To emphasize that F = D ⊎ C, we write T(D, C, V) instead of T(F, V) at the appropriate places. In this section → = →_R = →_{R1 ∪ R2}.
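The split of F1 ∪ F2 into defined symbols and shared constructors can be computed directly from the rules. A sketch, not from the paper, using the same hypothetical tuple encoding of terms (variables as strings, constants as one-element tuples):

```python
def funsyms(t):
    """Function symbols occurring in a term; variables are strings."""
    if isinstance(t, str):
        return set()
    syms = {t[0]}
    for arg in t[1:]:
        syms |= funsyms(arg)
    return syms

def signature(rules):
    out = set()
    for l, r in rules:
        out |= funsyms(l) | funsyms(r)
    return out

def split(R1, R2):
    """Return (D1, D2, C); raise if a shared symbol is defined,
    i.e., if the systems are not constructor-sharing."""
    F1, F2 = signature(R1), signature(R2)
    defined = {l[0] for l, _ in R1} | {l[0] for l, _ in R2}
    C = F1 & F2
    if C & defined:
        raise ValueError("shared symbol is defined in some rule")
    return F1 - C, F2 - C, C

# a fragment of Example 4.6 below (only j = 1), in this encoding:
R1 = [(("F1", ("C1",), "x"), ("F2", "x", "x"))]
R2 = [(("a",), ("C1",))]
D1, D2, C = split(R1, R2)
```

Here split reports C1 as the only shared constructor, with F1, F2 defined on the black side and a on the white side.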
Definition 4.1 Let s ∈ T(D, C, V). Again we color each function symbol in s. Function symbols from D1 are colored black, those from D2 white, and constructors as well as variables are transparent. If s does not contain white (black) function symbols, we speak of a black (white) term. s is said to be transparent if it only contains constructors and variables. Consequently, a transparent term may be regarded as black or white; this is convenient for later purposes. s is called top black (top white, top transparent) if root(s) is black (white, transparent). Several definitions and considerations are symmetrical in the colors black and white. Therefore, we state the respective definition or consideration only for the color black (the same applies mutatis mutandis to the color white).
Definition 4.2 Let s be a top black term such that s = C^b[s1, ..., sn] for some black context C^b[◻, ..., ◻] ≠ ◻ and root(sj) ∈ D2 for j ∈ {1, ..., n}. We denote this by s = C^b[[s1, ..., sn]]. In this case we define the multiset S^b(s) of all black principal subterms of s to be S^b(s) = [s] and the multiset of all white principal subterms of s to be S^w(s) = [s1, ..., sn]. The topmost black homogeneous part of s, denoted by top^b(s), is obtained from s by replacing all white principal subterms with ◻. The topmost white homogeneous part of s is top^w(s) = ◻.

Now let s be a top transparent term. It can be written simultaneously as

s = C^t[s1, ..., sl]   where C^t[◻, ..., ◻] ∈ T(C, V) and root(sj) ∈ D1 ⊎ D2,
s = C^b[t1, ..., tm]   where C^b[◻, ..., ◻] ∈ T(D1, C, V) and root(tj) ∈ D2,
s = C^w[u1, ..., un]   where C^w[◻, ..., ◻] ∈ T(D2, C, V) and root(uj) ∈ D1.

From now on this will be denoted by

s = C^t[[s1, ..., sl]] = C^b[[t1, ..., tm]] = C^w[[u1, ..., un]].

In this situation, we define the multiset S^b(s) (S^w(s)) of black (white) principal subterms of s to be S^b(s) = [u1, ..., un] (S^w(s) = [t1, ..., tm]). The topmost black (white) homogeneous part of s, denoted by top^b(s) (top^w(s)), is obtained from s by replacing all white (black) principal subterms with ◻.
Example 4.3 Let D1 = {F, A}, D2 = {g, b}, and C = {C}. For s = C(F(b), g(A)) we have

s = C^t[[F(b), g(A)]]   with C^t[◻, ..., ◻] = C(◻, ◻),
s = C^b[[b, g(A)]]      with C^b[◻, ..., ◻] = C(F(◻), ◻),
s = C^w[[F(b), A]]      with C^w[◻, ..., ◻] = C(◻, g(◻)),

as well as S^b(s) = [F(b), A] and S^w(s) = [b, g(A)].
Definition 4.4 Let s be a top black term. Let s = C^b[[s1, ..., sn]] and s →_R t by an application of a rewrite rule of R = R1 ∪ R2. We write s →^i_R t if the rule is applied in one of the sj, and we write s →^o_R t otherwise. The relation →^i_R is called inner reduction and →^o_R is called outer reduction. Now let s be a top transparent term, i.e., s = C^t[[s1, ..., sn]] with C^t[◻, ..., ◻] ≠ ◻. Let s →_R t, i.e., t = C^t[s1, ..., s_{j-1}, tj, s_{j+1}, ..., sn] for some j ∈ {1, ..., n}. Then we write s →^i_R t if sj →^i_R tj and s →^o_R t if sj →^o_R tj. In order to indicate which TRS the applied rule stems from, we also use the notation s →^o_{R1} t, s →^o_{R2} t, s →^i_{R1} t, and s →^i_{R2} t.
Definition 4.5 Let s be a top black term. We define

rank(s) = 1                               if s ∈ T(D1, C, V),
rank(s) = 1 + max{rank(sj) | 1 ≤ j ≤ n}   if s = C^b[[s1, ..., sn]].

Now let s be a top transparent term. Then we define

rank(s) = 0                               if s ∈ T(C, V),
rank(s) = max{rank(tj) | 1 ≤ j ≤ m}       if s = C^t[[t1, ..., tm]].

As for disjoint unions, we have s → t ⟹ rank(s) ≥ rank(t). We will also use special notations for "degenerate" cases of s = C^b[[t1, ..., tm]] and s = C^w[[u1, ..., un]]. These are defined in analogy to those used for the disjoint union case.
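One way to operationalize Definition 4.5 is to count color alternations along paths: every maximal chain of same-colored (or transparent) symbols forms one homogeneous layer. The sketch below is not from the paper and uses the hypothetical tuple encoding, with D1 and D2 passed explicitly; shared constructors and variables inherit the color of the enclosing layer:

```python
def rank(t, D1, D2, enclosing=None):
    """Number of homogeneous layers along a maximal path (Definition 4.5, sketched)."""
    if isinstance(t, str):                       # variable: contributes no layer
        return 0
    head = t[0]
    if head in D1:
        color, bump = "b", enclosing != "b"      # entering a new black layer?
    elif head in D2:
        color, bump = "w", enclosing != "w"      # entering a new white layer?
    else:                                        # shared constructor: transparent
        color, bump = enclosing, False
    below = max((rank(a, D1, D2, color) for a in t[1:]), default=0)
    return (1 if bump else 0) + below

# Example 4.3: D1 = {F, A}, D2 = {g, b}, C = {C}; s = C(F(b), g(A))
D1 = {"F", "A"}
D2 = {"g", "b"}
s = ("C", ("F", ("b",)), ("g", ("A",)))
```

For Example 4.3 this yields rank(s) = 2: the top transparent layer is free, and each branch stacks one black layer on one white layer (or vice versa).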
4.2 Modularity of Termination of Constructor-Sharing TRSs

We have seen that, in contrast to termination, simple termination is modular for disjoint TRSs. But does this result carry over to constructor-sharing systems? As already mentioned, Kurihara and Ohuchi showed in [14] that the simplifying property is modular for TRSs with shared constructors. This entails the modularity of simple termination for finite constructor-sharing TRSs. Also, Gramlich [7] extended his abstract result to finitely branching TRSs with shared constructors. However, the next counterexamples show that Theorem 3.16 and some corollaries thereof do not extend to constructor-sharing TRSs.
Example 4.6 Let R1 = {Fj(Cj, x) → F_{j+1}(x, x), Fj(x, y) → x, Fj(x, y) → y | j ∈ ℕ} and R2 = {g(x, y) → x, g(x, y) → y, a → Cj | j ∈ ℕ}. The systems share the constructors {Cj | j ∈ ℕ}. R1 and R2 are both simply terminating and non-deterministically collapsing (hence C_E-terminating), but their combined system with shared constructors R is not terminating:

F1(C1, a) →_R F2(a, a) →_R F2(C2, a) →_R F3(a, a) →_R ...
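Not in the paper, but the infinite derivation above is easy to replay mechanically. A sketch (hypothetical encoding: F_j as the string "Fj", C_j as the pair ("C", j)) applies the two rules that drive the loop:

```python
def step(t):
    """One step of the derivation F1(C1, a) -> F2(a, a) -> F2(C2, a) -> ...
    using Fj(Cj, x) -> F_{j+1}(x, x) from R1 and a -> Cj from R2."""
    name, left, right = t                 # t = ("Fj", left, right)
    j = int(name[1:])
    if left == ("C", j):                  # Fj(Cj, x) -> F_{j+1}(x, x)
        return (f"F{j + 1}", right, right)
    if left == "a":                       # rewrite the first argument: a -> Cj
        return (name, ("C", j), right)
    return None

t = ("F1", ("C", 1), "a")
trace = [t]
for _ in range(4):
    t = step(t)
    trace.append(t)
# trace replays F1(C1,a), F2(a,a), F2(C2,a), F3(a,a), F3(C3,a)
```

Each round needs a fresh constructor Cj, which is exactly why R2's infinitely many a → Cj rules keep the derivation alive.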
Using the results from Section 2 this can be interpreted as follows: If R1 and R2 are simplifying, i.e., their rewrite relations are contained in some simplification orderings >1 and >2, then we can find a simplification ordering >_{1,2} which contains →_{R1 ∪ R2}. But if >1 and >2 are additionally well-founded (i.e., R1 and R2 are even simply terminating), then >_{1,2} is not well-founded in general. In the above example the TRSs share infinitely many constructors. This is not essential.
Example 4.7 Let R1 = {Fj(C^j(x), x) → F_{j+1}(x, x), Fj(x, y) → x, Fj(x, y) → y | j ∈ ℕ} and R2 = {g(x, y) → x, g(x, y) → y, h(x) → C^j(x) | j ∈ ℕ}. The systems only share the constructor C, where

C^j(x) = C(C(C(...(C(C(x)))...)))   (j occurrences of C).

R1 and R2 exhibit the same behavior as the TRSs above. Next we collect some positive results.
Proposition 4.8 Simple termination is a modular property of constructor-sharing TRSs introducing only finitely many function symbols.
Proof: Clearly, the union of two TRSs which introduce only finitely many function symbols also introduces only finitely many function symbols. Thus, the proposition follows directly from Theorem 1.6 and Proposition 2.6. □
An analogous statement holds for finitely branching TRSs (see [7]).
Proposition 4.9 Simple termination is a modular property of finitely branching TRSs with shared constructors.
Note that Proposition 4.9 is not a special case of Proposition 4.8, and vice versa. To establish our next result, the generalization of Corollary 1.9, we need a few prerequisites. The proof of the next lemma can for instance be found in [18].
Lemma 4.10 Let → be a binary relation on T ⊆ T(F, V). If → is closed under contexts and well-founded on T, then ≻ = (→ ∪ ▷)^+ is a well-founded partial ordering on T (here ▷ denotes the proper superterm relation).

Lemma 4.11 Let s, t ∈ T(D, C, V) such that s →^o_{R1} t is a non-duplicating reduction step. Then S^w(t) ⊆ S^w(s).

Proof: Case (i): s is top black. We consider the following subcases:
1. root(t) ∈ D2. Then s = C^b[[..., t, ...]] →^o_{R1} t, and as a consequence we obtain S^w(t) = [t] ⊆ S^w(s).
2. root(t) ∈ C. Then s = lσ →^o_{R1} rσ = t using a black rule l → r with either root(r) ∈ C or r ∈ V. Clearly, l = C[x1, ..., xn] and r = C'[x_{i1}, ..., x_{im}], where {x_{i1}, ..., x_{im}} ⊆ {x1, ..., xn}. Since the rule is non-duplicating, even the multiset inclusion [x_{i1}, ..., x_{im}] ⊆ [x1, ..., xn] holds. Hence S^w(t) ⊆ S^w(s) follows.
3. root(t) ∈ D1. In this case the assertion follows by similar arguments as in 2.
4. t ∈ V. In this case the assertion holds vacuously.
Case (ii): s is top transparent. Let s = C^t[[s1, ..., sl, ..., sn]] →^o_{R1} t = C^t[s1, ..., tl, ..., sn], i.e., sl →^o_{R1} tl where sl is top black. Now the assertion follows from the fact that S^w(t) = (S^w(s) \ S^w(sl)) ∪ S^w(tl) in conjunction with S^w(tl) ⊆ S^w(sl) (cf. (i)). □

Statements 1 and 2 of the next proposition appeared already in Gramlich [7]. Statement 3 is the new and interesting part; it is the essence of our proof.
Proposition 4.12 Let R1 and R2 be terminating TRSs with shared constructors such that

D : s1 → s2 → s3 → ...

is an infinite R1 ∪ R2 rewrite derivation of minimal rank, i.e., any R1 ∪ R2 rewrite derivation of smaller rank is finite. Then we have for some d, d' ∈ {1, 2} with d ≠ d':
1. There are infinitely many →^o_{R_d} reduction steps in D.
2. There are infinitely many →_{R_{d'}} reduction steps in D which are collapsing or constructor-lifting.
3. There are infinitely many duplicating →^o_{R_d} reduction steps in D.

Proof: Let rank(D) = k. Hence rank(sj) = rank(D) for all indices j. Moreover, →_{R1 ∪ R2} is terminating on the set T of all terms of rank smaller than k. According to Lemma 4.10, ≻ = (→_{R1 ∪ R2} ∪ ▷)^+ is a well-founded ordering on T; let ≻_mul denote its well-founded multiset extension on M(T). Note that S^w(sj) ∈ M(T) for every index j. If sj →^i_{R1} s_{j+1} or sj →_{R2} s_{j+1}, then there is a white principal subterm u ∈ S^w(sj) such that u →_R v for some v, i.e., sj = C^b[[..., u, ...]] →_R C^b[..., v, ...] = s_{j+1}. Thus we have S^w(s_{j+1}) = (S^w(sj) \ [u]) ∪ S^w(v). It follows from u → v, in conjunction with v ⊵ w for any principal subterm w ∈ S^w(v), that u ≻ w for any w ∈ S^w(v). Therefore

S^w(sj) ≻_mul S^w(s_{j+1}).

We conclude from the well-foundedness of ≻_mul that only finitely many →^i_{R1} and →_{R2} steps can occur in the derivation D under consideration. In particular, there are only finitely many →_{R2} reduction steps which are collapsing or constructor-lifting. This contradicts statement 2. □
Theorem 4.13 Let R1 and R2 be two terminating TRSs with shared constructors such that their combined system R1 ∪ R2 is non-terminating. Then R1 is collapsing or constructor-lifting and R2 is duplicating, or vice versa.
Proof: This is an immediate consequence of Proposition 4.12. □
An equivalent formulation of Theorem 4.13 reads as follows: If R1 and R2 are terminating TRSs with shared constructors, then their combined system R1 ∪ R2 is terminating provided that one of the following conditions is satisfied:
1. Neither R1 nor R2 contains collapsing or constructor-lifting rules.
2. Neither R1 nor R2 contains duplicating rules.
3. One of the systems contains neither collapsing, constructor-lifting, nor duplicating rules.
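All three syntactic conditions of Theorem 4.13 (collapsing, constructor-lifting, duplicating) can be tested mechanically. A sketch, not from the paper, again over tuple-encoded terms with variables as strings:

```python
from collections import Counter

def variables(t):
    if isinstance(t, str):
        return Counter([t])
    c = Counter()
    for a in t[1:]:
        c += variables(a)
    return c

def collapsing(rules):
    """Some right-hand side is a variable."""
    return any(isinstance(r, str) for _, r in rules)

def constructor_lifting(rules, shared):
    """Some right-hand side is rooted by a shared constructor."""
    return any(isinstance(r, tuple) and r[0] in shared for _, r in rules)

def duplicating(rules):
    """Some rule duplicates a variable."""
    return any(any(variables(r)[v] > variables(l)[v] for v in variables(r))
               for l, r in rules)

# fragments of Example 4.6 (only j = 1):
R1 = [(("F1", ("C1",), "x"), ("F2", "x", "x")), (("F1", "x", "y"), "x")]
R2 = [(("g", "x", "y"), "x"), (("a",), ("C1",))]
```

On these fragments, R1 is duplicating and collapsing while R2 is collapsing and constructor-lifting, so none of the three sufficient conditions above applies — consistent with the non-termination of the combined system.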
5 Concluding Remarks

First of all, we point out that normalization is also a modular property of constructor-sharing TRSs (the proof for disjoint unions given in [16] can be carried over). The same holds for semi-completeness (confluence plus normalization); see [23] for details. Second, we expect that Theorem 3.16 is also true for join conditional term rewriting systems (CTRSs). This, however, does not seem to lead to practically relevant results. In contrast to the unconditional case, the class of C_E-terminating CTRSs comprises neither the class of non-duplicating terminating CTRSs nor the class of simply terminating CTRSs (if simple termination of CTRSs is defined according to Definition 2.1; cf. [9]). This is witnessed by the following non-duplicating CTRS taken from [17]:

R = {F(x) → F(x) ⇐ x ↓ A, x ↓ B}

→_R coincides with the empty relation; thus R is simply terminating and in particular terminating. Nevertheless, R is not C_E-terminating: F(Cons(A, B)) →_{R ∪ C_E} F(Cons(A, B)) because Cons(A, B) →_{C_E} A and Cons(A, B) →_{C_E} B. A result on the modularity of completeness for certain finite join CTRSs with shared constructors that is reasonably significant from a practical point of view can be found in [21].

The above example also shows that the disjoint union of two terminating non-duplicating CTRSs may be non-terminating. However, Middeldorp proved in [17]: If R1 and R2 are disjoint terminating join CTRSs, then their disjoint union R1 ⊎ R2 is terminating provided that one of the following conditions is fulfilled:
1. Both systems are non-collapsing.
2. Both systems are confluent and non-duplicating.
3. Both systems are confluent and one of them is non-collapsing and non-duplicating.

We point out that a simpler proof than that of [17] can be achieved by a simple modification of the proof structure (resulting in a proof similar to that of Theorem 4.13). Using this approach, we only have to prove statements 1 and 2, and we get 3 for free. In [2], Dershowitz has also given a proof sketch for Theorem 4.13 revealing exactly the idea underlying our proof. [2] deals with hierarchical TRSs as well. These are systems like

R1 = { add(0, x) → x, add(S(x), y) → S(add(x, y)) }

and

R2 = { mult(0, x) → 0, mult(S(x), y) → add(mult(x, y), y) }

where defined symbols of the first TRS may occur as constructors in right-hand sides of rules of the second. Denoting the set of constructors of Rj by Cj and the set of defined symbols by Dj, the relationship of the different kinds of combinations of TRSs is depicted in [11] by the next illustration.
[Illustration from [11]: (i) Disjoint Union — D1, C1, C2, D2 pairwise disjoint; (ii) Constructor-Sharing TRSs — C1 and C2 may overlap while D1 and D2 stay disjoint; (iii) Hierarchical TRSs — C2 may additionally overlap D1.]
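The hierarchical add/mult combination above is terminating, and normal forms can be computed by innermost rewriting. A sketch, not from the paper, with Peano numerals as nested tuples (a hypothetical encoding; inputs are assumed to be ground numerals):

```python
def normalize(t):
    """Innermost normalization w.r.t. R1 (add) and R2 (mult)."""
    if not isinstance(t, tuple):
        return t
    head, *args = t
    args = [normalize(a) for a in args]          # normalize arguments first
    if head == "add":
        m, n = args
        if m == "0":
            return n                              # add(0, x) -> x
        return ("S", normalize(("add", m[1], n))) # add(S(x), y) -> S(add(x, y))
    if head == "mult":
        m, n = args
        if m == "0":
            return "0"                            # mult(0, x) -> 0
        return normalize(("add", normalize(("mult", m[1], n)), n))
    return (head, *args)                          # S and 0 are constructors

two = ("S", ("S", "0"))
four = normalize(("mult", two, two))              # S(S(S(S(0))))
```

Note that the mult case rewrites to an add-term, i.e., the second system calls into the first — the defining feature of a hierarchical combination.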
Recent modularity results for certain restricted classes of hierarchical TRSs can be found in [2, 8, 11, 12]. Another class of TRSs we did not consider in this paper consists of the so-called constructor systems, where every left-hand side f(t1, ..., tn) of a rewrite rule must satisfy that the terms t1, ..., tn are built over constructors and variables only. Notice that the TRSs in the above example are constructor systems. An interesting result in this regard was obtained by Middeldorp and Toyama [20]; they proved that completeness is preserved under the combination of composable constructor systems. Two term rewriting systems R1 and R2 are called composable if C1 ∩ D2 = D1 ∩ C2 = ∅ and both systems contain all rewrite rules that define a defined symbol whenever that symbol is shared; more precisely, the equality {l → r ∈ R1 | root(l) ∈ D1 ∩ D2} = {l → r ∈ R2 | root(l) ∈ D1 ∩ D2} must hold. It should be worthwhile to investigate composable TRSs in more detail.

Acknowledgements: The author is grateful to Robert Giegerich and Aart Middeldorp for valuable comments on a previous version of the paper, to Anke Bodzin for typesetting parts of the manuscript, and to Hugh Osborne for suggestions for improving the English. Moreover, the paper has benefited from the constructive criticism of the anonymous referees.
References

[1] N. Dershowitz. A Note on Simplification Orderings. Information Processing Letters 9(5), pages 212-215, 1979.
[2] N. Dershowitz. Hierarchical Termination. Draft, Dept. of Computer Science, Hebrew University, Jerusalem 91904, Israel, 1993. Revised version to appear in: 4th International Workshop on Conditional Term Rewriting Systems.
[3] N. Dershowitz and J.-P. Jouannaud. Rewrite Systems. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 6. Elsevier / The MIT Press, 1990.
[4] N. Dershowitz and Z. Manna. Proving Termination with Multiset Orderings. Communications of the ACM 22(8), pages 465-476, 1979.
[5] K. Drosten. Termersetzungssysteme. Informatik-Fachberichte 210, Springer Verlag, 1989.
[6] J. Gallier. What's so Special about Kruskal's Theorem and the Ordinal Γ0? A Survey of some Results in Proof Theory. Annals of Pure and Applied Logic 53, pages 199-260, 1991.
[7] B. Gramlich. Generalized Sufficient Conditions for Modular Termination of Rewriting. In Proceedings of the Third International Conference on Algebraic and Logic Programming, pages 53-68. Lecture Notes in Computer Science 632, Berlin: Springer Verlag, 1992.
[8] B. Gramlich. Generalized Sufficient Conditions for Modular Termination of Rewriting. Applicable Algebra in Engineering, Communication and Computing, 1993. Extended version of [7], to appear.
[9] B. Gramlich. Relating Innermost, Weak, Uniform and Modular Termination of Term Rewriting Systems. SEKI Report SR-93-09, Universität Kaiserslautern, 1993.
[10] B. Gramlich. Sufficient Conditions for Modular Termination of Conditional Term Rewriting Systems. In Proceedings of the 3rd International Workshop on Conditional Term Rewriting Systems 1992, pages 128-142. Lecture Notes in Computer Science 656, Berlin: Springer Verlag, 1993.
[11] J.W. Klop. Term Rewriting Systems. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, volume 2, pages 1-116. Oxford University Press, 1992.
[12] M.R.K. Krishna Rao. Completeness of Hierarchical Combinations of Term Rewriting Systems. In Proceedings of the 13th Conference on the Foundations of Software Technology and Theoretical Computer Science, pages 125-139. Lecture Notes in Computer Science 761, Berlin: Springer Verlag, 1993.
[13] M.R.K. Krishna Rao. Simple Termination of Hierarchical Combinations of Term Rewriting Systems. In Proceedings of the International Symposium on Theoretical Aspects of Computer Software, pages 203-223. Lecture Notes in Computer Science 789, Berlin: Springer Verlag, 1994.
[14] M. Kurihara and A. Ohuchi. Modularity of Simple Termination of Term Rewriting Systems. Journal of IPS Japan 31(5), pages 633-642, 1990.
[15] M. Kurihara and A. Ohuchi. Modularity of Simple Termination of Term Rewriting Systems with Shared Constructors. Theoretical Computer Science 103, pages 273-282, 1992.
[16] A. Middeldorp. A Sufficient Condition for the Termination of the Direct Sum of Term Rewriting Systems. In Proceedings of the 4th IEEE Symposium on Logic in Computer Science, pages 396-401, 1989.
[17] A. Middeldorp. Modular Properties of Term Rewriting Systems. PhD thesis, Vrije Universiteit te Amsterdam, 1990.
[18] A. Middeldorp. Modular Properties of Conditional Term Rewriting Systems. Information and Computation 104(1), pages 110-158, 1993.
[19] A. Middeldorp and B. Gramlich. Simple Termination is Difficult. In Proceedings of the 5th International Conference on Rewriting Techniques and Applications, pages 228-242. Lecture Notes in Computer Science 690, Berlin: Springer Verlag, 1993.
[20] A. Middeldorp and Y. Toyama. Completeness of Combinations of Constructor Systems. Journal of Symbolic Computation 15(3), pages 331-348, 1993.
[21] E. Ohlebusch. A Simple Proof of Sufficient Conditions for the Termination of the Disjoint Union of Term Rewriting Systems. Bulletin of the European Association for Theoretical Computer Science 49, pages 178-183, 1993.
[22] E. Ohlebusch. Combinations of Simplifying Conditional Term Rewriting Systems. In Proceedings of the 3rd International Workshop on Conditional Term Rewriting Systems 1992, pages 113-127. Lecture Notes in Computer Science 656, Berlin: Springer Verlag, 1993.
[23] E. Ohlebusch. Termination is not Modular for Confluent Variable-Preserving Term Rewriting Systems, 1993. To appear in Information Processing Letters.
[24] E. Ohlebusch. On the Modularity of Confluence of Constructor-Sharing Term Rewriting Systems. In Proceedings of the 19th Colloquium on Trees in Algebra and Programming, pages 261-275. Lecture Notes in Computer Science 787, Berlin: Springer Verlag, 1994.
[25] M. Rusinowitch. On Termination of the Direct Sum of Term Rewriting Systems. Information Processing Letters 26, pages 65-70, 1987.
[26] J. Steinbach. Extensions and Comparison of Simplification Orderings. In Proceedings of the 3rd International Conference on Rewriting Techniques and Applications, pages 434-448. Lecture Notes in Computer Science 355, Berlin: Springer Verlag, 1989.
[27] Y. Toyama. Counterexamples to Termination for the Direct Sum of Term Rewriting Systems. Information Processing Letters 25, pages 141-143, 1987.
[28] Y. Toyama. On the Church-Rosser Property for the Direct Sum of Term Rewriting Systems. Journal of the ACM 34(1), pages 128-143, 1987.
[29] Y. Toyama, J.W. Klop, and H.P. Barendregt. Termination for the Direct Sum of Left-Linear Term Rewriting Systems. In Proceedings of the 3rd International Conference on Rewriting Techniques and Applications, pages 477-491. Lecture Notes in Computer Science 355, Berlin: Springer Verlag, 1989.
[30] H. Zantema. Type Removal in Term Rewriting. In Proceedings of the 3rd International Workshop on Conditional Term Rewriting Systems 1992, pages 148-154. Lecture Notes in Computer Science 656, Berlin: Springer Verlag, 1993.