SOLVING SYMBOLIC ORDERING CONSTRAINTS

HUBERT COMON†
CNRS and Laboratoire de Recherche en Informatique
Bat. 490, Universite de Paris Sud, 91405 ORSAY cedex, France.
[email protected]


ABSTRACT

We show how to solve boolean combinations of inequations s > t in the Herbrand Universe, assuming that > is interpreted as a lexicographic path ordering extending a total precedence. In other words, we prove that the existential fragment of the theory of a lexicographic path ordering which extends a total precedence is decidable.

Keywords: simplification orderings, ordered strategies, term algebras, constraint solving.

1. Introduction

The first-order theory of term algebras over a language (or alphabet) with no relational symbol (other than equality) has been shown to be decidable.1,2 See also Refs. 3 and 4. Introducing into the language a binary relational symbol > interpreted as the subterm ordering makes the theory undecidable.5 Venkataraman also shows in the latter paper that the purely existential fragment of the theory, i.e. the subset of sentences whose prenex form does not contain ∀, is decidable. Venkataraman was concerned with applications in functional programming which fit this interpretation of >. We are interested in the purely existential fragment of the theory when > is interpreted as a lexicographic path ordering. Let us briefly consider the motivations for such an interpretation.

* An abstract of this paper appeared in Proc. IEEE Logic in Computer Science, Philadelphia, 1990, under the title "Solving inequations in term algebras".
† This research was partly supported by the Greco de Programmation and partly by the ESPRIT Basic Research Action COMPASS.

Ordered rewriting and unfailing completion techniques have been introduced in Ref. 6 and successfully applied for deciding (or, in general, semi-deciding) word problems in equational theories. The idea is to replace the relation ↔E (the replacement of equals by equals) with the relation →E (ordered rewriting), which consists in an ordered strategy for the replacement of equals by equals defined as follows: u →E v if u ↔E v and u > v for a given simplification ordering > that is total on ground terms (see e.g. Ref. 7 for the missing definitions). In other words, an equation s = t ∈ E is split into two constrained equations s > t : s = t and t > s : s = t. s = t is viewed as (the denotation of) an (infinite) term rewriting system {sσ → tσ | sσ > tσ} ∪ {tσ → sσ | tσ > sσ}, σ ranging over ground substitutions. The word problem is then solved by completion. Adding new equational consequences to E leads to simpler equational proofs than the brute-force replacement of equals by equals.8,9 The equational consequences are computed by superposition of two equations (the so-called critical pairs): if u = v and s = t are two equations in E and if some non-variable subterm (at position p) of u is unifiable with s (with m.g.u. σ), the pair (vσ, uσ[tσ]p) is a critical pair provided that there exists a ground substitution θ satisfying uσθ > vσθ ∧ sσθ > tσθ. The existence of the substitution θ corresponds exactly to the problem of deciding whether sσ > tσ ∧ uσ > vσ has a solution. Up to now, this problem was unsolved when > is interpreted as a recursive path ordering. Other applications in the same vein (ordered strategies) are investigated in Ref. 10. For example, the resolution rule is restricted to ordered resolution in the following way:

P ∨ Q        ¬P′ ∨ R
---------------------    if σ = mgu(P, P′) and ∃θ, Pσθ > Qσθ ∧ Pσθ > Rσθ.
      Qσ ∨ Rσ

This is a complete strategy (together with ordered factorization, it is a complete set of inference rules), provided that > is total on ground terms. It requires however that one decides whether Pσ > Qσ ∧ Pσ > Rσ is satisfiable in the Herbrand Universe. For all such applications, > needs to be interpreted as an ordering on ground terms which enjoys the following properties:
1. It is a total ordering on ground terms. (This is required for the completeness of the strategy.)
2. It is monotonic (in the sense of Ref. 7). This is required for handling equality.
3. It is well founded. This is required if we want to incorporate simplification rules. This property is also required for completeness.
There is one well-known ordering on terms which fulfills these three requirements: the lexicographic path ordering, whose definition is recalled below (see also Ref. 7). This is why we consider such an interpretation of > in this paper. The main result of the paper is to show how to decide whether there is indeed a θ satisfying uσθ >lpo vσθ ∧ sσθ >lpo tσθ (where >lpo is the lexicographic path ordering), therefore solving the above questions. Our result also shows that the ordering ≫ on terms with variables, defined as t ≫ u iff, for all ground substitutions σ, tσ > uσ, is a decidable simplification ordering when > is a lexicographic path ordering on ground terms. ≫ generalizes the usual extension of the recursive path ordering to terms with variables. This means that more (sometimes strictly more) terms are comparable w.r.t. ≫ than w.r.t. the recursive path ordering. Let T be the theory of the term algebra over the relational symbols = and >, where > is interpreted as a lexicographic path ordering. We show the decidability of the purely existential fragment of T. The proof is carried out in three steps. The first

step (Sec. 2) consists of the transformation of any quantifier-free formula φ (i.e. all variables are free) into a solved form that has the same set of solutions as φ. In Sec. 3 we reduce the satisfiability of an arbitrary solved form to the satisfiability of some particular problems called simple systems. Roughly, a simple system is a formula which defines a total ordering on the terms occurring in it and which is closed under deduction. This last property means that, if ψ is a solved form of a simple system φ, then ψ must be a subformula of φ. In Sec. 4 we show how to reduce the satisfiability of simple systems to the satisfiability of some particular simple systems called natural simple systems. Finally, we complete the proof by showing that the satisfiability in the Herbrand Universe of a natural simple system is equivalent to the satisfiability of a system of linear inequalities over the integers.

2. Inequation Simplification

In the following, F is assumed to be a finite set of function symbols. Each function symbol f is associated with a non-negative integer a(f) called the arity of f. When a(f) = 0, f is called a constant. We assume throughout this paper that F contains at least one constant. X is a set of variable symbols, disjoint from F. Terms (or finite trees) over F and X are defined in the usual way (see e.g. Ref. 9 for missing definitions). The set of all terms is denoted by T(F, X). T(F, ∅) is simply denoted T(F). Its elements are called ground terms. As in Ref. 9, t|p denotes the subterm of t at position p and t[u]p denotes the term t in which t|p has been replaced with u. As usual, t[u]p is also used to indicate that u is the subterm of t at position p. Moreover, when p is not specified, t[u] means that u is a subterm of t at some position. Substitutions are mappings from X to T(F, X). A substitution σ is identified with the (unique) endomorphism of T(F, X) which extends it. A ground substitution is a substitution σ such that σ(X) ⊆ T(F).

Definition 1 (Syntax). An equation is an expression s = t where s and t are terms in T(F, X). = is assumed to be commutative (s = t is the same equation as t = s). An inequation is an expression s > t where s, t ∈ T(F, X). An inequational problem is a boolean combination of equations and inequations. We also use the following syntactic abbreviations:
• We sometimes write s < t in place of t > s
• ⊥ is the empty disjunction and ⊤ is the empty conjunction

• s ≥ t (resp. t ≤ s) stands for s > t ∨ s = t
• ≡ denotes syntactic equality on terms (resp. syntactic equality of inequational problems)

Let >F be a total ordering on F. The lexicographic path ordering ≥lpo is defined on T(F, X) by s ≥lpo t ⇔ s >lpo t ∨ s ≡ t, and >lpo is defined in the following way (see also Ref. 7):

s ≡ f(s1, …, sn) >lpo g(t1, …, tm) ≡ t iff one of the following holds:
• ∃i, si ≥lpo t
• f >F g and ∀j, s >lpo tj
• f = g, (s1, …, sn) >lex,lpo (t1, …, tn) and ∀j, s >lpo tj

where (s1, …, sn) >lex,lpo (t1, …, tn) iff ∃j ≤ n, ∀i < j, si ≡ ti and sj >lpo tj.

Definition 2 (Semantics). A solution of an equation s = t (resp. an inequation s > t) is a ground substitution σ (i.e. an assignment from X to T(F)) such that sσ ≡ tσ (resp. sσ >lpo tσ). This definition of a solution is extended to inequational problems in the standard way. If I is an inequational problem, then S(I) denotes the set of its solutions. The decidability problem we address in this paper is the emptiness of S(I).

>lpo is a total ordering on ground terms (see e.g. Ref. 7). Therefore, ¬(s > t) can be replaced with s < t ∨ s = t and ¬(s = t) with s > t ∨ t > s, without changing the set of solutions of the formula. This is the reason why we assume in the following (without loss of generality) that inequational problems have the following form:

⋁_{j∈J} ( s1 = t1 ∧ … ∧ sn = tn ∧ u1 > v1 ∧ … ∧ um > vm ).

By convention, if J is empty, then the inequational problem is ⊥ and, if n = m = 0 and J is not empty, then the inequational problem is ⊤.
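To make the recursive definition of >lpo concrete, here is a small Python sketch of the ground case (the tuple-based term representation and the integer precedence map are our own illustration conventions, not the paper's):

```python
# Sketch of the lexicographic path ordering >lpo on ground terms.
# A term is a nested tuple ('f', arg1, arg2, ...); `prec` maps each
# function symbol to an integer encoding the total precedence >F.

def lpo_greater(s, t, prec):
    """True iff s >lpo t (ground terms only)."""
    if s == t:
        return False
    f, g = s[0], t[0]
    # Case 1: some argument s_i satisfies s_i >=lpo t
    if any(si == t or lpo_greater(si, t, prec) for si in s[1:]):
        return True
    # Case 2: f >F g and s >lpo t_j for every argument t_j of t
    if prec[f] > prec[g]:
        return all(lpo_greater(s, tj, prec) for tj in t[1:])
    # Case 3: f = g, arguments compared lexicographically,
    # and s must still dominate every argument of t
    if f == g:
        if not all(lpo_greater(s, tj, prec) for tj in t[1:]):
            return False
        for si, ti in zip(s[1:], t[1:]):
            if si != ti:
                return lpo_greater(si, ti, prec)
    return False
```

With a total precedence this relation is total on ground terms: for distinct s and t, exactly one of lpo_greater(s, t, prec) and lpo_greater(t, s, prec) holds.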

More precisely, let →N1 (resp. →N2) be the reduction relation defined on inequational problems by the rules of Fig. 1 (resp. Fig. 2). Both sets of rules define a canonical term rewriting system modulo the commutativity of = and the associativity and commutativity of ∧ and ∨ (see Ref. 9 for definitions). Moreover, the normal form I↓N1↓N2 (abbreviated I↓N) of an inequational formula I has the same set of solutions as I. We may therefore restrict our attention to problems irreducible w.r.t. →N1 ∪ →N2 and call them disjunctive normal forms.^a

Definition 3 (Solved Forms). A solved form is either ⊤, ⊥ or a formula

x1 = t1 ∧ … ∧ xn = tn ∧ u1 > v1 ∧ … ∧ um > vm, where

^a Let us emphasize that, with our definition, there is no identity s = s in a disjunctive normal form: s = s → ⊤.

¬(s = t)  →  s > t ∨ t > s
¬(s > t)  →  t > s ∨ s = t
¬⊤        →  ⊥
¬⊥        →  ⊤
¬(a ∨ b)  →  ¬a ∧ ¬b
¬(a ∧ b)  →  ¬a ∨ ¬b

Fig. 1. Elimination of negation from inequational problems.
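The rules of Fig. 1 can be read as a straightforward recursive rewrite. The following Python sketch (our own encoding: formulas as nested tuples with tags 'eq', 'gt', 'and', 'or', 'not', and the strings 'top'/'bot' for ⊤/⊥) pushes negations down to atoms, using the totality of the ordering:

```python
# Sketch of negation elimination (Fig. 1). Atoms are ('eq', s, t) and
# ('gt', s, t); connectives are ('and', a, b), ('or', a, b), ('not', a).

def eliminate_negation(phi):
    if phi in ('top', 'bot'):
        return phi
    op = phi[0]
    if op in ('eq', 'gt'):
        return phi
    if op in ('and', 'or'):
        return (op, eliminate_negation(phi[1]), eliminate_negation(phi[2]))
    # op == 'not'
    psi = phi[1]
    if psi == 'top':
        return 'bot'
    if psi == 'bot':
        return 'top'
    if psi[0] == 'eq':            # not(s = t) -> s > t or t > s
        s, t = psi[1], psi[2]
        return ('or', ('gt', s, t), ('gt', t, s))
    if psi[0] == 'gt':            # not(s > t) -> t > s or s = t
        s, t = psi[1], psi[2]
        return ('or', ('gt', t, s), ('eq', s, t))
    if psi[0] == 'not':           # double negation
        return eliminate_negation(psi[1])
    if psi[0] == 'and':           # not(a and b) -> not a or not b
        return ('or', eliminate_negation(('not', psi[1])),
                      eliminate_negation(('not', psi[2])))
    # not(a or b) -> not a and not b
    return ('and', eliminate_negation(('not', psi[1])),
                   eliminate_negation(('not', psi[2])))
```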

s = s          →  ⊤
s > s          →  ⊥
s < t          →  t > s
(a ∨ b) ∧ c    →  (a ∧ c) ∨ (b ∧ c)
a ∧ ⊤          →  a
a ∨ ⊥          →  a
a ∨ ⊤          →  ⊤
a ∧ a          →  a
a ∨ a          →  a
a ∧ ⊥          →  ⊥

Fig. 2. Normalization of inequational problems.

• x1, …, xn are variables occurring only once in the formula
• for each i ∈ {1, …, m}, ui or vi is a variable
• for each index i ∈ {1, …, m}, ui is not a subterm of vi nor vi of ui

We now describe a set of rules R for the transformation of inequational formulas. The corresponding reduction relation is written ⇒R. R is called correct if φ ⇒R φ′ implies that φ and φ′ have the same solutions. Given a set of solved forms (as above), R is called complete if any normal form for ⇒R is a solved form.

Each rule in Fig. 3 is followed by a (possibly empty) condition. This defines a class of algorithms by choosing any sequence of reductions that fulfills the conditions. The rules apply to disjunctive normal forms of problems. Therefore, we assume that a normalization (w.r.t. →N) is performed after each reduction with ⇒. (This must be kept in mind when proving termination.)

Proposition 1. The rules given in Fig. 3 are correct, complete and terminating.

Proof. Correctness is a direct consequence of the definition of >lpo: assume that s ≡ f(s1, …, sn) and t ≡ g(t1, …, tm) are two ground terms. Then s >lpo t iff one of the following holds:
1. f >F g and either ∃i, si ≥lpo t or ∀i, s >lpo ti. However, if si ≥lpo t for some i, then s >lpo tj for every j. Therefore, the former case is useless. This corresponds to the rule (D2).
2. g >F f; then one of the si's must be greater than or equal to t. This corresponds to rule (D3).
3. f = g; then either one of the si's is greater than or equal to t, or (s1, …, sn) >lex,lpo (t1, …, tn). This corresponds to rule (D4).
The correctness of the other rules is obvious. Completeness is easy to check when the system terminates. Let us prove termination. Consider the following interpretation functions:
• τ1(s1 = t1 ∧ … ∧ sn = tn ∧ u1 > v1 ∧ … ∧ um > vm) is the multiset of multisets of natural numbers:

{{|s1|, |t1|}, …, {|sn|, |tn|}, {|u1|, |v1|}, …, {|um|, |vm|}}

where |s| is the number of function symbols and variables occurring in s (also called the size of s). Such multisets are ordered by the usual multiset extension of orderings (see e.g. Ref. 11).
• τ2(s1 = t1 ∧ … ∧ sn = tn ∧ u1 > v1 ∧ … ∧ um > vm) is the number of unsolved variables in the system. A variable x is solved in such a system if x is a member of an equation and x occurs only once in the system.
• τ(⋁_{j∈J} cj), where cj is a conjunction of equations and inequations, is the multiset of pairs (τ2(cj), τ1(cj)). Such interpretations are ordered using the multiset extension of the lexicographic ordering on pairs.
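Since the element order (here, the naturals) is total, the multiset extension used by these interpretation functions can be decided by a sorted lexicographic comparison. A minimal sketch, with our own list-based representation of multisets:

```python
# Sketch of the multiset extension of the order on naturals: M >mul N iff,
# comparing the elements in decreasing order, the first difference favours M
# (a longer sequence wins on a common prefix). This shortcut is correct only
# because the underlying element order is total.

def multiset_greater(m, n):
    a, b = sorted(m, reverse=True), sorted(n, reverse=True)
    if a == b:
        return False
    for x, y in zip(a, b):
        if x != y:
            return x > y
    return len(a) > len(b)
```

For instance, replacing the pair {1 + b1 + … + bn, c} by the smaller pair {bi, c}, as in the proof for rule (D3) below, strictly decreases this measure.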

Equality Rules
(D1) f(v1, …, vn) = f(u1, …, un) ⇒ v1 = u1 ∧ … ∧ vn = un
(C1) f(v1, …, vn) = g(u1, …, um) ⇒ ⊥
     If f ≠ g
(R)  x = t ∧ P ⇒ x = t ∧ P{x ↦ t}
     If x is a variable, x ∉ Var(t), P is a conjunction of equations and inequations, x ∈ Var(P) and, if t is a variable, then t ∈ Var(P).
(O1) s = t[s]p ⇒ ⊥
     If p ≠ ε.

Inequality Rules
(D2) f(v1, …, vn) > g(u1, …, um) ⇒ f(v1, …, vn) > u1 ∧ … ∧ f(v1, …, vn) > um
     If f >F g
(D3) f(v1, …, vn) > g(u1, …, um) ⇒ v1 ≥ g(u1, …, um) ∨ … ∨ vn ≥ g(u1, …, um)
     If g >F f
(D4) f(v1, …, vn) > f(u1, …, un) ⇒
     (v1 > u1 ∧ f(v1, …, vn) > u2 ∧ … ∧ f(v1, …, vn) > un)
     ∨ (v1 = u1 ∧ v2 > u2 ∧ f(v1, …, vn) > u3 ∧ … ∧ f(v1, …, vn) > un)
     ∨ …
     ∨ (v1 = u1 ∧ v2 = u2 ∧ … ∧ vn > un)
     ∨ v1 ≥ f(u1, …, un) ∨ … ∨ vn ≥ f(u1, …, un)
(O2) t[s]p > s ⇒ ⊤
     If p ≠ ε.
(O3) s > t[s] ⇒ ⊥
(T1) s > t ∧ t > s ⇒ ⊥
(T2) s = t ∧ s > t ⇒ ⊥

Fig. 3. Transformation rules.
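The decomposition rules (D2), (D3) and (D4) can be sketched as a function from one inequation to a disjunction of conjunctions of atoms. This is a simplified illustration under our own conventions (tuple terms, atom tags 'gt'/'eq', integer precedence); the normalization w.r.t. →N and the occurrence rules are not included:

```python
# Sketch of rules (D2)-(D4) on one inequation f(v1,...,vn) > g(u1,...,um).
# The result is a disjunction (outer list) of conjunctions (inner lists).

def decompose(s, t, prec):
    f, vs = s[0], list(s[1:])
    g, us = t[0], list(t[1:])
    if prec[f] > prec[g]:
        # (D2): s > u1 and ... and s > um
        return [[('gt', s, u) for u in us]]
    if prec[g] > prec[f]:
        # (D3): v1 >= t or ... or vn >= t (>= split into > and =)
        return ([[('gt', v, t)] for v in vs] +
                [[('eq', v, t)] for v in vs])
    # (D4), f = g: lexicographic disjuncts, then the subterm disjuncts vi >= t
    cases = []
    for k in range(len(vs)):
        conj = [('eq', vs[i], us[i]) for i in range(k)]
        conj.append(('gt', vs[k], us[k]))
        conj += [('gt', s, us[j]) for j in range(k + 1, len(us))]
        cases.append(conj)
    return cases + [[('gt', v, t)] for v in vs] + [[('eq', v, t)] for v in vs]
```

Iterating this decomposition (with normalization after each step) is exactly what drives the transformation of a problem towards solved form.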

We now prove that τ is strictly decreasing under application of any rule to an inequational problem. Assume that I ≡ c ∨ ⋁_{j∈J} cj, and c ⇒ ⋁_{j∈J′} c′j. Then we have to prove that, for every j ∈ J′, either τ2(c′j) < τ2(c), or τ2(c′j) = τ2(c) and τ1(c′j) < τ1(c). Actually, for every rule and every j, τ2(c′j) ≤ τ2(c), because no rule can turn a solved variable into an unsolved one, except if the resulting formula is ⊤ or ⊥. Assume now that τ2(c′j) = τ2(c) for some j. Note that this excludes the replacement rule (R), for which the number of unsolved variables is strictly decreasing. It is easy to check the strict decreasingness of τ1. Let us show it for e.g. (D3):

c ≡ c′ ∧ f(v1, …, vn) > g(u1, …, um) ⇒ (c′ ∧ v1 = g(u1, …, um)) ∨ … ∨ (c′ ∧ vn = g(u1, …, um)) ∨ (c′ ∧ v1 > g(u1, …, um)) ∨ … ∨ (c′ ∧ vn > g(u1, …, um)).

For each j, c′j is either some c′ ∧ vi = g(u1, …, um) or some c′ ∧ vi > g(u1, …, um).

In both cases,

τ1(c) = {a1, …, ak, {1 + b1 + … + bn, c}}, where, for each i, bi = |vi| and c = |g(u1, …, um)|, and τ1(c′j) = {a1, …, ak, {bi, c}} for some i. By definition of the multiset ordering, τ1(c′j) < τ1(c). This proves that τ is strictly decreasing under application of any rule. Since τ interprets the inequational problems in a well-founded domain, this proves termination. □

Example 1. Assume that F′ = F ∪ {h, g} (where h, g ∉ F) with h >F g >F f. Then

h(u1, u2) > g(h(u1, v2), h(v1, u2)) ⇒R u1 > v1 ∧ u2 > v2.

This shows that solving a conjunction of inequations is equivalent to solving a single inequation w.r.t. another set of function symbols. The reduction relation ⇒R is not sufficient for deciding the existence of a solution. Indeed, there are irreducible inequational problems that are different from ⊥ but do not have any solution.

Example 2. Let F = {…, s, 0} with … >F s >F 0. The following problem:

s(x) > y ∧ y > x

has no solution since, for every ground term x, there is no term between x and s(x). The above example shows that our rule system is not sufficient. It suggests the use of the following rules:

v > u → v = succ(u) ∨ v > succ(u)
succ(u) > v → v = u ∨ u > v

if succ(u) is a term such that, for all ground substitutions σ, there is no term between succ(u)σ and uσ. (We say that succ(u) is the successor of u.)
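Example 2 can also be explored mechanically: over F = {s, 0} the ground terms are s^n(0), and >lpo orders them exactly like the naturals, so the constraint asks for a term strictly between x and s(x). A brute-force search over a finite prefix of the term universe (an illustration of the gap argument, not a proof) finds no solution:

```python
# Example 2 in miniature: ground terms s^n(0) are encoded by the natural n,
# which is exactly their >lpo order. The constraint s(x) > y and y > x then
# asks for an integer strictly between x and x + 1.

def solutions(bound):
    """All (x, y) satisfying s(x) > y and y > x, searched among the
    first `bound` ground terms (brute force, illustration only)."""
    return [(x, y)
            for x in range(bound)
            for y in range(bound)
            if x + 1 > y and y > x]
```

The search returns the empty list for any bound, matching the argument above.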

Unfortunately, there are two problems with these rules. First, they do not terminate, because we can derive an infinite sequence v > succ^n(u) (there may be a "gap" between v and u). Therefore, we have to find in which situations they should be used. Secondly, they are not complete. Indeed, a term u may have some instances that are successor terms and some instances that are not successor terms.

Example 3. Assume F = {g >F f >F 1 >F 0}, where f is a binary function symbol. Then f(x, y) has some instances which are successor terms (f(0, g(0)) is the successor of g(0)) and some instances which are not successor terms: f(1, 0) is a "limit ordinal":

0 < 1 < f(0, 0) < f(0, 1) < f(0, f(0, 0)) < f(0, f(0, 1)) < … < f(1, 0) < …

Even worse, it may happen that all instances of a term v are successor terms but they cannot be written as successors of instances of a single term u.

Example 4. Using the same ordering >F as in Example 3, f(0, x) is a successor term for every ground term x. But f(0, 0) = succ(1), whereas f(0, t) = succ(t) if t ≥lpo f(1, 0) (see Sec. 4 for more details). Then, for example, the problem

f(0, x) > y ∧ x > f(1, 0) ∧ y > x

has no solution. This shows that we have to study the successor function more deeply in order to derive the right (terminating) rules. This will be achieved in Sec. 4. Before that, let us simplify the problem by reducing the satisfiability of solved forms to the satisfiability of some particular formulas.

3. From Solved Forms to Simple Systems

In this second step, we show how to reduce the satisfiability of inequational problems to the satisfiability of simple systems. The basic idea is to perform at once all possible identifications between terms occurring in I. Then one need no longer consider equalities: we may assume that all terms occurring in I are distinct, and they may therefore be totally ordered.

Example 5. Let I ≡ f(y, 0) > x ∧ y > f(0, x). We sketch our method on this simple example. The possible identifications are x = 0, x = y, x = f(y, 0), y = 0, y = f(0, x) and all combinations of them. Consider, for example, the identification x = 0. We get the system

f(y, 0) > 0 ∧ y > f(0, 0).

Now the terms occurring in the system are assumed to be distinct and hence can be totally ordered. This leads to the systems

f(y, 0) > y > 0 > f(0, 0),   f(y, 0) > y > f(0, 0) > 0,   ….

The first system can be removed since 0 > f(0, 0) ⇒R ⊥. The second is a simple system.

The section is organized as follows: we first define simple systems (3.1), then perform all possible identifications (3.2) and finally consider all total orderings on the subterms which do not lead to a contradiction, using R (3.3).

3.1. Definition of Simple Systems

A system is a conjunction of equations and inequations. It can be considered either as a formula or as a set of equations and inequations. If I is a system, then Sub(I) denotes the set of all (sub)terms of all terms t that occur as a member of an equation or an inequation in I. A simple system is a system I satisfying the following properties:
• There exists a finite set of terms {t1, …, tn} such that

I ≡ ⋀_{1 ≤ i < j ≤ n} ti > tj.

• If I ⇒R ⋁_{j∈J} cj, then some cj is a subformula of I (i.e. there exists a formula c such that I ≡ c ∧ cj). This means in particular that J cannot be empty.
• Every subterm of a term ti is some term tj.
Such systems will be written in the following way:

t1 > … > tn.

Note that a simple system, by definition, cannot be reduced to ⊥ using R, because of the second condition above. In particular, a simple system cannot contain any inequation s > t[s]. Moreover, note that the second condition above is equivalent to the following (stronger) property: if I′ is a subsystem of I and I′ ⇒R ⋁_{j∈J} cj, then some cj is a subsystem of I. In the following we will also use this stronger version of the second condition.

3.2. Identifications

A variable x that occurs in a system I is solved in I if x occurs only once in I and there is an equation x = t in I. We consider the following rule (identification):

(Id) I →Id x = t ∧ I{x ↦ t}, if x ∈ Var(I) − Var(t) is not solved in I, t ∈ Sub(I) and t is not a solved variable.

The reduction relation →Id is terminating because each application of the rule strictly decreases the number of unsolved variables in the system. Therefore, the set G(I) of systems I′ (in disjunctive normal form) such that I →*Id I′ is finite. The equational part of a system I is the conjunction of all equations occurring in I. The inequational part IP(I) is the conjunction of inequations occurring in I. Let

H(I) = {I′ | ∃I″ ∈ G(I), I′ = IP(I″)}.

Example 6. Let f >F 0 and I ≡ f(y, 0) > x ∧ y > f(0, x). Then

H(I) = { f(y, 0) > x ∧ y > f(0, x);  f(y, 0) > 0 ∧ y > f(0, 0);  f(y, 0) > y ∧ y > f(0, y);  ⊥;  f(0, 0) > x ∧ 0 > f(0, x);  f(0, 0) > 0 ∧ 0 > f(0, 0) }.

Lemma 1. H(I) is finite.

Proof. This follows from the termination property of →Id. □

Now, because all possible identifications have been considered, we may assume that all terms occurring in a formula I′ ∈ H(I) are distinct:

Lemma 2. I has a solution iff there is an inequational problem I′ ∈ H(I) which has a solution σ such that

for all distinct s, t ∈ Sub(I′), sσ ≢ tσ.

Proof. If σ is a solution of some I′ ∈ H(I), then, by construction, there is an I″ ≡ x1 = t1 ∧ … ∧ xn = tn ∧ I′ where x1, …, xn are solved in I″ and I →*Id I″. Then the substitution extending σ with {x1 ↦ t1σ, …, xn ↦ tnσ} is a solution of I. Conversely, if σ is a solution of I, then we construct Gσ(I) as follows:
• I0 ≡ I.
• If there is a variable x ∈ Var(In) and a term t ∈ Sub(In) such that xσ ≡ tσ and In →Id x = t ∧ In{x ↦ t}, then In+1 ≡ x = t ∧ In{x ↦ t}.
• Otherwise, Gσ(I) = In.
The above construction terminates because →Id terminates, and, by definition, IP(Gσ(I)) ∈ H(I). Moreover, by construction, we have the following properties:
• σ is a solution of Gσ(I) (σ is a solution of each In).
• If x ∈ Var(IP(Gσ(I))) and t ∈ Sub(IP(Gσ(I))) is distinct from x, then xσ ≢ tσ. □

3.3. Considering all Compatible Total Orderings

For every system I, let K(I) be the set of all total orderings ≻ on Sub(I) compatible with the inequations in I, i.e. such that s > t ∈ I ⇒ s ≻ t. Then, if I is a system, let

S(I) = { ⋀_{s ≻ t} s > t | ≻ ∈ K(J), J ∈ H(I) }.

Finally, let D(I) be the set of systems I′ in S(I) that cannot be reduced to ⊥ using ⇒R. The following lemmas show that the satisfiability of an inequational problem reduces to the satisfiability of some system in D(I), which is a finite set of simple systems.

Lemma 3. A conjunction of inequations I has a solution iff there is some system in D(I) that has a solution.

Proof. If some system in D(I) has a solution σ, then σ is also a solution of some I′ ∈ H(I) and, by Lemma 2, I has a solution. Conversely, assume that σ is a solution of I. Then, by Lemma 2, σ is a solution of some I″ ∈ H(I) and σ is injective on Sub(I″). Therefore, σ is a solution of some system I′ ∈ S(I): it is sufficient to choose ≻ as follows:

s ≻ t ⇔ sσ >lpo tσ.

This indeed defines a total ordering on Sub(I″) because, if sσ ≡ tσ and s ≢ t, there is a variable x in Var(s, t) and a subterm u of s or t such that xσ ≡ uσ, which contradicts the injectivity of σ. Now, I′ cannot be reduced to ⊥ (since it has a solution): I′ ∈ D(I). □
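For small systems, the set K(I) of compatible total orderings can be enumerated naively. A sketch with our own representation (terms as plain labels, an inequation s > t as the pair (s, t)); this is exponential in |Sub(I)| and meant only as an illustration:

```python
# Sketch of K(I): all total orderings on Sub(I) compatible with the
# inequations of I, enumerated as permutations.
from itertools import permutations

def compatible_orderings(terms, inequations):
    """Yield tuples (t1, ..., tn), read as t1 > t2 > ... > tn, such that
    every (s, t) in `inequations` places s strictly before t."""
    for perm in permutations(terms):
        rank = {u: i for i, u in enumerate(perm)}  # smaller rank = greater term
        if all(rank[s] < rank[t] for s, t in inequations):
            yield perm
```

Each enumerated ordering yields one candidate system of S(I); those not reducible to ⊥ form D(I).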

Lemma 4. D(I) is a finite computable set of simple systems.

Proof. We only have to prove that, if I′ ∈ D(I) and I′ ⇒R ⋁_{j∈J} cj, then some cj is a subformula of I′. Since I′ is a conjunction of inequations, only (D2), (D3), (D4), (O2), (O3), (T1), (T2) may apply. Since I′ 6⇒R ⊥, (O3), (T1), (T2) cannot apply. Moreover, if the rule replaces an inequation of I′ with ⊤,^b then J contains a single element (assume it is 1). c1 is obviously a subformula of I′. We exclude now this case. This means in particular that only the rules (D2), (D3), (D4) have still to be considered.

Assume now that I′ ⇒_{D2,D3,D4} ⋁_{i∈J} ci. Let i0 be an index i0 ∈ J such that ci0 6⇒R ⊥. Such an index does exist since I′ 6⇒R ⊥. Sub(ci0) ⊆ Sub(I′) because each term occurring in the right-hand side of a rule Di is a subterm of some term occurring in the left-hand side. On the other hand, if s, t ∈ Sub(I′) are two distinct terms, then either s > t or t > s occurs in I′, by construction. Then, for every equation s = t and every inequation s > t occurring in ci0, either s > t or t > s occurs in I′. This means that

^b This occurs if the rule (O2) is used, but also in other situations, for example, with the rule (D3) when vi ≡ g(u1, …, um). Indeed, in such a case, the rule produces an identity s = s which is immediately reduced to ⊤ by the normalization of formulas.

1. ci0 contains no equation. (Otherwise, it could be reduced to ⊥ by (T2).)
2. If s > t ∈ ci0, then s > t ∈ I′. (Otherwise, ci0 would contain both inequations s > t and t > s. Then it could be reduced to ⊥ by (T1).)

This shows that ci0 is a subformula of I′. □

From the previous lemmas, it is sufficient to study simple systems. This is what we are going to do in the next section.

4. Satisfiability of Simple Systems

We show here how to decide the satisfiability of simple systems. Once again, we split the satisfiability problem into three steps. First, we establish some technical results about >lpo. Then we show how to solve a particular kind of simple systems: the natural simple systems. Finally, we reduce the satisfiability of a simple system to the satisfiability of finitely many natural simple systems.

4.1. Successor of a Ground Term

The ordering >F is assumed to be total. We assume moreover that F contains at least one non-constant function symbol. (If F is a set of constants, then T(F) is finite and it is easy to find all solutions of an inequational problem by trying all possible ground instances of the variables.) Let 0 be the least constant symbol and f be the least non-constant function symbol. C is the set of constant function symbols that are either 0 or smaller (w.r.t. >F) than any non-constant function symbol. Let C = {c1 >F … >F cn = 0}. Some terms play a special role w.r.t. the successor function in T(F) (there is a "discontinuity", as shown in Sec. 2). Let N be the least set of terms solution of the equation N = C ∪ f(0, …, 0, N). We also denote by N(X) the least solution of

N(X) = C ∪ X ∪ f(0, …, 0, N(X)).

Note that any term in N(X) contains at most one occurrence of a variable. succ(t) denotes the successor of t ∈ T(F), defined as follows:
• succ(ci+1) = ci.
• succ(c1) = f(0, …, 0).
• If t ≡ f(0, …, 0, t′) ∈ N, then succ(t) = f(0, …, 0, succ(t′)).
• In all other cases, succ(t) = f(0, …, 0, t).
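The successor function can be sketched directly for the signature of Example 3 (g >F f >F 1 >F 0, f binary), where C = {1, 0} and f is the least non-constant symbol. The tuple representation and the helper in_N are our own conventions:

```python
# Sketch of succ for the signature of Example 3: constants 0 and 1 (both
# below every non-constant symbol), f the least non-constant symbol (binary),
# and N the least solution of N = C ∪ f(0, N).

ZERO, ONE = ('0',), ('1',)
C = [ONE, ZERO]                       # c1 = 1 >F c2 = 0

def in_N(t):
    """True iff t belongs to N = C ∪ f(0, N)."""
    if t in C:
        return True
    return t[0] == 'f' and t[1] == ZERO and in_N(t[2])

def succ(t):
    if t in C:
        i = C.index(t)
        # succ(c_{i+1}) = c_i, and succ(c1) = f(0, 0)
        return ('f', ZERO, ZERO) if i == 0 else C[i - 1]
    if t[0] == 'f' and t[1] == ZERO and in_N(t):
        return ('f', ZERO, succ(t[2]))  # t = f(0, t') in N
    return ('f', ZERO, t)               # all other cases
```

For instance succ(1) = f(0, 0) and succ(f(1, 0)) = f(0, f(1, 0)), matching the "discontinuity" discussed in Example 4.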

Lemma 5. Let s, t be two ground terms. Then s >lpo t iff s ≥lpo succ(t).

In particular, there is no term between t and succ(t).

Proof. succ(t) >lpo t is straightforward. We only have to prove that s >lpo t ⇒ s ≥lpo succ(t). We prove this implication by induction on the depth of t. If t ∈ C, this is straightforward. If t is a constant and t ∉ C, then s >lpo t implies that there is a subterm u ≡ g(u1, …, um) of s such that g ≥F t. On the other hand, t >F f since t ∉ C. Therefore, g(u1, …, um) >lpo t implies g(u1, …, um) >lpo f(0, …, 0, t) ≡ succ(t). Finally, if g = t, then u is a strict subterm of s and the top symbol of s is greater than f, which shows again that s ≥lpo f(0, …, 0, t) ≡ succ(t). Assume now that the lemma holds up to a certain depth. As above, s >lpo t ≡ g(t1, …, tn) implies that
• either there is some subterm u of s such that u >lpo t, u ≡ h(u1, …, um), h ≥F g, and no proper subterm of u has this property,
• or t is a proper subterm of s and we are not in the above case.

We now investigate a number of cases.
Case 1. h >F g and t ∉ N. Then u >lpo t and u >lpo 0 imply u >lpo f(0, …, 0, t), since h >F f, by definition of >lpo. Therefore, u >lpo succ(t).
Case 2. h >F g = f and t ∈ N. Either t is a constant and the result is straightforward, or t ≡ f(0, …, 0, t′). Then u ≥lpo succ(t′) by induction hypothesis. But u cannot equal succ(t′), since the top symbol of u is h and the top symbol of succ(t′) is f, by definition of succ. Therefore, u >lpo succ(t′). Now, h >F f implies u >lpo f(0, …, 0, succ(t′)) ≡ succ(t).
Case 3. h = g >F f. In such a case, t ∉ N, and h >F f implies u >lpo f(0, …, 0, t) = succ(t) by definition of >lpo.
Case 4. h = g = f, ui ≢ 0 for some i < n, and t ∉ N. By definition of >lpo, u >lpo t and ui >lpo 0 for some i < n imply f(u1, …, un) >lpo f(0, …, 0, t).
Case 5. h = g = f, ui ≢ 0 for some i, and t ∈ N. By induction hypothesis, u >lpo tn implies u ≥lpo succ(tn) and, since ui ≢ 0, u cannot equal succ(tn). Therefore, by definition of >lpo, u >lpo f(0, …, 0, succ(tn)).
Case 6. u ≡ f(0, …, 0, un). Since un 6>lpo t and u >lpo t, t must equal f(0, …, 0, tn) and un >lpo tn. By induction hypothesis, un ≥lpo succ(tn). Now, either t ∈ N and obviously u ≥lpo succ(t), or t ∉ N. In the latter case, tn ∉ N and succ(tn) ≡ t. Thus, un ≥lpo t and therefore u ≥lpo succ(t).
Case 7. t is a proper subterm of s and s ≡ h(s1, …, sn) satisfies g >F h. Then g ≠ f and h ∉ C (and therefore t ∉ N). Now, either h = f and s ≥lpo succ(t) follows from s1 ≥lpo 0, …, sn ≥lpo 0 and si ≥lpo t for some i, or else h >F f and the conclusion follows from s >lpo 0 and s >lpo t. □

Lemma 5 actually states precisely the relationship between ground terms and ordinal numbers: succ is the successor function.

Example 7. Let us come back to Example 3: g >F f >F 1 >F 0. Listing the terms in increasing order, we find (using classical ordinal notations):

ordinal:  0       1       2        3        4            …
term:     0       1       f(0,0)   f(0,1)   f(0,f(0,0))  …

ordinal:  ω       ω+1          ω+2               …
term:     f(1,0)  f(0,f(1,0))  f(0,f(0,f(1,0)))  …

ordinal:  ω·2     ω·2+1        …   ω²           ω²+1              …
term:     f(1,1)  f(0,f(1,1))  …   f(1,f(1,0))  f(0,f(1,f(1,0)))  …

ordinal:  ω^ω          …   ω^(ω^ω)      …   ε0                …
term:     f(f(0,0),0)  …   f(f(1,0),0)  …   f(f(f(0,0),0),0)  …

(See Ref. 12 for a description of the order type of recursive path orderings.) The set of terms that indeed have a predecessor will be useful. Let ST be the subset of T(F) defined by:

ST = {t ∈ T(F) | ∃u ∈ T(F), t ≡ succ(u)}.

Lemma 6. We have the following properties of ST and N:
• ST ∪ {0} = N ∪ {f(0, …, 0, u) | u ∈ T(F)}.
• ∀t ∈ T(F), succ(t) ≠ f(0, …, 0, t) ⇒ t ∈ N.
• N is order-isomorphic to the natural numbers.
• Every term in T(F) − N is greater (w.r.t. >lpo) than every term in N.

Proof. The first property follows from the definition of succ. The second property follows from the definition of succ by induction on the depth of t. The third property follows from Lemma 5, since N is the set of the terms succ^n(0) for every natural number n. The fourth property is a consequence of the third one. □

Extending the previous notation, ST∃(X) will be the set of terms that have an instance in ST, and ST∀(X) the set of terms whose ground instances are all in ST:

Lemma 7.

ST∃(X) ∪ {0} = N(X) ∪ {f(y1, …, yk, u) | ∀i, yi ∈ X ∪ {0}, u ∈ T(F, X)}

and

ST∀(X) ∪ {0} = N ∪ {f(0, …, 0, u) | u ∈ T(F, X)}.

Proof. Let A be the right-hand side of the first equality in the lemma. If t ∈ A, then replacing all the variables of t by 0 leads to a term in ST ∪ {0}, by Lemma 6. Therefore, A ⊆ ST∃(X) ∪ {0}. Conversely, replacing any subterm of a term t ∈ ST ∪ {0} with a variable gives a term in A. If u ∈ T(F, X), then, for every substitution σ, f(0, …, 0, uσ) ∈ ST by Lemma 6. Conversely, if t ∈ ST∀(X), then every ground instance of t is either in C or has the form f(0, …, 0, v), by Lemma 5. This means that t is either in N or has the form f(0, …, 0, u) where u ∈ T(F, X). □

Example 8. In the previous example, we find:

T(F) − ST ⊇ {f(1,0), f(1,1), f(1,f(1,0)), f(f(0,0),0), f(f(1,0),0), g(0)}

T(F,X) − ST∃(X) = {0} ∪ {f(1,t) | t ∈ T(F,X)} ∪ {g(t) | t ∈ T(F,X)} ∪ {f(f(t1,t2),t3) | t1,t2,t3 ∈ T(F,X)} ∪ {f(g(t1),t2) | t1,t2 ∈ T(F,X)}
4.2. Natural Simple Systems

A natural simple system is a simple system t1 > ... > tn such that, for every i, ti ∈ N(X). The following lemma shows that natural simple systems can easily be solved. It will also be used for reducing simple systems to natural simple systems.

Lemma 8. Let I ≡ t1 > ... > tn be a natural simple system. Then I has a solution iff it has a solution σ such that ∀x ∈ Var(I), xσ ∈ N.

Proof. Assume that σ is a minimal solution of I, i.e. there is no solution θ other than σ such that, for every variable x, xθ ≤lpo xσ (such a σ exists because >lpo is well-founded). Suppose now that some variable x satisfies xσ ∉ N. We are going to derive a contradiction. Let I ≡ ... tk > t_{k+1} ... be such that tkσ ∉ N and t_{k+1}σ ∈ N. (It is always possible to assume such a situation, possibly by adding tn > 0 when k = n.) Each xiσ can be viewed as an ordinal; in order to keep the argument concise, we will use these ordinal notations until the end of the proof. In this framework, each element of N corresponds to a natural number and, by Lemma 6, f(0,...,0,t) corresponds to t + 1 if t ∉ N. Each tiσ may be written εi·xiσ + Ni, where εi is 0 or 1 and Ni is a natural number; this is so because ti is supposed to belong to N(X). tk must be a variable (otherwise, there would be a variable y occurring in tk such that yσ ∉ N, which contradicts the fact that all tjσ, j > k, are in N). Let α be the ordinal tkσ. Now let X0 be the set of variables y such that yσ = α + Ny with Ny < ω. We are going to show that the substitution θ defined by

  xθ = t_{k+1}σ + Nx + 1 if x ∈ X0,
  xθ = xσ otherwise,

is again a solution of I such that xθ <lpo xσ for x ∈ X0. We investigate all possible cases for an inequation tm > tj of I, showing that θ is a solution:

• If m > k, then obviously θ is a solution of tm > tj.
• If m ≤ k and j > k, then tmθ >lpo t_{k+1}θ by definition of θ. Therefore, θ is a solution of tm > tj.
• If tjσ ≥ α + ω, then xj ∉ X0. Thus tjθ = tjσ; in the same way, tmθ = tmσ. θ is again a solution.
• If tmσ ≥ α + ω and tjσ < α + ω, then tmθ = tmσ > tjσ ≥ tjθ and θ is a solution of tm > tj.
• Suppose finally that α + ω > tmσ > tjσ ≥ α. This means that xm, xj ∈ X0 with xmσ = α + Nm > xjσ = α + Nj. Then xmθ = t_{k+1}σ + 1 + Nm > xjθ = t_{k+1}σ + 1 + Nj, so θ is again a solution of tm > tj.

This proves that θ is a solution of I: all possible cases for tm > tj have been investigated. On the other hand, θ < σ, which contradicts the minimality hypothesis on σ. □
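Over N, each term of N(X) denotes a value x + N (or a plain number), so Lemma 8 turns satisfiability into a chain of strict linear inequations over the natural numbers. The following brute-force sketch is our own illustration (the (variable, offset) encoding is ours, and the search bound is a crude heuristic adequate only for tiny systems):

```python
from itertools import product

def natural_system_satisfiable(terms):
    """Decide t1 > t2 > ... > tn for terms of N(X), encoded as (var, offset):
    (x, N) denotes succ^N(x), i.e. the value x + N over the naturals, and
    (None, N) denotes the ground term succ^N(0), i.e. the number N."""
    variables = sorted({v for v, _ in terms if v is not None})
    # Crude search bound: propagating the constraints from the right, a
    # minimal solution adds at most offset + 1 per step, so the total
    # offset mass plus the chain length bounds every variable.
    bound = sum(n for _, n in terms) + len(terms) + 1
    for values in product(range(bound), repeat=len(variables)):
        env = dict(zip(variables, values))
        vals = [(env[v] if v is not None else 0) + n for v, n in terms]
        if all(a > b for a, b in zip(vals, vals[1:])):
            return env  # a satisfying assignment over N
    return None

# x+3 > 5 > x is satisfiable (x = 3, say); x+1 > 5 > x is not.
assert natural_system_satisfiable([('x', 3), (None, 5), ('x', 0)]) is not None
assert natural_system_satisfiable([('x', 1), (None, 5), ('x', 0)]) is None
```

The second system fails because x would have to satisfy both x > 4 and x < 5, the same "between a number and its successor" impossibility that rules S6 and S8 exploit below.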

Corollary 1. The satisfiability of natural simple systems is decidable.

This is an easy consequence of Lemma 8, because we get linear inequations over the integers.

4.3. From Simple Systems to Natural Simple Systems

The rules given in Figs. 4 and 5 transform any simple system into either ⊤, ⊥ or a finite disjunction of simple systems. They preserve the existence of a solution. More precisely, we shall say that a rule (Si) is correct if, for every simple system I, I ⇝Si ∨_{j∈J} cj implies that each cj is a simple system and that I has a solution iff one of the cj's has a solution. In particular, when J = ∅, i.e. I ⇝ ⊥, I has no solution. In the definition of the rules of Figs. 4 and 5 we adopt the following conventions:

• I ≡ t1 > ... > tn is the simple system to which the rule is applied.
• If I ≡ t1 > ... > tn where t1,...,tn are ground terms (we call it a ground system), then I ⇒R ⊤. In such a case, I is obviously satisfiable. Therefore, we always assume that there is at least one term ti ∉ T(F).
• x1 is the greatest variable of I (i.e. the one occurring with the smallest index in t1,...,tn; such a variable exists since, by definition of simple systems, any subterm of a term ti is itself some tj).
• The symbols Πi stand for any (possibly empty) sequence tk > ... > tl.
• A simple system consisting of an empty sequence of terms or of a single term must be understood as ⊤.

0 must be the rightmost term in the system
(S1) Π1 > 0 > Π2 ⇝ ⊥  if Π2 is not empty.

(S2) Π > y ⇝ Π{y ↦ 0} > 0 ∨ Π > y > 0  if y is a variable and 0 does not occur in Π.

Elimination of the leftmost variable
(S3) x1 > Π1 ⇝ Π1
(S4) Π1 > t[x1] > Π2 > x1 > Π3 ⇝ Π1 > Π2 > x1 > Π3  if x1 does not occur in any term of the sequence Π1.

Elimination of the leftmost variable: the case t ∉ N(X)
(S5) Π1 > t > x1 > Π2 > 0 ⇝ Π1 > t > Π2 > 0  if t ∉ ST∀(X) and x1 occurs only once in the problem.

(S6) Π1 > t > x1 > Π2 ⇝ ⊥  if t ∈ ST∀(X) − N(X) and x1 occurs only once in the problem.

Reduction to natural simple systems when t ∈ N(X)
(S7) Π1 > c > x1 > Π2 ⇝ c > x1 > Π2  if c ∈ C and Π1 is not empty.

(S8) Π1 > t > x1 > Π2 > t′ > Π3 ⇝ ⊥  if t ∈ N(X), t′ ∉ N(X) and x1 ∉ Var(Π1, t).

Fig. 4. Transformation rules for simple systems

(S9) Π1 > f(0,...,0,t) > x1 > Π2 > 0 ⇝ f(0,...,0,t) > x1 > Π2 > 0  if
• every term in the sequence Π2 belongs to N(X),
• Π1 is not empty,
• x1 occurs only once in the problem,
• t ∈ N(X).

Fig. 5. Transformation rules for simple systems (continued)

The rules S1, S2 force the last term tn to be 0. The rule S3 removes the largest variable x1 if there is no upper bound constraint on it. S4 removes all occurrences of x1 but one. S5 and S6 investigate the cases where the term t just before x1 is not in N(X): either the problem has no solution, because x1 would have to lie between a term and its predecessor (S6), or x1 can be removed, because t cannot be a successor term (S5). Finally, the rules S7, S8, S9 assume t ∈ N(X) and remove from the system all terms that do not belong to N(X). The following lemmas show the correctness of all the rules (Si).

Lemma 9. (S1) is correct.

Proof. This is a consequence of the definition of 0. □

Lemma 10. (S2) is correct.

Proof. Π > y > 0 is a simple system: every subterm of some term ti is some term tj, because this property holds for Π > y. On the other hand, if

Π > y > 0 ⇒R ∨_{j∈J} cj,

then either the rule which is applied does not involve 0 and, for some j, cj ≡ c′j > 0 where c′j is a subformula of Π > y, or else the rule involves 0. In the latter case, J contains one element (call it 1) and c1 ≡ Π > y, because 0 is the least term.

Π{y ↦ 0} > 0 is a simple system: if s[t] is a term occurring in Π{y ↦ 0}, then, replacing all occurrences of 0 with y, the corresponding term s′[t′] occurs in Π, and Π > y is a simple system. Therefore, t′ is one of the ti's, and t ≡ t′{y ↦ 0} is a member of the sequence Π{y ↦ 0} > 0. In the same way, if Π{y ↦ 0} > 0 ⇒R ∨_{j∈J} cj, then either the transformation involves 0 or it can be lifted to a transformation Π > y ⇒R ∨_{j∈J} c′j where cj ≡ c′j{y ↦ 0} for every j. In both cases, there is a cj which is a subformula of Π{y ↦ 0} > 0. Finally, σ{y ↦ t} is a solution of Π > y iff either t ≡ 0 and σ is a solution of Π{y ↦ 0} > 0, or else t > 0 and σ{y ↦ t} is a solution of Π > y > 0. □

Lemma 11. (S3) is correct.

Proof. Π1 is obviously a simple system, because x1 does not occur in it. On the other hand, if σ is a solution of the left-hand side, it is obviously a solution of the right-hand side. Now, if σ is a solution of the right-hand side, let t2 be the leftmost term in Π1 and let u ≡ succ(t2σ). Then θ = σ{x1 ↦ u} is a solution of the left-hand side. □

Lemma 12. (S4) is correct.

Proof. First, the right-hand side is a simple system because
• it is a subformula of the left-hand side,
• since x1 does not occur in Π1, there is no term in Π1 having t[x1] as a subterm.

Now, we have to prove that any solution of the right-hand side is also a solution of the left-hand side. Assume that I ⇝S4 I′. Let ∨_{j∈J} cj be a solved form of ti > t[x1] (resp. t[x1] > ti) and cj0 a subsystem of I. Every inequation in cj0 has one of the following forms:
• xi > u: in this case, xi > u is a subformula of I′, since x1 is the greatest variable;
• u > xi with u different from t[x1]: in this case, u > xi is again a subformula of I′;
• t[x1] > xi with i ≠ 1: in this case, any solution of I′ is also a solution of x1 > xi and therefore a solution of t[x1] > xi.

In all cases, every solution of I′ is also a solution of cj0. Therefore, any solution of I′ is a solution of I. The converse inclusion is obvious, hence the rule (S4) is correct. □

Lemma 13. (S5) is correct.

Proof. As in the previous lemma, the right-hand side is a simple system, because it is a subformula of the left-hand side and the term which is erased (x1) does not occur elsewhere in the right-hand side. Let I ⇝S5 I′. It is sufficient to prove that, if I′ has a solution σ, then I has a solution θ. First, if x ∈ Var(t), then xσ >lpo 0 because x > 0 is a subformula of Π2. Then, from Lemmas 6 and 7, tσ ∉ ST. (Actually, t ∈ ST∃(X) implies that either t ∈ C or t ≡ f(y1,...,yk,u) with at least one index i such that yi ∈ X; t ∉ N and yiσ >lpo 0 then imply that tσ ∉ ST.) Now let Π2 > 0 ≡ u > Π3 and θ = σ{x1 ↦ succ(uσ)}. θ is a solution of I: x1θ >lpo uθ by construction, tθ ≡ tσ >lpo x1θ because tσ ∉ ST, and all other inequations are satisfied since they do not contain x1. □

Lemma 14. (S6) is correct.

Proof. We only have to prove that the left-hand side has no solution. By Lemma 7 and because of the conditions imposed on (S6), t ≡ f(0,...,0,u) with u ∉ N(X). Then, by definition of succ, for every ground substitution σ, tσ ≡ succ(uσ). On the other hand, u is not a variable and it must occur in Π2. A solution σ of the left-hand side would therefore have to satisfy succ(uσ) >lpo x1σ >lpo uσ, which is impossible by Lemma 5. □

Lemma 15. (S7) is correct.

Proof. c > x1 > Π2 is a simple system because
• it is a subformula of the left-hand side,
• Sub(c > x1 > Π2) ∩ Π1 = ∅ (indeed, a term cannot occur twice in a sequence, and a term cannot occur after one of its subterms).

We only have to prove that every solution σ of the right-hand side is a solution of the left-hand side. For, let s > t be a subformula of Π1 > c. Let ∨_{j∈J} cj be a solved form of s > t and cj0 be a subformula of Π1 > c > x1 > Π2. Every inequation in cj0 has one of the following forms:
• xi > u: in this case, the inequation occurs in c > x1 > Π2, because x1 is the leftmost variable. Therefore, xiσ >lpo uσ.
• u > xi where u is not a variable: let u ≡ g(u1,...,um). Either u occurs in c > x1 > Π2 and σ is a solution of u > xi, or u occurs in Π1. In this latter case, g >F c (otherwise, u > c ⇒R ⊥). Thus, for every σ, uσ >lpo cσ. In particular, since σ is a solution of c > xi, uσ >lpo xiσ.

We have proved that, in any case, σ is a solution of s > t. Therefore, σ is a solution of Π1 > c > x1 > Π2. □

Lemma 16. (S8) is correct.

Proof. First, if t ∈ C, the correctness is obvious, since no constant in C is greater than any instance of t′ ∉ N(X). Assume now that t ≡ f(0,...,0,v). Suppose that σ is a solution of the left-hand side I. Then x1σ >lpo t′σ ∉ N. By Lemma 6, a term in N cannot be larger than a term in T(F) − N. Therefore x1σ ∉ N. In the same way, f(0,...,0,vσ) ∉ N and therefore vσ ∉ N. From Lemma 6, f(0,...,0,vσ) = succ(vσ). But, as v ≢ x1, there is an inequation x1 > v in I. This means that succ(vσ) > x1σ > vσ, which contradicts Lemma 5. □

The crux of the proof is the correctness of (S9). We will use Lemma 8 and the following result:

Lemma 17. In any simple system t1 > ... > tn, if ti ≡ g(u1,...,up) and t_{i+1} ≡ h(v1,...,vq), then g ≥F h or t_{i+1} is a subterm of ti.

Proof. Suppose h >F g. Then

g(u1,...,up) > h(v1,...,vq) ⇒R ∨_{i=1,...,p} ui ≥ h(v1,...,vq)

and, by definition of simple systems, there is an index i such that either ui > h(v1,...,vq) occurs in I or ui ≡ h(v1,...,vq). On the other hand, ui has to occur somewhere in I, and this must be after g(u1,...,up): either h(v1,...,vq) > ui occurs in I or h(v1,...,vq) ≡ ui. There is only one consistent combination of these two observations: ui ≡ h(v1,...,vq). □

Lemma 18. (S9) is correct.

Proof. By the control, every term in Π2 belongs to N(X). (In particular, t belongs to this set.) Assume I ⇝S9 I′. By Lemma 8, I′ has a solution iff it has a solution σ such that every variable x satisfies xσ ∈ N. We are going to show that σ is a solution of I. Let Π1 > f(0,...,0,t) ≡ t1 > ... > tk. (For convenience, we include here the case where Π1 is empty.) We prove this result by induction on k. If k = 1, this is straightforward. Assume now k ≥ 2. Let ti > tj be such that i, j ≤ k and let ∨_{j∈J} cj be one of its solved forms. Some problem cj0 is a subformula of I. Let u > v be an inequation in cj0. If u ≢ ti, then σ is a solution of u > v, because this inequation must either occur in I′ or be an inequation t_{i′} > t_{j′} with i′ > i; in the latter case, we just use the induction hypothesis. Assume now that u ≡ ti and v is a variable. If u ∉ N(X), then, from Lemma 6, uσ >lpo vσ. Actually, u ∉ N(X) is the only possible case. For, assume u ∈ N(X). From Lemma 17, tj is either a subterm of ti (and in this case, the solved form of ti > tj would be ⊤) or tj ≡ f(u1,...,un). In the latter case,

f(0,...,0,v) > f(u1,...,un) ⇒ v > f(u1,...,un) ∨ (u1 = 0 ∧ ... ∧ u_{n−1} = 0 ∧ v > un) ∨ c1 ∨ ... ∨ cm.

But, in each formula on the right, ti has been decomposed: it is not possible to find it again in any solved form. This contradicts the assumption u ≡ ti. This shows that σ is a solution of ti > tj when i, j ≤ k. Assume now that i ≤ k − 1 and j > k. Then tiσ >lpo tkσ and tkσ >lpo tjσ, which proves again tiσ >lpo tjσ. Thus, in any case, σ is a solution of I. □

Lemma 19. The system of rules given in Figs. 4 and 5 terminates. Moreover, any irreducible simple system is either a natural simple system or a ground system.

Proof. The termination is straightforward: the number of terms in a simple system strictly decreases by application of any rule but S2, and S2 may be applied at most once. We only have to show that the rules cover all possible cases. Assume that I ≡ t1 > ... > tn contains a variable and a term t ∉ N(X). For each rule Si, assuming I irreducible w.r.t. the Sj, j < i, we show below which additional properties I has if it is moreover assumed to be irreducible w.r.t. Si:

S1, S2: I ≡ Π1 > 0.
S3, S4: I ≡ Π1 > t > x1 > Π2 > 0 and x1 occurs only once.
S5: t ∈ ST∀(X).
S6: t ∈ N(X).
S7: t ∈ N(X) − C, i.e. t ≡ f(0,...,0,u) with u ∈ N(X), or else t ∈ C and Π1 is empty.
S8: every term in Π2 belongs to N(X).
S9: now all conditions for applying S9 are fulfilled: there is a contradiction. □

Bringing all the results of this section together, it is now possible to state:

Theorem 1. The existence of a solution to a simple system is decidable.

Together with the results of Sections 2 and 3, we get:

Theorem 2. The existence of a solution to an inequational problem is decidable.

5. Further Remarks

• Our technique for deciding the existence of a solution is actually constructive: it is possible to extract an actual solution from the satisfiability proof. However, our method does not provide a "compact" representation of the set of all solutions. More precisely, assuming that some variables of an inequational problem are existentially quantified, our method does not provide an equivalent quantifier-free formula. That is the reason why the technique presented here cannot be lifted in an obvious way to deciding the first-order theory of a lexicographic path ordering.

• Our algorithm has a high complexity. This is not surprising, since the problem can actually be shown to be NP-hard.

• Theorem 2 and the technique we presented in this paper can be extended to solving inequations over arbitrary ordinal notations (J.-P. Jouannaud and M. Okada, private communication).

• The decidability of the first-order theory of the lexicographic path ordering with a total precedence is still an open question. However, R. Treinen has shown that, as soon as two function symbols are uncomparable w.r.t. >F, the first-order theory of >lpo is undecidable (Ref. 13). In this latter case, the decidability of the existential fragment of the theory is still open. (Our technique cannot be generalized in a straightforward way.)

• As mentioned in the introduction, Theorem 2 can be used for defining an extension of >lpo to non-ground terms which is more powerful than the usual extension (Ref. 7), where variables are considered as new (unrelated) function symbols. Let s ≫lpo t iff, for every ground substitution σ, sσ >lpo tσ. ≫lpo is decidable, because the unsatisfiability of s ≤ t is decidable (Theorem 2). ≫lpo is a simplification ordering. Of course, it contains the usual extension >lpo. This inclusion may be strict, as shown by the following two examples.

Example 9. f >F 1 >F 0. f(x,1) and f(0,0) are uncomparable w.r.t. >lpo. However, f(x,1) ≫lpo f(0,0), because any ground term is greater than or equal to 0.

Example 10. h >F g >F s >F 0. u ≡ g(h(s(x),x), h(y,y)) and t ≡ h(s(x),y) are uncomparable w.r.t. >lpo. On the other hand, u ≫lpo t. Indeed, u ≤ t ⇒R y > x ∧ s(x) > y, which has no solution.

Acknowledgments

I acknowledge J.-P. Jouannaud, M. Rusinowitch and R. Treinen for comments and discussions on an early version of this paper. I also thank an anonymous referee for his careful reading of the paper and a number of relevant comments.

References
1. H. Comon and P. Lescanne, "Equational problems and disunification", J. Symbolic Comput. 7 (1989) 371-425.
2. M. J. Maher, "Complete axiomatizations of the algebras of finite, rational and infinite trees", in Proc. 3rd IEEE Symp. Logic in Computer Science, Edinburgh, July 1988, pp. 348-357.
3. A. I. Mal'cev, "Axiomatizable classes of locally free algebras of various types", in The Metamathematics of Algebraic Systems. Collected Papers, 1936-1967 (North-Holland, 1971) pp. 262-289.
4. H. Comon, "Disunification: a survey", in Computational Logic: Essays in Honor of Alan Robinson, eds. J.-L. Lassez and G. Plotkin (MIT Press, 1991) to appear.
5. K. N. Venkataraman, "Decidability of the purely existential fragment of the theory of term algebras", JACM 34, 2 (1987) 492-510.
6. J. Hsiang and M. Rusinowitch, "On word problems in equational theories", in Proc. 14th ICALP, Karlsruhe, LNCS 267 (Springer-Verlag, July 1987).
7. N. Dershowitz, "Termination of rewriting", J. Symbolic Comput. 3, 1 (1987) 69-115.
8. L. Bachmair, N. Dershowitz and J. Hsiang, "Orderings for equational proofs", in Proc. 1st IEEE Symp. Logic in Computer Science, Cambridge, Mass., June 1986, pp. 346-357.
9. N. Dershowitz and J.-P. Jouannaud, "Rewrite systems", in Handbook of Theoretical Computer Science, volume B, ed. J. van Leeuwen (North-Holland, 1990).

10. M. Rusinowitch, "Bounded deduction and application to completion procedures", unpublished draft (1989).
11. N. Dershowitz and Z. Manna, "Proving termination with multiset orderings", Commun. ACM 22, 8 (1979) 465-476.
12. N. Dershowitz and M. Okada, "Proof-theoretic techniques for term rewriting", in Proc. 3rd IEEE Symp. Logic in Computer Science, Edinburgh, June 1988.
13. R. Treinen, "A new method for undecidability proofs of first order theories", Tech. Report A-09/90, Universität des Saarlandes, Saarbrücken, May 1990.