Compositionality properties of SLD-derivations

Marco Comini ^a and Maria Chiara Meo ^b

^a Dipartimento di Informatica, Università di Pisa, Corso Italia 40, 56125 Pisa, Italy.
^b Dipartimento di Matematica Pura ed Applicata, Università di L'Aquila, via Vetoio, località Coppito, 67010 L'Aquila, Italy.
Abstract

The paper introduces a semantics for definite logic programs expressed in terms of SLD-derivations and studies various properties of SLD-derivations by using this semantics. The semantics of a program is a goal-independent denotation, which can equivalently be specified by a denotational semantics and by a transition system. The denotation is proved to be correct, minimal, AND-compositional and OR-compositional. The denotational semantics and the transition system are defined in terms of a set of primitive semantic operators, whose properties are directly related to the properties of the denotation. The SLD-derivation semantics has been designed to act as the collecting semantics for a framework of abstract semantics [3,4].

Key words: logic programming, SLD-derivations, semantics, compositionality.
1 Introduction
Lack of compositionality of conventional logic programming semantics has been a serious limitation, since by their very nature PROLOG program fragments are written to be used in an extensible, modular fashion. It has often been noted, in particular, that traditional bottom-up or top-down semantics fail to be sufficiently operational, identify too many computationally distinct programs, and are blind to many interesting observables. The paper introduces a semantics for definite logic programs expressed in terms of SLD-derivations and studies various properties of SLD-derivations by using this semantics. The semantics is defined according to the approach in [2], which was already used for some abstractions of SLD-derivations, such as
computed answers [9], call patterns and partial answers [10] and resultants [11]. The basic idea underlying the approach is the goal-independent program denotation, which can equivalently be specified by top-down and bottom-up constructions. The top-down definition is the set of SLD-derivations for pure atomic goals, while the bottom-up definition is the least fixpoint of a suitable T_P operator. The denotation is proved to be correct and fully abstract w.r.t. the observational equivalence induced on programs by SLD-derivations. Moreover, it is proved to enjoy two important compositionality properties, i.e., AND-compositionality and OR-compositionality. AND-compositionality means that the SLD-derivations of any goal can be reconstructed from the goal-independent denotation. OR-compositionality means that the denotation of P1 ∪ P2 can be reconstructed from the denotations of P1 and P2. The above results simply extend to SLD-derivations similar results obtained for other (more abstract) observables.

The main novelty of this paper is the semantics definition methodology and the structure of the resulting semantics. We start by defining a denotational semantics on domains consisting of sets of SLD-derivations. It is a rather standard denotational definition with two peculiarities. First, it deals with low-level operational details, while the usual denotational semantics operate on the domain of computed answers and are therefore much more abstract. Moreover, the typical compositional style of denotational semantics allows us to identify a small set of primitive semantic operators, which are the semantic counterpart of the syntactic operators of the language. The same primitive semantic operators are then used to define the operational semantics, by means of a transition system. The proofs of all the main theorems, namely

• the equivalence between the denotational and the operational semantics,
• the equivalence between the bottom-up and the top-down (goal-independent) denotations,
• the correctness and minimality of the denotation,
• the AND-compositionality and OR-compositionality of the denotation,

heavily rely on some lemmata, which express properties of the primitive semantic operators. This is even more important because the SLD-derivation semantics has been conceived as the collecting semantics for a hierarchy of semantics [3], systematically derived by using abstract interpretation theory [6]. Since abstraction is essentially abstraction of the primitive semantic operators, the abstract semantics will inherit all those properties of the collecting semantics for which the suitable lemmata on the semantic operators hold. This provides the basis for the definition of a taxonomy of abstractions [3,4].

It is worth noting that the SLD-derivation semantics is the most natural choice for a collecting semantics. It is essentially a trace semantics and it contains all the relevant information of SLD-trees. A more abstract semantics, such as the resultant semantics, would not allow one to derive properties such as proof trees (used in the Heyting semantics of [15,14]) or derivation lengths.

The paper is organized as follows. Section 2 contains background definitions and terminology. Section 3 defines the semantic domain. Section 4 introduces the denotational semantics and the primitive semantic operators. Section 5 defines the transition system. Section 6 defines the goal-independent denotations. Finally, Section 7 contains the main equivalence and compositionality theorems.
2 Preliminaries
In the following sections, we assume familiarity with the standard notions of logic programming as introduced in [1] and [17]. Throughout the paper we assume programs and goals being defined on a first order language given by a signature Σ consisting of a finite set F of function symbols, a finite set Π of predicate symbols and a denumerable set V of variable symbols. T denotes the set of terms built on F and V. A substitution is a mapping ϑ : V → T such that the set dom(ϑ) := {x | ϑ(x) ≠ x} (domain of ϑ) is finite. ε is the empty substitution. range(ϑ) denotes the range of ϑ, i.e., the set {y | x ≠ ϑ(x), y ∈ var(ϑ(x))}. If ϑ is a substitution and E is a syntactic expression, ϑ|E is the restriction of ϑ to the variables var(E) of E. The composition ϑσ of the substitutions ϑ and σ is defined as the functional composition. A substitution ϑ is called idempotent if ϑϑ = ϑ or, equivalently, if dom(ϑ) ∩ range(ϑ) = ∅. A renaming is a (non idempotent) substitution ρ for which there exists the inverse ρ⁻¹ such that ρρ⁻¹ = ρ⁻¹ρ = ε. The preordering ≤ (more general than) on substitutions is such that ϑ ≤ σ if and only if there exists ϑ′ such that ϑϑ′ = σ. The result of the application of a substitution ϑ to a term t is an instance of t and is denoted by tϑ. We define t ≤ t′ (t is more general than t′) if and only if there exists ϑ such that tϑ = t′. The relation ≤ is a preorder (called subsumption) and by ≡ we denote the associated equivalence relation (variance). A substitution ϑ is a unifier of terms t and t′ if tϑ = t′ϑ (where = denotes syntactic equality). If two terms are unifiable then they have an idempotent most general unifier which is unique up to renaming. Therefore mgu(t1, t2) denotes such an idempotent most general unifier of t1 and t2. All the above definitions can be extended to other syntactic expressions in the obvious way. We restrict our attention to idempotent substitutions, unless explicitly stated otherwise. The set of all idempotent substitutions is denoted by Subst.
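The definitions above can be made concrete with a small, purely illustrative Python sketch (ours, not part of the paper's formal development; all function names are our own). It represents terms, applies and composes substitutions, and computes an idempotent most general unifier with a standard Robinson-style algorithm.

# Terms: variables are non-empty strings starting with an upper-case letter,
# compound terms are tuples (functor, arg1, ..., argn), constants are anything else.

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def apply_subst(t, s):
    """Apply substitution s (a dict var -> term) to term t."""
    if is_var(t):
        return apply_subst(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_subst(a, s) for a in t[1:])
    return t  # constants are left unchanged

def compose(theta, sigma):
    """Functional composition theta-sigma: x |-> (x theta) sigma."""
    out = {x: apply_subst(t, sigma) for x, t in theta.items()}
    out.update({x: t for x, t in sigma.items() if x not in theta})
    return {x: t for x, t in out.items() if t != x}  # drop identity bindings

def occurs(x, t):
    return x == t if is_var(t) else \
        isinstance(t, tuple) and any(occurs(x, a) for a in t[1:])

def mgu(t1, t2, s=None):
    """Robinson-style unification; returns an idempotent mgu or None."""
    s = {} if s is None else s
    t1, t2 = apply_subst(t1, s), apply_subst(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return None if occurs(t1, t2) else compose(s, {t1: t2})
    if is_var(t2):
        return mgu(t2, t1, s)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            s = mgu(a, b, s)
            if s is None:
                return None
        return s
    return None

# Example: mgu(p(X, f(Y)), p(a, f(X))) is {X/a, Y/a}, which is idempotent.
print(mgu(('p', 'X', ('f', 'Y')), ('p', 'a', ('f', 'X'))))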
An atom is an object of the form p(t1, . . . , tn) where p ∈ Π, t1, . . . , tn ∈ T. A goal is a sequence of atoms A1, . . . , Am. The empty goal is denoted by □. The set of all atoms is denoted by Atoms and the set of all goals is denoted by Goals. We denote by G and B possibly empty sequences of atoms, and by t, x tuples of, respectively, terms and distinct variables. Moreover we denote by t both the tuple and the set of corresponding syntactic objects. B, B′ denotes the concatenation of B and B′. An atomic goal is called pure if it is of the form p(x).

A (definite) clause is a formula of the form H ← A1, . . . , An with n ≥ 0, where H (the head) and A1, . . . , An (the body) are atoms. "←" and "," denote logical implication and conjunction respectively, and all variables are universally quantified. If the body is empty the clause is called a unit clause. A program is a set of (definite) clauses. Given a goal G and a program P, the formula G in P (or P ∪ {G}) is a query.

Definite clauses have a natural computational reading based on the resolution procedure. The specific resolution strategy called SLD can be described as follows. Let G := A1, . . . , Ak be a goal and c := H ← B be a (definite) clause. G′ is derived from G and c by using ϑ if and only if there exists an atom Am, 1 ≤ m ≤ k, such that ϑ = mgu(Am, H) and G′ = (A1, . . . , Am−1, B, Am+1, . . . , Ak)ϑ. An SLD-derivation (or simply a derivation) of the query G in P consists of a (possibly infinite) sequence of goals G0, G1, G2, . . . called resolvents, together with a sequence c1, c2, . . . of variants of clauses in P which are renamed apart (i.e., such that ci does not share any variable with G0, c1, . . . , ci−1) and a sequence ϑ1, ϑ2, . . . of idempotent mgus such that G0 = G and, for i ≥ 1, each Gi is derived from Gi−1 and ci by using ϑi. An SLD-refutation of G in P is a finite SLD-derivation of G in P which has the empty goal □ as the last goal in the derivation. An SLD-tree of G in P is the prefix tree of all SLD-derivations of G in P.

A selection rule R is a function which, when applied to a "history" containing the goal, all the clauses and the mgus used in the derivation G0, G1, . . . , Gi, returns an atom in Gi. Such an atom is the selected atom in Gi. In the following, for the sake of simplicity, we consider the PROLOG leftmost selection rule. All our results can be generalized to skeleton rules [11].

In the following G −→^{ϑ1}_{c1} · · · −→^{ϑn}_{cn} Gn (n ≥ 0) denotes a (partial and finite) SLD-derivation of the goal G via the leftmost selection rule. The derivation uses the renamed apart clauses c1, . . . , cn and ϑ := (ϑ1 · · · ϑn)|G is the computed answer substitution of G. We also denote by G −→∗^{ϑ}_{P} B a finite SLD-derivation of G in P via the leftmost selection rule, where ϑ is the computed answer substitution and B is the last resolvent.
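As a small worked example (ours, added purely for illustration), consider the program P = {p(f(X)) ← q(X), q(a)} and the pure atomic goal p(Y). With the leftmost selection rule there is exactly one SLD-refutation,

p(Y) −→^{ϑ1}_{c1} q(X1) −→^{ϑ2}_{c2} □,

where c1 = p(f(X1)) ← q(X1) and c2 = q(a) are renamed apart variants of the clauses of P, ϑ1 = mgu(p(Y), p(f(X1))) = {Y/f(X1)} and ϑ2 = mgu(q(X1), q(a)) = {X1/a}. The computed answer substitution is ϑ = (ϑ1ϑ2)|p(Y) = {Y/f(a)}, the derivation has length 2 and every prefix of it is itself an SLD-derivation of p(Y) in P.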
Given a derivation d, first(d) and last(d) (if d is finite) are respectively the first and the last goal of d. length(d) denotes the length of the derivation and clauses(d) denotes the sequence of clauses of d. Moreover prefix(d) is the set of all derivations which are prefixes of d. By an abuse of notation, we denote a zero-length derivation of G by G itself.

In the paper we use standard results on the ordinal powers ↑n of continuous functions on complete lattices. Namely, given any monotonic operator T on (C, ≤), T↑ω := Σ{T↑n}_{n≥0}.

For any n > 0 the following facts hold.

Σ{D′ | D ↦−→^n_P D′} =                                      [ by definition of ↦−→^n_P ]
Σ{D′ | D ↦−→^{n−1}_P D″, D″ ↦−→^1_P D′} =                    [ by set theory ]
Σ{ Σ{D′ | D″ ↦−→^1_P D′} | D ↦−→^{n−1}_P D″ } =              [ by (14) ]
Σ{D″ | D ↦−→^{n−1}_P D″} ⋈ su(tree(P)) =                     [ by inductive hypothesis ]
(D ⋈ su^{n−1}(tree(P))) ⋈ su(tree(P)) =                      [ by Point 2 of Lemma 6 ]
D ⋈ su^n(tree(P)).
Finally

B[[G in P]] =                                                [ by definition ]
Σ{D | φ_G ↦−→∗_P D} =                                        [ by definition of ↦−→∗_P ]
Σ{ Σ{D | φ_G ↦−→^n_P D} }_{n≥0} =                            [ by previous result ]
Σ{ φ_G ⋈ su^n(tree(P)) }_{n≥0} =                             [ by Lemma 7 ]
φ_G ⋈ Σ{su^n(tree(P))}_{n≥0}.
Point 2 The following facts hold. O [[P ]] = [ by definition ] X
{B [[p(x) in P ]]
. ≡C
}p(x)∈Goals =
[ since ≡C is a congruence w.r.t. X
{B [[p(x) in P ]]}p(x)∈Goals
. ≡C
X
]
=
[ by previous Point 1 ] X
{φp(x) 1
X
{su n (tree(P ))}n≥0 }p(x)∈Goals
. ≡C
=
[ Lemma 7 ] X
{φp(x) }p(x)∈Goals 1
X
{su n (tree(P ))}n≥0
. ≡C
=
[ by definition of Id I ]
Id I 1
X
{su n (tree(P ))}n≥0
. ≡C
.
2
The following (technical) corollary follows by Lemma 8, Lemma 6 and a straightforward inductive argument.

Corollary 10 Let A be an atom, G be a goal, D ∈ PC and D′, D″ ∈ C. Then, for any n ≥ 0,

(1) A · (D′ ⋈ su^n(D)) = (A · D′) ⋈ su^n(D).
(2) (D′ ⋈ su^n(D)) × φ_G ⊑ (D′ × φ_G) ⋈ su^n(D).
(3) If D′ ⋈ su^n(D) = D′ then (D′ × D″) ⋈ su^n(D) = D′ × (D″ ⋈ su^n(D)).

Essentially because of Corollary 10 we can always reconstruct an SLD-tree for a generic (non-pure and non-atomic) goal from the SLD-trees of pure atoms.

Theorem 11 Let A be an atom, G1, G2 be goals and P be a program. Then

(1) B[[A in P]] = A · O[[P]].
(2) B[[(G1, G2) in P]] = B[[G1 in P]] × B[[G2 in P]].
PROOF. We prove the points separately.
Point 1 The following equalities hold.

B[[A in P]] =                                      [ by Point 1 of Corollary 9 ]
φ_A ⋈ Σ{su^n(tree(P))}_{n≥0} =                     [ by definition of · and Id_I ]
(A · Id_I) ⋈ Σ{su^n(tree(P))}_{n≥0} =              [ by Corollary 10 and Lemma 7 ]
A · (Id_I ⋈ Σ{su^n(tree(P))}_{n≥0}).

Finally, since, by (6), ≡_C is a congruence w.r.t. ·, by Corollary 9 we have that A · (Id_I ⋈ Σ{su^n(tree(P))}_{n≥0}) = A · O[[P]].
Point 2 First of all note that, for any goal G and any n ≥ 0,

B[[G in P]] ⋈ su^n(tree(P)) =                            [ by Corollary 9 ]
(φ_G ⋈ Σ{su^k(tree(P))}_{k≥0}) ⋈ su^n(tree(P)) =         [ by Lemmata 6 and 7 ]
φ_G ⋈ Σ{su^k(tree(P)) ⋈ su^n(tree(P))}_{k≥0} =           [ by (13) ]
φ_G ⋈ Σ{su^k(tree(P))}_{k≥0} =                           [ by Corollary 9 ]
B[[G in P]].

Now we prove the two inclusions of the thesis separately.
⊑ In this case the following facts hold.

B[[(G1, G2) in P]] =                               [ by Point 1 of Corollary 9 and by definition of × ]
(φ_{G1} × φ_{G2}) ⋈ Σ{su^n(tree(P))}_{n≥0} ⊑       [ since × and ⋈ are monotonic and φ_{G1} ⊑ B[[G1 in P]] ]
(B[[G1 in P]] × φ_{G2}) ⋈ Σ{su^n(tree(P))}_{n≥0} = [ by prev. observ., by Point 3 of Corollary 10 and by Lemma 7 ]
B[[G1 in P]] × (φ_{G2} ⋈ Σ{su^n(tree(P))}_{n≥0}) = [ by Corollary 9 ]
B[[G1 in P]] × B[[G2 in P]].
⊒ The following facts hold.

B[[G1 in P]] × B[[G2 in P]] =                                            [ by repeating previous steps ]
(B[[G1 in P]] × φ_{G2}) ⋈ Σ{su^n(tree(P))}_{n≥0} =                       [ by Corollary 9 ]
((φ_{G1} ⋈ Σ{su^k(tree(P))}_{k≥0}) × φ_{G2}) ⋈ Σ{su^n(tree(P))}_{n≥0} ⊑  [ by Point 2 of Corollary 10 and by Lemma 7 ]
((φ_{G1} × φ_{G2}) ⋈ Σ{su^k(tree(P))}_{k≥0}) ⋈ Σ{su^n(tree(P))}_{n≥0} ⊑  [ by definition of × and by Point 1 of Corollary 9 ]
B[[(G1, G2) in P]] ⋈ Σ{su^n(tree(P))}_{n≥0} =                            [ by Lemma 7 and by previous observation ]
B[[(G1, G2) in P]].  □

From Theorem 11 we can immediately derive that, for any atom A, goal G and program P,

B[[□ in P]] = φ_□,                                       (15)
B[[(A, G) in P]] = (A · O[[P]]) × B[[G in P]].            (16)
The above closure property of O w.r.t. B allows us to show that the denotation O is correct and minimal w.r.t. ≈.

Corollary 12 Let P1, P2 be two programs. Then P1 ≈ P2 ⟺ O[[P1]] = O[[P2]].

PROOF. The proof of the implication ⟹ is straightforward by definition of ≈ and of O. The proof of the other implication is by contradiction. Assume that P1 ≉ P2 and O[[P1]] = O[[P2]]. By definition of ≈, there exists G ∈ Goals such that B[[G in P1]] ≠ B[[G in P2]]. Now the proof is by structural induction on G.

G = □ Contradictory, since by (15), B[[□ in P1]] = φ_□ = B[[□ in P2]].

G = (A, G′) By (16) two cases arise. If A · O[[P1]] ≠ A · O[[P2]], then O[[P1]] ≠ O[[P2]] and this contradicts the hypothesis. Otherwise B[[G′ in P1]] ≠ B[[G′ in P2]] and then, by inductive hypothesis, we have a contradiction.  □

Using the replacement operator we can define a semantic operator ⊎ which computes the OR-composition of two denotations. Namely, given D1, D2 ∈ PC, D1 ⊎ D2 := [D1 + D2]∗, where [D]∗ is the least solution of the equation [D]∗ = Id_I + ([D]∗ ⋈ su(D)). Theorem 15 shows the OR-compositionality property of O[[P]], i.e., the compositionality w.r.t. the ∪ operator. First we need the following (technical) lemma.

Lemma 13 Let D, D′ ∈ PC. Then su(D ⋈ su(D′)) ⊑ su(D) ⋈ su(D′).
PROOF. The following facts hold.

su(D ⋈ su(D′)) =                                              [ by definition of su ]
Σ{ (A · (D ⋈ su(D′))) × Id_C }_{A∈Atoms} =                    [ by Point 1 of Lemma 8 and by definition of Id_C ]
Σ{ ((A · D) ⋈ su(D′)) × Σ{φ_G}_{G∈Goals} }_{A∈Atoms} =        [ by Lemma 7 ]
Σ{ Σ{ ((A · D) ⋈ su(D′)) × φ_G }_{G∈Goals} }_{A∈Atoms} ⊑      [ by Point 2 of Lemma 8 ]
Σ{ Σ{ ((A · D) × φ_G) ⋈ su(D′) }_{G∈Goals} }_{A∈Atoms} =      [ by Lemma 7 ]
Σ{ (A · D) × Σ{φ_G}_{G∈Goals} }_{A∈Atoms} ⋈ su(D′) =          [ by definition of Id_C and of su ]
su(D) ⋈ su(D′).  □

Corollary 14 Let D, D′ ∈ PC. Then, for any k ≥ 0, su(D ⋈ su^k(D′)) ⊑ su(D) ⋈ su^k(D′).
PROOF. If k = 0, by definition of ⋈ and since su^0(D′) = φ, su(D ⋈ φ) = su(D) = su(D) ⋈ φ. Otherwise the proof is by induction on k > 0. First of all note that, by definition of su^k, su(D ⋈ su^k(D′)) = su(D ⋈ (su^{k−1}(D′) ⋈ su(D′))) and, for k > 1, su(D′) ⊑ su^{k−1}(D′). Then, by Lemma 6, D ⋈ su^k(D′) = (D ⋈ su^{k−1}(D′)) ⋈ su(D′). This result trivially holds also for k = 1. To conclude,

su(D ⋈ su^k(D′)) =                                [ by previous result ]
su((D ⋈ su^{k−1}(D′)) ⋈ su(D′)) ⊑                 [ by Lemma 13 ]
su(D ⋈ su^{k−1}(D′)) ⋈ su(D′) ⊑                   [ by inductive hypothesis ]
(su(D) ⋈ su^{k−1}(D′)) ⋈ su(D′) =                 [ by previous result ]
su(D) ⋈ su^k(D′).  □
Theorem 15 Let P1, P2 be programs. Then O[[P1 ∪ P2]] = O[[P1]] ⊎ O[[P2]].
PROOF. By definition of ⊎, O[[P1]] ⊎ O[[P2]] = [O[[P1]] + O[[P2]]]∗. Since PC is a complete lattice and by the definition of [·]∗, [D]∗ = lfp(H_D) = H_D↑ω = Σ{H_D↑n}_{n≥0}, where H_D : PC −→ PC is the continuous function H_D(D′) = Id_I + (D′ ⋈ su(D)). First of all, we prove by induction that, for any n ≥ 1, H_D↑n = Id_I ⋈ su^{n−1}(D). Then, by Lemma 7, [D]∗ = Id_I ⋈ Σ{su^n(D)}_{n≥0}.

n = 1  H_D↑1 = H_D(φ) = Id_I + (φ ⋈ su(D)) = Id_I = Id_I ⋈ su^0(D).

n > 1  The following hold.

H_D↑n =                                              [ by definition of ↑n ]
H_D(H_D↑(n−1)) =                                     [ by inductive hypothesis ]
H_D(Id_I ⋈ su^{n−2}(D)) =                            [ by definition of H_D ]
Id_I + ((Id_I ⋈ su^{n−2}(D)) ⋈ su(D)) =              [ by Lemma 6 ]
Id_I + (Id_I ⋈ su^{n−1}(D)) =                        [ since ⋈ is extensive ]
Id_I ⋈ su^{n−1}(D).
Now, to prove the thesis, we have to prove that Σ{su^n(tree(P1 ∪ P2))}_{n≥0} = Σ{su^n(O[[P1]] + O[[P2]])}_{n≥0}. We prove the two inclusions separately.

⊑ First of all observe that, since (for any program P) tree(P) is a pure collection, tree(P) ⊑ Id_I ⋈ su(tree(P)) ⊑ O[[P]]. Then, by definition of tree, tree(P1 ∪ P2) = tree(P1) + tree(P2) ⊑ O[[P1]] + O[[P2]] and therefore, since · and × are monotonic, su(tree(P1 ∪ P2)) ⊑ su(O[[P1]] + O[[P2]]). Then, since ⋈ is also monotonic, for any n ≥ 0, su^n(tree(P1 ∪ P2)) ⊑ su^n(O[[P1]] + O[[P2]]).

⊒ We prove (by induction on h) that, for any derivation d, if there exists h ≥ 0 such that d ∈ su^h(O[[P1]] + O[[P2]])(G) then there exists k ≥ 0 such that d ∈ su^k(tree(P1 ∪ P2))(G). If h = 0 simply choose k = 0. Otherwise let h > 0 and observe that, by definition of su^h and by Lemma 6,

su^h(O[[P1]] + O[[P2]]) = su^{h−1}(O[[P1]] + O[[P2]]) ⋈ su(O[[P1]] + O[[P2]]).     (17)

We have two possibilities. If d ∈ su^{h−1}(O[[P1]] + O[[P2]])(G) then, by inductive hypothesis, there exists k ≥ 0 such that d ∈ su^k(tree(P1 ∪ P2))(G) and then the thesis follows. Otherwise, by definition of ⋈, by (17) and since su(O[[P1]] + O[[P2]]) is closed under renaming, d = d1 :: d2, where

d1 ∈ su^{h−1}(O[[P1]] + O[[P2]])(G),  last(d1) = B  and  d2 ∈ (su(O[[P1]] + O[[P2]]))(B).     (18)

By inductive hypothesis, there exists m ≥ 0 such that

d1 ∈ su^m(tree(P1 ∪ P2))(G).     (19)

Now note that, by Lemma 7, su(O[[P1]] + O[[P2]]) = su(O[[P1]]) + su(O[[P2]]) and therefore, by (18), d2 ∈ (su(O[[P1]]) + su(O[[P2]]))(B). Now assume, without loss of generality, that d2 ∈ su(O[[P1]])(B). Then, by Point 2 of Corollary 9, by Lemma 7 and since tree(P1) ⊑ tree(P1 ∪ P2), there exists l ≥ 0 such that

d2 ∈ su(Id_I ⋈ su^l(tree(P1)))(B) ⊆ su(Id_I ⋈ su^l(tree(P1 ∪ P2)))(B).     (20)

Moreover, by Corollary 14 and since su(Id_I) ⊑ Id_C, su(Id_I ⋈ su^l(tree(P1 ∪ P2))) ⊑ su(Id_I) ⋈ su^l(tree(P1 ∪ P2)) ⊑ Id_C ⋈ su^l(tree(P1 ∪ P2)). By the previous result and by (20), d2 ∈ (Id_C ⋈ su^l(tree(P1 ∪ P2)))(B) and therefore, by (19) and since d = d1 :: d2, d ∈ (su^m(tree(P1 ∪ P2)) ⋈ (Id_C ⋈ su^l(tree(P1 ∪ P2))))(G). Finally note that, by definition of ⋈, for any D ∈ C, Id_C ⋈ D = Id_C + D and D ⋈ Id_C = D. Then, by (13) and since ⋈ is additive and extensive, d ∈ (su^m(tree(P1 ∪ P2)) ⋈ su^l(tree(P1 ∪ P2)))(G) = su^{m+l}(tree(P1 ∪ P2))(G).  □

In Theorem 21 we will prove that the top-down and the bottom-up denotations are indeed equivalent, which implies (by Theorem 11) the equivalence between the denotational and the operational semantics. In the following, to simplify the notation, given a pure collection D, we denote by pu(D), pu_n(D) and pu^n(D) respectively the collections

pu(D) := Σ{ G[[G]]_D }_{G∈Goals},                               (21)
pu_n(D) := pu(D) ⋈ · · · ⋈ pu(D)   (n times),                    (22)
pu^n(D) := pu(D ⋈ pu(D ⋈ pu(· · ·)))   (n times),                (23)

where pu_1(D) := pu^1(D) := pu(D) and we assume that pu_0(D) := pu^0(D) := φ. Note that pu(D) can be viewed as the parallel unfolding of the pure collection D and (analogously to su(D)) it is closed under renaming and under instantiation, since we consider all the possible evaluations of D.

It is interesting to note that the operators su and pu enjoy some closure properties. Namely, given a pure collection D, the following properties hold.

• If d is a renamed version of an element d′ ∈ su(D)(G), by using a renaming ρ, then d ∈ su(D)(Gρ). The same holds for pu.
• Using Lemma 4, it is easy to check that, for any idempotent substitution γ, goal G and derivation d such that ∂_γ(d) is defined, d ∈ su(D)(G) implies ∂_γ(d) ∈ su(D)(Gγ). Moreover, if d ∈ su(D)(Gγ) and var(d) ∩ var(G) ⊆ var(Gγ), there exists a derivation d′ ∈ su(D)(G) such that clauses(d′) = clauses(d) and d = ∂_γ(d′). The same holds for pu.
Using the definition of pu(D) we can replace the definition (5) of C by the equation C[[c]]_I = tree(c) ⋈ pu(I), and it is easy to check that

P[[P]]_I = Id_I + (tree(P) ⋈ pu(I)).                            (24)

The proof of the equivalence between the denotational and the operational semantics is mainly achieved by proving that the parallel unfolding can be simulated by the sequential one. Corollary 17 proves a form of associativity of the parallel unfolding which reverses from bottom-up to top-down. Lemma 18 states that a step of sequential unfolding can be safely replaced by a step of parallel unfolding and that the parallel unfolding of a (finite) goal can be simulated by (a finite number of steps of) the sequential unfolding.

Lemma 16 Let D, D′ ∈ PC and h ≥ 0. Then
(1) pu(D) ⋈ pu(D′) ⊑ pu(D ⋈ pu(D′)).
(2) pu^h(D) ⊑ Σ{pu_k(D)}_{k≥0}.

Corollary 17 Let D ∈ PC. Then Σ{pu^h(D)}_{h≥0} = Σ{pu_k(D)}_{k≥0}.
PROOF. The inclusion ⊑ is straightforward by Lemma 16. For the other inclusion we prove (by induction on n) that, for any n ≥ 0, pu_n(D) ⊑ pu^n(D). If n = 0, by definition, pu_0(D) = φ = pu^0(D). Otherwise the following holds.

pu_n(D) =                                            [ by (22) and by (13) ]
pu(D) ⋈ pu_{n−1}(D) ⊑                                [ by ind. hypothesis and since ⋈ is monotonic ]
pu(D) ⋈ pu^{n−1}(D) ⊑                                [ by Lemma 16 ]
pu(D ⋈ pu^{n−1}(D)) =                                [ by (23) ]
pu^n(D).  □
Lemma 18 Let D ∈ PC. Then
(1) su(D) ⊑ pu(D + Id_I).
(2) pu(D) ⊑ Id_C + Σ{su^n(D)}_{n≥0}.

Corollary 19 Let D ∈ PC. Then Σ{pu^h(Id_I + D)}_{h≥0} = Id_C + Σ{su^k(D)}_{k≥0}.
PROOF. The inclusion ⊒ is straightforward by Corollary 17 and by Point 1 of Lemma 18. For the other inclusion we prove (by induction on h) that, for any h ≥ 0, pu_h(Id_I + D) ⊑ Id_C + Σ{su^k(D)}_{k≥0}. Then the thesis follows by Corollary 17.

h = 0  Straightforward, since pu_0(Id_I + D) = φ.

h = 1  Straightforward by Point 2 of Lemma 18.

h > 1  Let G be a goal and d ∈ pu_h(Id_I + D)(G). By (22) and by Point 2 of Lemma 6, d ∈ (pu_{h−1}(Id_I + D) ⋈ pu(Id_I + D))(G). If d ∈ pu_{h−1}(Id_I + D)(G) then the thesis follows by inductive hypothesis (and definition of ⊑). Otherwise, by definition of ⋈, we can assume that d = d1 :: d2, where

d1 ∈ pu_{h−1}(Id_I + D)(G),  last(d1) = B ≠ □  and  d2 ∈ pu(Id_I + D)(B).     (25)

By inductive hypothesis, d1 ∈ (Id_C + Σ{su^k(D)}_{k≥0})(G) and then there exists n ≥ 0 such that

d1 ∈ (Id_C + su^n(D))(G).     (26)

Moreover, by Point 2 of Lemma 18 and by (25), d2 ∈ (Id_C + Σ{su^k(Id_I + D)}_{k≥0})(B) and therefore there exists m ≥ 0 such that

d2 ∈ (Id_C + su^m(Id_I + D))(B).     (27)

Now observe that, by Lemma 7 and since Id_C = su(Id_I) + φ_□, su(Id_I + D) = su(Id_I) + su(D) = Id_C + su(D). Then (since ∀D ∈ C. D ⋈ Id_C = D, Id_C ⋈ D = Id_C + D) Id_C + su^m(Id_I + D) = Id_C + Σ{su^k(D)}_{k≤m} + Σ{Id_C + su^k(D)}_{k<m}. Then (since ∀D ∈ C, k ≤ k′. su^k(D) ⊑ su^{k′}(D)) we obtain Id_C + su^m(Id_I + D) = Id_C + su^m(D) and therefore, by (27), d2 ∈ (Id_C + su^m(D))(B). Then the following holds.

d ∈                                                          [ by (25), by definition of ⋈, by (26) and last result ]
((Id_C + su^n(D)) ⋈ (Id_C + su^m(D)))(G) =                   [ by Lemma 7 and since ⋈ is extensive ]
(Id_C + (su^n(D) ⋈ Id_C) + (su^n(D) ⋈ su^m(D)))(G) =         [ since ∀D ∈ C. D ⋈ Id_C = D and since ⋈ is extensive ]
(Id_C + su^n(D) ⋈ su^m(D))(G) =                              [ by (13) ]
(Id_C + su^{n+m}(D))(G) ⊆                                    [ by definition of Σ ]
(Id_C + Σ{su^k(D)}_{k≥0})(G).  □
Note that, by (21), by (8) and by definition of ⋈, + and Id_I, for any D ∈ PC,

Id_I + D = Id_I ⋈ pu(D) = Id_I ⋈ su(D).                        (28)

Corollary 20 For any program P, F[[P]] = (Id_I + tree(P)) ⋈ Σ{pu^n(Id_I + tree(P))}_{n≥0}.
PROOF. First of all note that P[[P]] is continuous and φ is the bottom of C. We will prove (by induction on n) that, for any n > 0, P[[P]]↑n = (Id_I + tree(P)) ⋈ pu^{n−1}(Id_I + tree(P)). Then, by definition of F[[P]], F[[P]] = Σ{P[[P]]↑n}_{n≥0} = Σ{(Id_I + tree(P)) ⋈ pu^n(Id_I + tree(P))}_{n≥0} and then the thesis follows by Lemma 7.

n = 1  By (24), P[[P]]↑1 = Id_I + (tree(P) ⋈ pu(φ)) = Id_I + tree(P) = (Id_I + tree(P)) ⋈ pu^0(Id_I + tree(P)).

n > 1  The following holds.

P[[P]]↑n =                                                           [ since P[[P]]↑(n−1) ⊑ P[[P]]↑n ]
P[[P]]↑n + P[[P]]↑(n−1) =                                            [ by definition of ↑n and (24) ]
(Id_I + (tree(P) ⋈ pu(P[[P]]↑(n−1)))) + P[[P]]↑(n−1) =               [ by (28) ]
(tree(P) ⋈ pu(P[[P]]↑(n−1))) + (Id_I ⋈ pu(P[[P]]↑(n−1))) =           [ by Lemma 7 ]
(Id_I + tree(P)) ⋈ pu(P[[P]]↑(n−1)) =                                [ by inductive hypothesis ]
(Id_I + tree(P)) ⋈ pu((Id_I + tree(P)) ⋈ pu^{n−2}(Id_I + tree(P))) = [ by definition of pu^n ]
(Id_I + tree(P)) ⋈ pu^{n−1}(Id_I + tree(P)).  □

Theorem 21 Let P be a program. Then O[[P]] = F[[P]].
PROOF. By Corollaries 20 and 19, and since D ⋈ Id_C = D,

F[[P]] = (Id_I + tree(P)) ⋈ Σ{pu^n(Id_I + tree(P))}_{n≥0} = (Id_I + tree(P)) ⋈ Σ{su^n(tree(P))}_{n≥0}.     (29)

Now, since tree(P) is a pure collection and by (28), Id_I + tree(P) = Id_I ⋈ su(tree(P)). Finally, by Lemma 6, by (29), by Point 2 of Corollary 9, and by a straightforward inductive argument, F[[P]] = (Id_I ⋈ su(tree(P))) ⋈ Σ{su^n(tree(P))}_{n≥0} = Id_I ⋈ Σ{su^n(tree(P))}_{n≥0} = O[[P]].  □
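The fixpoint constructions used throughout this section (T↑ω = Σ{T↑n}_{n≥0}, [D]∗ = lfp(H_D) = H_D↑ω, F[[P]] = Σ{P[[P]]↑n}_{n≥0}) are all instances of the usual Kleene iteration on a complete lattice. The small Python sketch below is ours and purely illustrative: it works on a finite powerset lattice rather than on the collection domain PC of the paper, and it uses the classical ground immediate-consequence operator T_P rather than P[[P]].

# Kleene iteration: lfp(F) as the limit of F^n(bottom) for a monotone F
# on a finite powerset lattice (bottom = empty set).

from typing import Callable, FrozenSet, Iterable, Tuple

def lfp(f: Callable[[FrozenSet[str]], FrozenSet[str]]) -> FrozenSet[str]:
    """Iterate f from the empty set until the chain becomes stationary."""
    x: FrozenSet[str] = frozenset()
    while True:
        y = f(x)
        if y == x:          # f(x) = x: least fixpoint reached
            return x
        x = y

# A tiny ground program, each clause given as (head, body-atoms).
program: Iterable[Tuple[str, Tuple[str, ...]]] = [
    ("q", ()),             # q.
    ("p", ("q",)),         # p <- q.
    ("r", ("s",)),         # r <- s.   (never derivable: s has no clause)
]

def tp(interpretation: FrozenSet[str]) -> FrozenSet[str]:
    """Immediate-consequence operator: heads whose bodies are all true."""
    return frozenset(h for h, body in program
                     if all(b in interpretation for b in body))

print(lfp(tp))   # the atoms p and q, reached after two iterations

On a finite lattice the ascending chain becomes stationary after finitely many steps, which is why the loop terminates; for the continuous operators used in this paper the limit is instead taken as the lub Σ of the whole chain.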
Now we can show the OR-compositionality of the fixpoint denotation and the equivalence between the denotational and the operational semantics. The following corollary follows immediately from Theorems 21 and 15.

Corollary 22 Let P1, P2 be programs. Then F[[P1 ∪ P2]] = F[[P1]] ⊎ F[[P2]].

Corollary 23 For any goal G and program P, Q[[G in P]] = B[[G in P]].

PROOF. The proof is by structural induction on G.

G = □  By definition of Q, F and G and by (15), Q[[□ in P]] = G[[□]]_{F[[P]]} = φ_□ = B[[□ in P]].

G = (A, G′)  The following equalities hold.

Q[[(A, G′) in P]] =                           [ by definition of Q and of F ]
G[[(A, G′)]]_{F[[P]]} =                       [ by definition of G and Q ]
A[[A]]_{F[[P]]} × Q[[G′ in P]] =              [ by inductive hypothesis ]
A[[A]]_{F[[P]]} × B[[G′ in P]] =              [ by definition of A ]
(A · F[[P]]) × B[[G′ in P]] =                 [ by Theorem 21 and by (16) ]
B[[(A, G′) in P]].  □

8 Conclusions and Future Work
As already mentioned in the introduction, our SLD-derivation semantics was defined as the collecting semantics of a framework for the systematic derivation of more abstract semantics, using the formal tools of abstract interpretation. The abstraction framework is described in [3] and, in more detail, in [4]. Due to the relation between the properties of the primitive semantic operators and the properties of the semantics, we can define a taxonomy of observables (abstractions). Each class in the taxonomy is characterized by a set of properties relating the primitive semantic operators and the Galois insertion which defines the observable. For each class we have

• a methodology to automatically derive the “best” abstract semantics (transition system, denotational semantics or both),
• the validity for the abstract semantics of some of the theorems which hold for the collecting semantics (equivalence between operational and denotational semantics, equivalence between top-down and bottom-up denotations, correctness, minimality, and AND- and OR-compositionality).

The new relevant issue which can be discussed in the abstraction framework is precision, i.e., how good the abstract semantics is w.r.t. the abstraction of the collecting semantics. We therefore have classes of precise observables, where we can reconstruct all the semantics discussed in [2], and classes of approximate observables, where we can reconstruct several domains proposed for program analysis (groundness, types, . . . ). The abstraction framework has also been used as the semantic foundation of abstract diagnosis [5]. Let us finally note that, since our framework is based on standard operational and denotational semantic definitions, it can be adapted to other programming languages (especially extensions of logic programming).
A Technical Proofs
Throughout the appendix we need some technical results about properties of substitutions. Given a set of equations E := {s1 = t1, . . . , sn = tn}, a (most general) unifier of E is a (most general) unifier of (s1, . . . , sn) and (t1, . . . , tn). A unifiable set of equations (terms) has an idempotent mgu. Well known results on idempotent mgus state that, if ϑ is an idempotent mgu of a set of equations E, then ϑ is a relevant unifier of E, i.e., var(ϑ) ⊆ var(E). The lattice structure on idempotent substitutions [8] is isomorphic to the lattice structure on equations introduced in [16]. Therefore we can indifferently use equations or idempotent mgus. The following results show the connections between the two notions that we will use in the following. Given a substitution ϑ := {x1/t1, . . . , xn/tn} we define E(ϑ) := {x1 = t1, . . . , xn = tn}. Observe that, for any substitution ϑ, ϑ = mgu(E(ϑ)).

Lemma 24 [2] Let E1, E2 be sets of equations. There exists β := mgu(E1 ∪ E2) if and only if there exist ϑ := mgu(E1) and γ := mgu(E2ϑ), where β = ϑγ.

Lemma 25 Let d1, d2 be derivations and γ, δ be idempotent substitutions. Then the following holds.
(1) If γδ is idempotent and ∂_{γδ}(d1), ∂_δ(∂_γ(d1)) are defined, then ∂_{γδ}(d1) = ∂_δ(∂_γ(d1)).
(2) If ∂_γ(d1 :: d2) is defined, then either
- length(∂_γ(d1)) < length(d1) and ∂_γ(d1 :: d2) = ∂_γ(d1), or
- length(∂_γ(d1)) = length(d1) and ∂_γ(d1 :: d2) = ∂_γ(d1) :: ∂_{γ′}(d2), where γ′ is an idempotent substitution such that last(∂_γ(d1)) = (last(d1))γ′.

PROOF. The proof of Point 1 is straightforward by definition of ∂. To prove Point 2 observe that if length(∂_γ(d1)) < length(d1) then the proof is straightforward by definition of the ∂ operation. Otherwise, let G := (A, G′), c := H ← B and γ be an idempotent substitution such that var(c) ∩ var(γ) = ∅. Moreover assume that ϑ′ = mgu(A, H) and ϑ = mgu(Aγ, H). We prove that there exists an idempotent substitution γ′ such that (B, G′)ϑ′γ′ = (B, G′)γϑ. Then the proof follows by definition of derivation and by a straightforward inductive argument. First of all observe that, since var(c) ∩ var(γ) = ∅, then ϑ = mgu(Aγ, Hγ) and therefore, by Lemma 24,

γϑ = mgu(E(γ) ∪ {A = H}).     (A.1)

Then, by Lemma 24, there exist ϑ″ = mgu(A, H) and γ″ = mgu(E(γ)ϑ″) such that γϑ = ϑ″γ″. Moreover, by definition of mgu and since ϑ′ = mgu(A, H), there exists a renaming ρ such that ϑ″ = ϑ′ρ and therefore, by (A.1), γϑ = ϑ′ργ″. Now let γ′ := (ργ″)|_{(B,G′)ϑ′}. Then

(B, G′)ϑ′γ′ =                                     [ by definition of γ′ ]
(B, G′)ϑ′(ργ″)|_{(B,G′)ϑ′} =                       [ by definition of composition ]
(B, G′)ϑ′ργ″ =                                    [ since γϑ = ϑ′ργ″ ]
(B, G′)γϑ.

Finally, to prove the thesis we have only to prove that γ′ is idempotent. First of all observe that, since ϑ′ is idempotent, dom(ϑ′) ∩ var((B, G′)ϑ′) = ∅. Then, by definition of composition and since dom(γ′) ⊆ var((B, G′)ϑ′), for any x/t ∈ γ′, x/t ∈ ϑ′γ′ and therefore, since γ′ = (ργ″)|_{(B,G′)ϑ′}, x/t ∈ ϑ′ργ″ = γϑ. Then, the thesis follows since by construction γϑ is an idempotent substitution.  □
Now we can give the proof of all technical lemmata.
Proof of Lemma 3 The proof of Points 1 and 2 is straightforward by definition of :: and of the ∂ operation. To prove Point 3 assume that d1 = G′0 −→^{ϑ1}_{c1} · · · −→^{ϑk}_{ck} G′k and let ϑ := ϑ1 · · · ϑk and G″0 := first(d2), such that d1 ∧ d2 is defined. If G′k ≠ □ then the thesis is straightforward by definition of ∧. Otherwise, by definition of ∧, d1 ∧ d2 = d′ :: ∂_ϑ(d2), where d′ = (G′0, G″0) −→^{ϑ1}_{c1} (G′1, G″0ϑ1) −→^{ϑ2}_{c2} · · · −→^{ϑk}_{ck} G″0ϑ is a derivation. Moreover, since d1 ∧ d2 is defined, var(d1) ∩ var(d2) = var(G′0) ∩ var(G″0) and therefore var(d1) ∩ var(clauses(d2)) = ∅. Then, since var(G″0ϑ) ⊆ var(G″0) ∪ var(d1), var(G″0ϑ) ∩ var(d2) ⊆ var(G″0) and therefore ∂_ϑ(d2) is defined and, by Point 2, ∂_ϑ(d2) is a derivation of G″0ϑ. Finally observe that, by definition of ∂, var(d′) ∩ var(∂_ϑ(d2)) = var(G″0ϑ). Therefore d′ :: ∂_ϑ(d2) is defined and the thesis follows by Point 1.  □
Proof of Lemma 4 We prove the points separately.
Point 1 Let Gi := first(di ), for i = 1, 2. First of all observe that by definition of ∧ it is easy to check that ∂γ (d1 ∧ G2 ) = ∂γ (d1 ) ∧ G2 γ and ∀β ∈ Subst. ∂β (d2 ) = ∂β|G2 (d2 ).
(A.2) (A.3)
Then in the following we can assume, without loss of generality, that given a derivation ∂β (d), dom(β)∩var (d) ⊆ var (first(d)). We distinguish the following three cases.
last(d1 ) 6= 2 In this case last(∂γ (d1 )) 6= 2 and therefore, by definition of ∧, d1 ∧ d2 = d1 ∧ G2 and ∂γ (d1 ) ∧ ∂γ (d2 ) = ∂γ (d1 ) ∧ G2 γ. Then the thesis follows by (A.2). ϑk ϑ1 last(d1 ) = 2 and last(∂ (d1 )) 6= 2 Let d1 := G1 −→ · · · −→ 2 and ϑ := c1 ck ϑ1 · · · ϑk . By definition of ∧, d1 ∧ d2 = (d1 ∧ G2 ) :: ∂ϑ (d2 ) and ∂γ (d1 ) ∧ ∂γ (d2 ) = ∂γ (d1 ) ∧ G2 γ.
(A.4) (A.5)
Moreover, by previous hypothesis, length(∂γ (d1 )) < length(d1 ) and therefore length(∂γ (d1 ∧ G2 )) < length(d1 ∧ G2 ). Then, by Point 2 of Lemma 25, ∂γ ((d1 ∧ G2 ) :: ∂ϑ (d2 )) = ∂γ (d1 ∧ G2 ) and therefore ∂γ (d1 ∧ d2 ) = ∂γ (d1 ∧ G2 ) = ∂γ (d1 ) ∧ ∂γ (d2 ).
[ by (A.4) and last result ] [ by (A.2) and (A.5) ]
last(∂ (d1 )) = 2 In this case last(d1 ) = 2 and then there exists k ≥ 0 ϑ
ϑ
σ
σ
1 k 1 k such that d1 = G1 −→ · · · −→ 2 and ∂γ (d1 ) = G1 γ −→ · · · −→ 2. Let c1 ck c1 ck ϑ := ϑ1 · · · ϑk and σ := σ1 · · · σk . Then, by definition of ∧,
d1 ∧ d2 = (d1 ∧ G2 ) :: ∂ϑ (d2 ) and ∂γ (d1 ) ∧ ∂γ (d2 ) = (∂γ (d1 ) ∧ G2 γ) :: ∂σ (∂γ (d2 )).
(A.6) (A.7)
Now observe that, since var (G1 ) ∩ var (clauses(d1 )) = ∅, by definition of derivation, γσ is idempotent. Moreover, since d1 ∧ d2 and ∂ϑ (d2 ) are defined, (var (γ) ∪ var (σ)) ∩ var (clauses(d2 )) = ∅ and ∂σϑ (d2 ) is defined. Since length(∂γ (d1 )) = length(d1 ), by Point 2 of Lemma 25, ∂γ ((d1 ∧ G2 ) :: ∂ϑ (d2 )) = ∂γ (d1 ∧ G2 ) :: ∂β (∂ϑ (d2 )),
(A.8)
where G2 γσ = last(∂γ (d1 ∧ G2 )) = last(d1 ∧ G2 )β = G2 ϑβ. Then (γσ)|G2 = (ϑβ)|G2
(A.9) 28
and, since γσ is idempotent, (ϑβ)|G2 is also idempotent and, by (A.5), ∂ϑβ (d2 ) is defined. Finally ∂γ (d1 ∧ d2 ) = ∂γ (d1 ∧ G2 ) :: ∂β (∂ϑ (d2 )) = ∂γ (d1 ∧ G2 ) :: ∂ϑβ (d2 ) = ∂γ (d1 ∧ G2 ) :: ∂γσ (d2 ) = ∂γ (d1 ∧ G2 ) :: ∂σ (∂γ (d2 )) = ∂γ (d1 ) ∧ ∂γ (d2 ).
[ by [ by [ by [ by [ by
(A.6) and (A.8) ] Point 1 of Lemma 25 ] (A.9) and (A.3) ] Point 1 of Lemma 25 ] (A.2) and (A.7) ]
Point 2 We have two possibilities. If last(d1 ) 6= 2, d1 ∧ (d2 ∧ d3 ) = d1 ∧ (G2 , G3 ) and (d1 ∧d2 )∧d3 = (d1 ∧G2 )∧G3 . Then the proof is straightforward ϑk ϑ1 by definition of ∧. Otherwise, let d1 := G1 − → ··· − → 2 and ϑ := ϑ1 · · · ϑk . c1 ck By definition of ∧, d1 ∧ d2 = (d1 ∧ G2 ) :: ∂ϑ (d2 )
(A.10)
and then, by definition of ∧ and Point 1, d1 ∧ (d2 ∧ d3 ) = (d1 ∧ (G2 , G3 )) :: ∂ϑ (d2 ∧ d3 ) = (d1 ∧ (G2 , G3 )) :: (∂ϑ (d2 ) ∧ ∂ϑ (d3 )).
(A.11)
Now two cases arise.
last(∂# (d2 )) 6= 2 In this case ∂ϑ (d2 ) ∧ ∂ϑ (d3 ) = ∂ϑ (d2 ) ∧ G3 ϑ and therefore d1 ∧ (d2 ∧ d3 ) = (d1 ∧ (G2 , G3 )) :: (∂ϑ (d2 ) ∧ ∂ϑ (d3 )) = (d1 ∧ (G2 , G3 )) :: (∂ϑ (d2 ) ∧ G3 ϑ) = ((d1 ∧ G2 ) :: ∂ϑ (d2 )) ∧ G3 = ((d1 ∧ G2 ) :: ∂ϑ (d2 )) ∧ d3 = (d1 ∧ d2 ) ∧ d3 .
[ by (A.11) ] [ by previous observation ] [ by definition of :: ] [ since last(∂ϑ (d2 )) 6= 2 ] [ by (A.10) ]
σ
σ
c1
cm
1 m last(∂# (d2 )) = 2 Let ∂ϑ (d2 ) := G2 ϑ −→ · · · −→ 2 and σ := σ1 · · · σm . 0 0
Then, by definition of ∧, ∂ϑ (d2 ) ∧ ∂ϑ (d3 ) = (∂ϑ (d2 ) ∧ G3 ϑ) :: ∂σ (∂ϑ (d3 )) = ∂ϑ (d2 ∧ G3 ) :: ∂σ (∂ϑ (d3 )).
(A.12)
Moreover observe that, by definition of ∧, ϑ
ϑ
σ
σ
c1
ck
c1
cm
1 m k 1 2 G2 ϑ −→ · · · −→ · · · −→ d1 ∧ d2 = (G1 , G2 ) −→ 0 0
29
(A.13)
and ϑσ is an idempotent substitution. Furthermore, analogously to the previous case, ∂ϑσ (d3 ) is defined. Finally d1 ∧ (d2 ∧ d3 ) = [ by (A.11) and (A.12) ] (d1 ∧ (G2 , G3 )) :: (∂ϑ (d2 ∧ G3 ) :: ∂σ (∂ϑ (d3 ))) = [ since :: is associative ] ((d1 ∧ (G2 , G3 )) :: ∂ϑ (d2 ∧ G3 )) :: ∂σ (∂ϑ (d3 )) = [ by definition of ∧ ] (((d1 ∧ G2 ) :: ∂ϑ (d2 )) ∧ G3 ) :: ∂σ (∂ϑ (d3 )) = [ by Point 1 of Lemma 25 ] (((d1 ∧ G2 ) :: ∂ϑ (d2 )) ∧ G3 ) :: ∂ϑσ (d3 ) = [ by (A.10) and (A.13) ] (d1 ∧ d2 ) ∧ d3 . 2 Proof of Lemma 8 We prove the points separately.
Point 1 We prove the two inclusions separately.
w Let d ∈ ((A · D0 ) 1 su (D))(A). If d ∈ (A · D0 )(A) then, since 1 is extensive and · is monotonic, d ∈ (A · (D0 1 su (D)))(A). Otherwise, by definition of 1 and since su (D) is closed under renaming, there exist two derivations d1 ∈ (A · D0 )(A) and d2 ∈ su (D)(G) such that G = last(d1 ), d = d1 :: d2 and var (d1 ) ∩ var (d2 ) = var (G).
(A.14)
By definition of · there exists a derivation d3 , which is a renamed apart (w.r.t. A) version of an element in D0 (A0 ), for some atom A0 ≤ A, and there exists an idempotent substitution γ such that first(d3 )γ = A and d1 = ∂γ (d3 ).
(A.15)
Without loss of generality, we can assume that var (first(d3 )) ∩ var (d2 ) = ∅.
(A.16)
Moreover, since D0 (A0 ) is a well-formed set of derivations, we can assume that length(d3 ) = length(d1 ) = length(∂γ (d3 )).
(A.17)
Then, by Point 2 of Lemma 25, there exists an idempotent substitution γ 0 such that last(d1 ) = G = G0 γ 0 where G0 = last(d3 ). Moreover, by definition of ∂, by (A.14) and (A.16), var (d3 ) ∩ var (d2 ) = ∅ and therefore var (G0 ) ∩ 30
var (d2 ) = ∅. Then, by properties of su (D) and since d2 ∈ su (D)(G0 γ 0 ), there exists a derivation d4 ∈ su (D)(G0 ) such that var (d3 ) ∩ var (d4 ) = var (G0 ) and ∂γ 0 (d4 ) = d2 .
(A.18)
Then by definition of 1 and by properties of su (D), d3 :: d4 is a renamed apart (w.r.t. A) version of an element in (D0 1 su (D))(A0 ). Moreover, by definition of ∂ and of γ, ∂γ (d3 :: d4 ) ∈ (A · (D0 1 su (D)))(A). Finally the following hold. ∂γ (d3 :: d4 ) = ∂γ (d3 ) :: ∂γ 0 (d4 ) = d.
[ by Point 2 of Lemma 25 and by (A.17) ] [ by (A.15), (A.18) and (A.14) ]
v Let d ∈ (A · (D0 1 su (D)))(A). If d ∈ (A · D0 )(A) then, since by extensivity A · D0 v (A · D0 ) 1 su (D), d ∈ ((A · D0 ) 1 su (D))(A). Otherwise, by definition of 1 and of ·, there exists a renamed apart (w.r.t. A) version d0 of an element in (D0 1 su (D))(A0 ), for some atom A0 ≤ A, and there exists an idempotent substitution γ such that A = first(d0 )γ and ∂γ (d0 ) = d.
(A.19)
Since d0 is a renamed version of an element in (D0 1 su (D))(A0 ) and d 6∈ (A · D0 )(A), d0 = d01 :: d02 ,
(A.20)
where d01 is a renamed version of an element in D0 (A0 ), G0 = last(d01 ) and, since su (D) is closed under renaming, d02 ∈ su (D)(G0 ). Then, by definition of ·, ∂γ (d01 ) ∈ (A · D0 )(A).
(A.21)
Moreover, by (A.19), (A.20) and (A.21) and since (by hypothesis) d 6∈ (A · D0 )(A), length(∂γ (d0 )) > length(∂γ (d01 )) and then length(∂γ (d01 )) = length(d01 ). Therefore by Point 2 of Lemma 25, ∂γ (d01 :: d02 ) = ∂γ (d01 ) :: ∂γ 0 (d02 ),
(A.22)
where γ 0 is an idempotent substitution such that last(∂γ (d01 )) = G0 γ 0 = (last(d01 ))γ 0 . Then, since d02 ∈ su (D)(G0 ), ∂γ 0 (d02 ) is defined and, by properties of su (D), ∂γ 0 (d02 ) ∈ su (D)(G0 γ 0 ). Therefore, by definition of 1, by (A.21) and (A.22), ∂γ (d01 ) :: ∂γ 0 (d02 ) ∈ ((A · D0 ) 1 su (D))(A). Now to prove the thesis it is sufficient to observe that, by (A.22), (A.20) and (A.19), ∂γ (d01 ) :: ∂γ 0 (d02 ) = ∂γ (d01 :: d02 ) = ∂γ (d0 ) = d.
31
Point 2 By definition of v, we have to prove that for any G0 ∈ Goals ((D0 1 su (D)) × φG )(G0 ) ⊆ ((D0 × φG ) 1 su (D))(G0 ). Let d ∈ ((D0 1 su (D)) × φG )(G0 ). Then by definition of ×, G0 = (G0 , G) and there exists a renamed version d1 of an element in (D0 1 su (D))(G0 ) such that first(d1 ) = G0 and d = d1 ∧ G.
(A.23)
Now, by definition of 1, two cases arise. If d1 is a renamed version of an element in D0 (G0 ) then, by definition of × and by (A.23), d ∈ (D0 × φG )(G0 ) and therefore, since 1 is extensive, d ∈ ((D0 × φG ) 1 su (D))(G0 ). Otherwise, by definition of 1 and since su (D) is closed under renaming, d1 = ϑk ϑ1 d3 :: d4 , where d3 = G0 − → ··· − → B is a renamed version of an element in c c 1
k
D0 (G0 ), B 6= 2, d4 ∈ su (D)(B) and var (d3 ) ∩ var (d4 ) = var (B). Let ϑ := ϑ1 · · · ϑk . Then, by (A.23) and by definition of ∧, d = (d3 ∧G) :: (d4 ∧Gϑ) and, by definition of ×, d3 ∧ G ∈ (D0 × φG )(G0 ). Moreover, since d4 ∈ su (D)(B), by definition of su and since B 6= 2, d4 ∧ Gϑ ∈ su (D)(B, Gϑ). Finally, by definition of 1, d ∈ ((D0 × φG ) 1 su (D))(G0 ).
Point 3 We prove the two inclusions separately.
v Let d ∈ ((D0 × D00 ) 1 su (D))(G0 ). We have two possibilities. If d ∈ (D0 ×D00 )(G0 ) then, since 1 is extensive and × is monotonic, d ∈ (D0 ×(D00 1 su (D)))(G0 ). Otherwise, by definition of 1 and since su (D) is closed under renaming, there exist two derivations d0 ∈ (D0 × D00 )(G0 ) and d00 ∈ su (D)(G) such that last(d0 ) = G, d = d0 :: d00 ,
var (d0 ) ∩ var (d00 ) = var (G).
(A.24)
Then, by definition of ×, there exist two goals G00 , G000 and two derivations ϑk ϑ1 d1 = G00 − → · · · − → G0k and d2 , which are renamed versions of elements in c c 1
k
D0 (G00 ) and D00 (G000 ) respectively, such that first(d2 ) = G000 , G0 = (G00 , G000 ) and d0 =d1 ∧ d2 . Let ϑ := ϑ1 · · · ϑk . Two cases arise G0k 6= 2 By definition of ∧, d0 = d1 ∧ G000 . Since (by hypothesis) d2 is a renamed version of an element in D00 (G000 ), D00 (G000 ) 6= ∅ and since (by definition of collection) D00 (G000 ) is a well-formed set of derivations, G000 ∈ D00 (G000 ). By (A.24), since last(d1 ) = G0k 6= 2 and last(d0 ) = (G0k , G000 ϑ), d = (d1 :: d3 ) ∧ G000 , where d3 ∈ su (D)(G0k ) is such that d00 = d3 ∧ G000 ϑ.
(A.25) (A.26)
By definition of 1, since d1 :: d3 is defined, since d1 is a renamed version of an element in D0 (G00 ) and by (A.26), d1 :: d3 is a renamed version of an 32
element in (D0 1 su (D))(G00 ) = D0 (G00 ), where the last equality holds since (by hypothesis) D0 1 su (D) = D0 . Therefore, by (A.25), by definition of × and since G000 ∈ D00 (G000 ), d = (d1 :: d3 ) ∧ G000 ∈ (D0 × D00 )(G0 ) and this contradicts the hypothesis. G0k = 2 By definition of ∧, d0 = d1 ∧ d2 = (d1 ∧ G000 ) :: ∂ϑ (d2 ).
(A.27)
We can assume that length(d2 ) = length(∂ϑ (d2 )), since D00 (G000 ) is a wellformed set of derivations. Then, by definition of ∂, by (A.24) and since by (A.27), last(∂ϑ (d2 )) = last(d0 ) = G, by Point 2 of Lemma 25 there exists an idempotent substitution δ such that last(∂ϑ (d2 )) = G00 δ = G, where G00 = last(d2 ). Note that by (A.24) and since var (d0 ) = var (d1 ) ∪ var (d2 ), we have that var (d2 ) ∩ var (clauses(d00 )) = ∅ and var (G00 ) ∩ var (clauses(d00 )) = ∅. Then, by properties of su (D), since d00 ∈ su (D)(G) and G = G00 δ, there exists d3 ∈ su (D)(G00 ) such that d00 = ∂δ (d3 ) and var (d3 ) ∩ var (d2 ) = var (G00 ).
(A.28)
Then d2 :: d3 is defined and, by Point 2 of Lemma 25 and since by construction length(d2 ) = length(∂ϑ (d2 )), ∂ϑ (d2 :: d3 ) = ∂ϑ (d2 ) :: ∂δ (d3 ).
(A.29)
Then d= (d1 ∧ d2 ) :: ∂δ (d3 ) = ((d1 ∧ G000 ) :: ∂ϑ (d2 )) :: ∂δ (d3 ) = (d1 ∧ G000 ) :: (∂ϑ (d2 ) :: ∂δ (d3 )) = (d1 ∧ G000 ) :: ∂ϑ (d2 :: d3 ) = d1 ∧ (d2 :: d3 ).
[ by (A.24), (A.27) and (A.28) ] [ by definition of ∧ ] [ since :: is associative ] [ by (A.29) ] [ by definition of ∧ and of d1 ]
By definition of 1, since d2 is a renamed version of an element in D00 (G000 ), since d3 ∈ su (D)(G00 ) and su (D) is closed under renaming, d2 :: d3 is a renamed version of an element in (D00 1 su (D))(G000 ) such that first(d2 :: d3 ) = G000 . Finally, since d1 is a renamed version of an element in D0 (G00 ) and first(d1 ) = G00 , by definition of ×, d1 ∧ (d2 :: d3 ) ∈ (D0 × (D00 1 0 00 su (D)))(G0 , G0 ). w Let d ∈ (D0 × (D00 1 su (D)))(G0 ). Then, by definition of ×, d = d0 ∧ d00 , where G0 = (G00 , G000 ) and d0 , d00 are renamed versions of elements in D0 (G00 ) and in (D00 1 su (D))(G000 ) respectively, such that first(d0 ) = G00 and first(d00 ) = G000. By definition of ×, two cases arise. last(d0 ) 6= 2 In this case d = d0 ∧ G000 ∈ (D0 × D00 )(G0 ). Therefore, since 1 is extensive, d ∈ ((D0 × D00 ) 1 su (D))(G0 ). 33
last(d0 ) = 2 By definition of 1, we distinguish two cases. If d00 is a renamed version of an element in D00 (G000 ), d = d0 ∧ d00 ∈ (D0 × D00 )(G0 ) and therefore, analogously to the previous case, d ∈ ((D0 × D00 ) 1 su (D))(G0 ). ϑk ϑ1 → 2 and let ϑ := ϑ1 · · · ϑk . By Otherwise assume that d0 = G00 − → ··· − c c 1
k
definition of 1 and since su (D) is closed under renaming, d00 = (d1 :: d2 ), σh σ1 · · · −→ G00h is a renamed version of an element in D00 (G0 ) where d1 = G000 −→ 0 0 c1
ch
and d2 ∈ su (D)(G00h ). Moreover, by definition of ∧, d = (d0 ∧ G000 ) :: ∂ϑ (d00 ) = (d0 ∧ G000 ) :: ∂ϑ (d1 :: d2 ).
(A.30)
Now we have two possibilities length(∂# (d1 )) < length(d1 ) By Point 2 of Lemma 25, ∂ϑ (d1 :: d2 ) = ∂ϑ (d1 ) and therefore, by (A.30) and by definition of ∧, d = (d0 ∧ G000 ) :: 0 00 ∂ϑ (d1 ) = d0 ∧ d1 ∈ (D0 × D00 )(G 0 ) ⊆ ((D × D ) 1 su (D))(G0 ). length(∂# (d1 )) = length(d1 ) By Point 2 of Lemma 25 there exists an idempotent substitution δ such that G00h δ = last(∂ϑ (d1 )) and ∂ϑ (d1 :: d2 ) = ∂ϑ (d1 ) :: ∂δ (d2 ).
(A.31)
Moreover, since ∂δ (d2 ) is defined, by properties of su (D) and since d2 ∈ su (D)(G00h ), ∂δ (d2 ) ∈ su (D)(G00h δ).
(A.32)
Then the following facts hold. d= (d0 ∧ G000 ) :: (∂ϑ (d1 ) :: ∂δ (d2 )) = ((d0 ∧ G000 ) :: ∂ϑ (d1 )) :: ∂δ (d2 ) = (d0 ∧ d1 ) :: ∂δ (d2 ).
[ by (A.30) and (A.31) ] [ since :: is associative ] [ by definition of ∧ ]
Finally, by construction and (A.32), (d0 ∧ d1 ) ∈ (D0 × D00 )(G0 ), ∂δ (d2 ) ∈ su (D)(G00h δ). Therefore, since (d0 ∧ d1 ) :: ∂δ (d2 ) is defined, by definition of 1 and by the previous result, d = (d0 ∧ d1 ) :: ∂δ (d2 ) ∈ ((D0 × D00 ) 1 su (D))(G0 ). 2 Lemma 26 Let D ∈ PC, D1 , D2 ∈ C and A ∈ Atoms. Then (1) (A · D1 ) 1 pu (D) = A · (D1 1 pu (D)). (2) (D1 × D2 ) 1 pu (D) v (D1 1 pu (D)) × (D2 1 pu (D)).
PROOF. The proof of Point 1 is analogous to the one of Point 1 of Lemma 8 and hence omitted. 34
To prove Point 2, let d ∈ ((D1 × D2 ) 1 pu (D))(G). If d ∈ (D1 × D2 )(G) then, since 1 is extensive and × is monotonic, d ∈ ((D1 1 pu (D)) × (D2 1 pu (D)))(G). Otherwise, by definition of 1 and since pu (D) is closed under renaming, d = d0 :: d00 ,
(A.33)
where d0 ∈ (D1 × D2 )(G), last(d0 ) = B 6= 2, d00 ∈ pu (D)(B) and var (d0 ) ∩ var (d00 ) = var (B).
(A.34)
By definition of ×, d0 = d1 ∧ d2 , where G = (G1 , G2 ) and (for i = 1, 2) di is a renamed version of an element in Di (Gi ) such that first(di ) = Gi and ϑk ϑ1 var (d1 ) ∩ var (d2 ) = var (G1 ) ∩ var (G2 ). Let d1 = G1 −→ · · · −→ B 0 and c1
ck
ϑ := ϑ1 · · · ϑk . Two cases arise, we prove only the case B 0 6= 2 since the other (B 0 = 2) is analogous. Then d0 = d1 ∧ G2 ,
(A.35)
B = (B 0 , G2 ϑ) and d00 ∈ pu (D)(B 0 , G2 ϑ). Moreover, by (21) and since (by Lemma 6) × is associative, d00 = d3 ∧ d4 ,
(A.36)
where d3 ∈ pu (D)(B 0 ) and d4 ∈ pu (D)(G2 ϑ). By (A.34), var (d1 ) ∩ var (d3 ) = var (B 0 ) and therefore, by definition of 1 and since pu is closed under renaming, d1 :: d3 is a renamed version of an element in (D1 1 pu (D))(G1 ). (A.37) Moreover, since d4 ∈ pu (D)(G2 ϑ) and since, by our hypothesis on variables, var (d4 ) ∩ var (G2 ) ⊆ var (G2 ϑ), by properties of pu (D) there exists d5 ∈ pu (D)(G2 ) such that d4 = ∂ϑ (d5 ).
(A.38)
Now observe that, since D2 (G2 ) 6= ∅ is a well-formed set of derivations, G2 ∈ D2 (G2 ) and therefore, by definition of 1, d5 = G2 :: d5 ∈ (D2 1 pu (D))(G2 ).
(A.39)
By our hypothesis on variables, var (d1 :: d3 ) ∩ var (d5 ) = var (G1 ) ∩ var (G2 ). Therefore, by definition of ×, by (A.37) and (A.39) and since G = (G1 , G2 ), (d1 :: d3 ) ∧ d5 ∈ ((D1 1 pu (D)) × (D2 1 pu (D)))(G). Finally, since (by hypothesis) last(d1 ) 6= 2, (d1 :: d3 ) ∧ d5 = (d1 ∧ G2 ) :: (d3 ∧ ∂ϑ (d5 )) =
[ by definition of ∧ ] [ by (A.38) ] 35
(d1 ∧ G2 ) :: (d3 ∧ d4 ) = d0 :: d00 = d. 2
[ by (A.35) and (A.36) ] [ by (A.33) ]
Proof of Lemma 16
Point 1 We have to prove that ∀G ∈ Goals. (pu (D) 1 pu (D0 ))(G) ⊆ (pu (D 1 pu (D0 )))(G). The proof is by structural induction on G. If G = 2 then, by (21), (pu (D) 1 pu (D0 ))(2) = {2} = (pu (D 1 pu (D0 )))(2). Otherwise let G := (A, G0 ) and d ∈ (pu (D) 1 pu (D0 ))(G). Two cases arise.
d ∈ pu (D)(G) In this case, since 1 is extensive and · and × are monotonic, 0 0 pu (D) v pu (D 1 pu (D )) and then d ∈ pu (D 1 pu (D ))(G). d 6∈ pu (D)(G) Since pu (D0 ) is closed under renaming, d = d1 :: d2 ,
(A.40)
where d1 ∈ pu (D)(G), last(d1 ) = B 6= 2 and d2 ∈ pu (D0 )(B). By (21) and since pu is closed under renaming, d1 = d3 ∧ d4 , where d3 ∈ (A · D)(A) and d4 ∈ pu (D)(G0 ). Then, by definition of × and by Point 2 of Lemma 26, d1 :: d2 ∈ (((A · D) × pu (D)) 1 pu (D0 ))(G) ⊆ (((A · D) 1 pu (D0 )) × (pu (D) 1 pu (D0 )))(G). Therefore, by (A.40), by definition of × and since G = (A, G0 ), there exist two renamed versions d5 and d6 of elements in ((A · D) 1 pu (D0 ))(A) and in (pu (D) 1 pu (D0 ))(G0 ) respectively, such that d = d5 ∧ d6 . By inductive hypothesis, d6 is a renamed version of an element in pu (D 1 pu (D0 ))(G0 ). Moreover, by Point 1 of Lemma 26 and since d5 is a renamed version of an element in ((A · D) 1 pu (D0 ))(A), d5 is a renamed version of an element in (A · (D 1 pu (D0 )))(A). Finally, by definition of × and pu and since d = d5 ∧ d6 , d ∈ ((A · (D 1 pu (D0 ))) × pu (D 1 pu (D0 )))(G) ⊆ pu (D 1 pu (D0 ))(G).
Point 2 The proof is by induction on h. The case h = 0 is straightforward, since pu 0 (D) = φ = pu 0 (D). Otherwise the proof is by structural induction on G.
G = 2 By (23) and (22), pu h (D)(2) = {2} = pu 1 (D)(2). G = A, G0
Let d ∈ pu h (D)(G) = (pu (D 1 pu h−1 (D)))(G). Then by (23),
d = d1 ∧ d2 , where d1 ∈ (A · (D 1 pu h−1 (D)))(A) and d2 ∈ pu h (D)(G0 ). By inductive hypothesis, d2 ∈ and therefore there exists m ≥ 0 such that d2 ∈ pu m (D)(G0 ).
(A.41) {pu k (D)}k≥0 (G0 )
P
(A.42) 36
Moreover A · (D 1 pu h−1 (D)) = (A · D) 1 pu h−1 (D) v
[ by Point 1 of Lemma 26 ] [ by inductive hypothesis ]
(A · D) 1
X
{pu k (D)}k≥0 v
[ by (21) ]
pu (D) 1
X
{pu k (D)}k≥0 =
[ by Lemma 7 ]
X
{pu (D) 1 pu k (D)}k≥0 =
X
{pu k (D)}k≥0 .
[ by (22) ]
Then, by (A.41), there exists l ≥ 0 such that d1 ∈ pu l (D)(A).
(A.43)
Now, by definition of ∧, we have two possibilities. last(d1 ) 6= 2 In this case d = d1 ∧ G0 . Since d2 ∈ pu m (D)(G0 ), for any predicate symbol p occurring in G0 , D(p(x)) 6= ∅. Then G0 ∈ pu l (D)(G0 ) and therefore, by (21), by (A.43) and by definition of 1, d = d1 ∧ G0 ∈ P pu l (D)(G) ⊆ {pu k (D)}k≥0 (G). ϑk ϑ1 last(d1 ) = 2 Let d1 := A − → ··· − → 2 and ϑ := ϑ1 · · · ϑk . By definition c c 1
k
of ∧ and by (A.41), d = (d1 ∧ G0 ) :: ∂ϑ (d2 ).
(A.44)
Analogously to the previous case, by using (A.43), d1 ∧ G0 ∈ pu l (D)(G).
(A.45)
Moreover, by (A.44), ∂ϑ (d2 ) is defined and, by (A.42), d2 ∈ pu m (D)(G0 ). Then, by properties of pu and by a straightforward inductive argument, ∂ϑ (d2 ) ∈ pu m (D)(G0 ϑ). Then, by definition of 1, by (A.44) and (A.45), d = (d1 ∧G0 ) :: ∂ϑ (d2 ) ∈ (pu l (D) 1 pu m (D))(G) and therefore, by Lemma 6 and by a straightforward inductive argument, d ∈ pu l+m (D)(G). 2
Proof of Lemma 18
Point 1 By definition of v it is sufficient to prove that, for any G ∈ Goals, su (D)(G) ⊆ pu (D + Id I )(G). We distinguish two cases. If G = 2, su (D)(2) is not defined and then the thesis follows trivially. Otherwise let G = (A, G0 ) and observe that, by (21), Id C v pu (D + Id I ). Then the following facts hold su (D)(G) = ((A · D) × Id C )(G) ⊆ ((A · D) × pu (D + Id I ))(G) ⊆
[ by (8) ] [ by previous observation ] [ since × is monotonic ] 37
((A · (D + Id I )) × pu (D + Id I ))(G) = pu (D + Id I )(G).
[ by definition of pu ]
Point 2 We prove (by induction on n) that, for any G = A1 , . . . , An ∈ Goals and n ≥ 0, G [[G]]D v Id C + su n (D). Then the thesis follows by (21). If n = 0 then the thesis follows trivially, since G = 2 and, by definition of G, G [[G]]D = φ2 v Id C . Otherwise let G := A, G0 . Then G [[A, G0 ]]D = (A · D) × G [[G0 ]]D v (A · D) × (Id C + su n−1 (D)).
[ by definition of G and of A ] [ by inductive hypothesis ]
Let d ∈ G [[G]]D (G). By previous result and by definition of ×, d = d1 ∧ d2 , where d1 ∈ (A · D)(A) and d2 ∈ (Id C + su n−1 (D))(G0 ). Two cases arise.
last(d1 ) 6= 2 In this case, by definition of ∧, d = d1 ∧G0 ∈ ((A·D)×Id C )(G) and therefore, by (8), d ∈ su (D)(G). Then, by definition of 1 and of +, d ∈ (Id C + su (D))(G). n ϑk ϑ1 last(d1 ) = 2 Let d1 := A −→ · · · −→ 2 and ϑ := ϑ1 · · · ϑk . By definition c1
ck
of ∧, d = (d1 ∧ G0 ) :: ∂ϑ (d2 ). Then, since d1 ∈ (A · D)(A), d1 ∧ G0 ∈ ((A·D)×Id C )(G) and therefore, by (8), d1 ∧G0 ∈ su (D)(G). Moreover, since ∂ϑ (d2 ) is defined and d2 ∈ (Id C + su n−1 (D))(G0 ), by properties of su and by a straightforward inductive argument, ∂ϑ (d2 ) ∈ (Id C + su n−1 (D))(G0 ϑ). By previous results and by definition of 1, d = (d1 ∧ G0 ) :: ∂ϑ (d2 ) ∈ (su (D) 1 (Id C + su n−1 (D)))(G).
(A.46)
Finally observe that, since n ≥ 1, su (D) v su n (D). Then su (D) 1 (Id C + su n−1 (D)) = (su (D) 1 Id C ) + (su (D) 1 su n−1 (D)) = su (D) + (su (D) 1 su n−1 (D)) = su n (D).
[ by Lemma 7 ] [ since D 1 Id C = D ] [ by (13) and prev. obs. ]
Therefore, by (A.46), d ∈ su n (D)(G) ⊆ (Id C + su n (D))(G). 2
References
[1] K. R. Apt. Introduction to Logic Programming. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics, pages 495–574. Elsevier and The MIT Press, 1990.
[2] A. Bossi, M. Gabbrielli, G. Levi, and M. Martelli. The s-semantics approach: Theory and applications. Journal of Logic Programming, 19–20:149–197, 1994. [3] M. Comini, G. Levi, and M. C. Meo. Compositionality of SLD-derivations and their abstractions. In J. Lloyd, editor, Proceedings of the 1995 Int'l Symposium on Logic Programming, pages 561–575. The MIT Press, 1995. [4] M. Comini, G. Levi, and M. C. Meo. A theory of observables for logic programs. Submitted for publication. http://www.di.unipi.it/~comini/Papers/, 1996. [5] M. Comini, G. Levi, and G. Vitiello. Efficient detection of incompleteness errors in the abstract debugging of logic programs. In M. Ducassé, editor, Proc. 2nd International Workshop on Automated and Algorithmic Debugging, AADEBUG'95, 1995. [6] P. Cousot and R. Cousot. Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. In Proc. Fourth ACM Symp. Principles of Programming Languages, pages 238–252, 1977. [7] S. K. Debray and P. Mishra. Denotational and Operational Semantics for Prolog. Journal of Logic Programming, 5:61–91, 1988. [8] E. Eder. Properties of substitutions and unification. Journal of Symbolic Computation, 1:31–46, 1985. [9] M. Falaschi, G. Levi, M. Martelli, and C. Palamidessi. Declarative Modeling of the Operational Behavior of Logic Languages. Theoretical Computer Science, 69(3):289–318, 1989. [10] M. Gabbrielli, G. Levi, and M. C. Meo. Observational Equivalences for Logic Programs. In K. Apt, editor, Proc. Joint Int'l Conf. and Symposium on Logic Programming, pages 131–145. The MIT Press, 1992. [11] M. Gabbrielli, G. Levi, and M. C. Meo. Resultants semantics for PROLOG. Journal of Logic and Computation, 6(4):491–521, 1996. [12] N. D. Jones and A. Mycroft. Stepwise Development of Operational and Denotational Semantics for PROLOG. In Sten-Åke Tärnlund, editor, Proc. Second Int'l Conf. on Logic Programming, pages 281–288, 1984. [13] N. D. Jones and H. Søndergaard. A Semantics-based Framework for the Abstract Interpretation of PROLOG. In S. Abramsky and C. Hankin, editors, Abstract Interpretation of Declarative Languages, pages 123–142. Ellis Horwood Ltd, 1987. [14] R. Kemp and G. Ringwood. Reynolds base, Clark Models and Heyting semantics of logic programs. Submitted for publication. [15] R. Kemp and G. Ringwood. An Algebraic Framework for the Abstract Interpretation of Logic Programs. In S. K. Debray and M. Hermenegildo, editors, Proc. North American Conf. on Logic Programming'90, pages 506–520. The MIT Press, 1990.
[16] J. L. Lassez, M. J. Maher, and K. Marriott. Unification Revisited. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 587– 625. Morgan Kaufmann, Los Altos, Ca., 1988. [17] J. W. Lloyd. Foundations of Logic Programming. Springer-Verlag, 1987. Second edition.