Visual Interpretation of Events in Petroleum Geology

Joel L. Carbonera, Mara Abel
Institute of Informatics
Universidade Federal do Rio Grande do Sul – UFRGS
Porto Alegre – RS – Brazil
[email protected],
[email protected] Abstract—In visual domains, the tasks are accomplished through intensive use of visual knowledge. In this paper, we are interested in the visual interpretation task, which is prevalent in many visual domains. We investigate the role played by foundational ontologies in reasoning processes that deals with visual information, as those that are performed in visual interpretation tasks. We propose an approach for visual interpretation that combines ontologically well founded domain ontologies, a reasoning model, and a cognitively well founded meta-model for representation of inferential knowledge. Our approach was effectively applied in the task of visual interpretation of depositional processes, within the Sedimentary Stratigraphy domain. Keywords-Ontology; Knowledge Representation; Knowledge Engineering; Visual Knowledge
I. INTRODUCTION

Visual domains are those in which the problem-solving process starts with a visual pattern-matching process, capturing the information that supports further abstract inference processes of interpretation. In this sense, visual domains make intensive use of Visual Knowledge, which is the set of mental models that supports reasoning over information about the spatial arrangement and other visual aspects of domain entities [1], [2]. Visual interpretation is a very common kind of task in visual domains. It consists of a reasoning process that starts with the perception of visual features of domain objects and reaches abstract interpretations that are meaningfully related to these perceptions [2]. An example of a visual interpretation process is reported in cognitive studies of expertise in chess [3], where chess masters literally “see” the next move when they observe the state of the chessboard. Other examples are provided in [4], including the process of reading a text, in which we interpret abstract meanings from visually perceived patterns of symbols, and the visual interpretation of radiographs, in which radiologists interpret abnormalities from visual patterns in a radiograph. In this paper, we are interested in a task performed by geologists that comprises the visual inspection of a given portion of rock and the interpretation of the event that most probably generated it. This is a specific instance of the task of visual interpretation of events, which, in turn, is a specific type of visual interpretation task.
[email protected] Some computational approaches for processing visual data apply image processing or machine learning techniques. These approaches are based on detectable geometric features of the image (such as texture and shape) extracted from the raw data. However, these features cannot support the inferences that are developed in a more abstract level by the experts, as demonstrated in [5]. On the other hand, in some domains, data are not always available a priori, hindering the application of bottom-up approaches, as machine learning. This scenario encourages the adoption of top-down symbolic approaches [5], which explore the availability of high-level domain knowledge. In recent years, there was an increase in the use of ontologies as the core of knowledge-based systems, since they specify in a formal and explicit way the shared conceptualization in a given domain, allowing the knowledge reusing and promoting the semantic interoperability among systems [6]. Following this trend, works such as [7] has been exploring the application of ontologies in knowledge-based systems to deal with visual domains, in tasks that rely intensively on visual knowledge. In this paper, we propose an approach to deal with tasks of visual interpretation of events. This approach is based on a symbolic pattern matching process that works with the support of well-founded domain ontologies. Our (top-down) knowledge-based approach combines three main components: an ontologically well-founded domain ontology, a domain inferential knowledge base and a reasoning model for visual interpretation. Our main innovation concerns the acknowledgment of the role played by ontological metaproperties of domain concepts in computational accounts of visual interpretation tasks. We claim that there are ontological meta-properties of domain concepts that are related to the conditions that allow the visual perception of its instances, determining the domain concepts that can participate in the visual interpretation processes. In this work, we attempt to clarify the meta-properties of domain ontology concepts that participate in visual interpretation tasks, exploring their roles in the reasoning models used for this kind of task. This ontological clarification allows the definition of inferential knowledge meta-models and reasoning models that embody explicit ontological constraints, increasing the potential of reuse of them and allowing a more accurate mapping be-
between these models and the domain ontology. The meta-properties that we are interested in are formalized in [6] in a foundational ontology called the Unified Foundational Ontology (UFO). For this reason, we adopt UFO to bring these meta-properties into our proposal. It is important to notice that our approach deals only with symbolic representations of visual knowledge. In this sense, it does not deal directly with raw visual data. The objects that are visually perceived in a given scene are described as instances of the domain ontology under consideration, and the reasoning model performs the interpretation from these instances of the ontology. Section II details our domain-independent approach for the visual interpretation of events. In this work, we deal with a specific type of visual interpretation task, which concerns the visual interpretation of the events responsible for the generation of the visually observed object. Thus, in order to evaluate our approach, we work with an instance of this task: the visual interpretation of depositional processes within the domain of Sedimentary Stratigraphy. Therefore, in Section III we present an overview of the Sedimentary Stratigraphy domain and describe how our approach was applied to solve this task. Finally, Section IV presents our main conclusions.

II. AN APPROACH FOR VISUAL INTERPRETATION OF EVENTS
In this section, we present the domain-independent components of our approach: the Visual Chunk, a meta-model for the representation of inferential knowledge for visual interpretation tasks; and the reasoning model for visual interpretation.

A. Visual Chunk

In this work, we call inferential the knowledge that is applied by experts to establish the connection between (visual) evidence and abstract interpretations, allowing inferences to be drawn from visual stimuli to abstract interpretations. In our approach, we propose a meta-model, which we call Visual Chunk, for representing inferential knowledge. Our meta-model is based on the notion of chunk (or perceptual chunk), which was proposed in cognitive studies of expertise [3]. Chunks are sets of related perceptual stimuli that, when recognized together, allow fast access to the semantic content that is meaningfully associated with them. At high degrees of expertise, the problem-solving process of visual interpretation tasks is driven by a pattern-matching process, in which the visual stimuli that come from the environment are matched against the visual patterns represented in perceptual chunks, triggering the abstract interpretations related to them. The visual chunk meta-model represents the visual patterns that are applied by experts in the pattern-matching
process performed in visual interpretation tasks, capturing the direct relationship between the representation of these visual patterns and the semantic content (the interpretations) related to them. Our intent, in proposing this meta-model, is to develop a structure for representing inferential knowledge that more closely reflects the cognitive structures that experts apply in this kind of task. To reach this aim, we constrain the structure of the visual chunk, as well as the kinds of content that it can admit. In our approach, the content of each instance of a visual chunk is specified in terms of the knowledge provided by the domain ontology, and the constraints embodied in the meta-model are ontological ones. These constraints are supported by UFO, which was adopted because it formalizes the ontological meta-properties that we adopt for characterizing the concepts that can participate in visual interpretation processes. UFO is a foundational ontology, that is, a theoretically well-founded, domain-independent system of categories and their ties that can be used to construct models of specific domains of reality [6]. For this reason, it can serve as a foundation for analyzing domain-specific concepts, providing guidance for decisions in the conceptual modeling process, clarifying and justifying the meaning of the models, and improving their understandability and reusability. UFO is an ontology of particulars and universals. Roughly speaking, the distinction between particulars and universals is analogous to the distinction between instances and types in conceptual modeling. Thus, UFO provides a set of categories of particulars and a set of categories of universals. The categories of universals can be viewed as meta-types (“types of types”). These meta-types classify concepts in specific ontologies. In this sense, we can view the concepts in specific domain ontologies as instances of the meta-types provided by UFO. These meta-types are structured in a taxonomy according to a set of meta-properties, such as the principle of identity, which supports the judgment of whether two instances of a universal are the same, and the principle of unity, which supports the counting of the instances of a universal. A full description of UFO can be found in [6]. In overview, the visual chunk meta-model has a rich internal structure, organized as a pattern of constrained arrangements of Visual Chunk Elements (VCEs). The VCEs are the building blocks of a visual chunk. Each instance of a VCE has a mapping to some specific type of domain ontology concept, respecting ontological constraints at the meta-level. Thus, VCEs can be viewed as placeholders, in visual chunks, for knowledge (content) provided by the domain ontology. We consider the following kinds of VCEs:
• ObservableEntity: It maps to domain concepts (in the ontology) whose instances can be directly visually perceived. We consider that this type of VCE can map only to domain concepts classified as Substantial Universals,
according to UFO. We adopt this assumption because, according to [8], object perception depends on establishing a direct, causal and informational relation with a set of external physical objects, each corresponding to a unique material body that possesses hierarchically organized and cohesive parts and exists independently of the internal states of the perceiver and his/her perceptual systems. These bodies are instances of concepts that are instances of Substantial Universal.
• VisualQuality: It maps to concepts (in the ontology) that characterize domain concepts mapped by an ObservableEntity. That is, this type of VCE can map only to domain concepts that are classified as Quality Universals in UFO. A Quality Universal characterizes other universals and is associated with Quality Structures, which represent the set of all values that a quality can assume. Thus, considering the property color as a Quality Universal, a given instance of Car could be characterized by an instance of the quality universal Color, which is associated with a value (called a quale) in ColorStructure, which represents all the possible values that the property color can assume.
• VisualQuale: It maps to a possible value that an instance of a domain concept mapped by a VisualQuality can assume. That is, this type of VCE can map only to a quale that is a member of the quality structure associated with a Quality Universal that is mapped by a VisualQuality.
• PartOfRelation: It maps to a parthood relation between two domain concepts mapped by ObservableEntity. This type of VCE can map only to domain relationships classified by one of the four parthood relations provided by UFO, each with specific semantics: memberOf, componentOf, subCollectionOf and subQuantityOf. Each parthood relation can only be established between individuals of specific UFO meta-types, respecting ontological constraints embodied in UFO.
• InterpretableEvent: It maps to domain concepts whose instances are events. This type of VCE can map only to domain concepts that are classified as Event, according to UFO. Instances of an Event (such as Game, War, etc.) are individuals composed of temporal parts; that is, they happen in time, accumulating temporal parts.
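As an illustration only (the paper prescribes no implementation), the following Python sketch shows how the meta-level constraint on VCE mappings could be checked mechanically; all identifiers here are ours, not the paper's. VisualQuale is absent from the table because it maps to a quale (a value in a quality structure), not to a universal.

    from enum import Enum

    class UfoMetaType(Enum):
        SUBSTANTIAL_UNIVERSAL = "substantial universal"
        QUALITY_UNIVERSAL = "quality universal"
        EVENT = "event"
        PARTHOOD = "parthood"  # memberOf, componentOf, subCollectionOf, subQuantityOf

    # Required UFO meta-type for each kind of VCE, as stated in the text.
    VCE_CONSTRAINTS = {
        "ObservableEntity": UfoMetaType.SUBSTANTIAL_UNIVERSAL,
        "VisualQuality": UfoMetaType.QUALITY_UNIVERSAL,
        "PartOfRelation": UfoMetaType.PARTHOOD,
        "InterpretableEvent": UfoMetaType.EVENT,
    }

    def check_mapping(vce_kind: str, concept_meta_type: UfoMetaType) -> bool:
        """True iff a VCE of the given kind may map to a domain concept
        classified under the given UFO meta-type."""
        return VCE_CONSTRAINTS.get(vce_kind) == concept_meta_type

    # Example: an ObservableEntity may map to a substantial universal
    # (e.g., Sedimentary facies), but never to a quality universal.
    assert check_mapping("ObservableEntity", UfoMetaType.SUBSTANTIAL_UNIVERSAL)
    assert not check_mapping("ObservableEntity", UfoMetaType.QUALITY_UNIVERSAL)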
Instances of VCEs map to concepts in the domain ontology, respecting constraints regarding the meta-properties of the domain concepts. Thus, when UFO is adopted as the ontology that characterizes the
meta-properties of the domain concepts (when we classify a domain concept according to the meta-types provided by UFO, we characterize it with the corresponding meta-properties), we can say that instances of VCEs (in visual chunks) map to instances of UFO meta-types, which are concepts in the domain ontology. Moreover, it is important to notice that two or more instances of VCEs can map to the same ontology concept. The Visual Chunk is structured according to some recurrent patterns of relationship among VCEs. This structure can be described as follows. A Visual Chunk is a 2-tuple VC = ⟨VP, ie⟩, where ie is an instance of InterpretableEvent and VP represents the visual pattern in the visual chunk. VP is a 3-tuple VP = ⟨oe, SPVF, SPVP⟩, where oe is an instance of ObservableEntity, SPVF represents a set of possible visual features related to oe, and SPVP represents a set of possible visual characterizations of the parts of oe. Intuitively, SPVF represents the global (or holistic) features of oe, while SPVP represents the alternative local features of oe. SPVF is a set SPVF = {pvf_0, ..., pvf_n}, where each pvf_i is a PVF. In its turn, a PVF represents a possible visual feature, as a 2-tuple PVF = ⟨vqual, VQ⟩, where vqual is an instance of VisualQuality and VQ = {vq_0, ..., vq_n} is a set of admissible values for vqual, each vq_i being a VisualQuale. Similarly, SPVP is a set SPVP = {pvp_0, ..., pvp_n}, where each pvp_i is a PVP, which represents a possible visual part. That is, PVP = ⟨por, SPLVP⟩, where por is an instance of PartOfRelation and SPLVP = {vp_0, ..., vp_n} is a set of possible local visual patterns, each vp_i being a VP by itself. The adoption of the visual chunk provides the following benefits:
• It guides the process of acquisition of the inferential knowledge for visual interpretation tasks.
• Due to the constraints embedded in the Visual Chunk, it approximates the intended models of inferential knowledge, that is, the models of inferential knowledge that are effectively applied by experts in visual interpretation tasks.
• The Visual Chunk guides the mapping to the domain ontology, through meta-level constraints.
• The Visual Chunk allows the description of visual patterns in terms of global (or holistic) features of wholes and local features of parts. That is, our approach acknowledges that the notions of parts and partonomies play important roles in human perceptual and cognitive processes, as pointed out in [9]. Thus, a single visual chunk instance can describe several alternative descriptions of a part that, together with the description of the whole, support the corresponding interpretation.
• A single unit of visual chunk can represent a large amount of information, allowing the aggregation of the relevant knowledge in a few large units and imposing a manageable structure on the inferential knowledge base.
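To make the tuple definitions above concrete, here is a minimal data-structure sketch in Python (the paper itself prescribes no implementation language; the class and field names simply transliterate the tuples VC, VP, PVF and PVP):

    from dataclasses import dataclass

    @dataclass
    class PVF:               # possible visual feature: PVF = <vqual, VQ>
        vqual: str           # VisualQuality: name of a quality concept
        vq: set[str]         # VQ: admissible VisualQuale values

    @dataclass
    class VP:                # visual pattern: VP = <oe, SPVF, SPVP>
        oe: str              # ObservableEntity: name of a domain concept
        spvf: list[PVF]      # SPVF: global (holistic) features
        spvp: list["PVP"]    # SPVP: possible visual parts

    @dataclass
    class PVP:               # possible visual part: PVP = <por, SPLVP>
        por: str             # PartOfRelation: name of a parthood relation
        splvp: list[VP]      # SPLVP: alternative local visual patterns

    @dataclass
    class VisualChunk:       # VC = <VP, ie>
        vp: VP               # the visual pattern
        ie: str              # InterpretableEvent: name of an event concept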
Algorithm 1: Visual Interpretation algorithm

    visualInt(ode, hypo_set, do, int_set)
    Input: ode, hypo_set, do, int_set.
    Output: int_set.
    begin
        hypoNew_set ← ∅; end ← true;
        foreach h_i ∈ hypo_set do
            childrenHypo_set ← specialize(h_i, do);
            if childrenHypo_set ≠ ∅ then
                hypoConfirmed ← false;
                foreach ch_i ∈ childrenHypo_set do
                    vPattern_set ← recovery(ch_i);
                    matchingResult ← false;
                    foreach vp_i ∈ vPattern_set do
                        if matching(vp_i, ode) then matchingResult ← true;
                    if matchingResult = true then
                        hypoNew_set ← hypoNew_set ∪ {ch_i};
                        hypoConfirmed ← true;
                        end ← false;
                if hypoConfirmed = false then
                    int_set ← int_set ∪ {h_i};
            else
                int_set ← int_set ∪ {h_i};
        if end = false then
            visualInt(ode, hypoNew_set, do, int_set);
    end
B. Reasoning model

The reasoning model assumes the availability of a domain ontology that provides suitable concepts for describing the domain entities that are the focus of tasks of visual interpretation of events. For example, this ontology must provide concepts that can be mapped by ObservableEntity and InterpretableEvent in visual chunks. In addition, the ontology must provide a taxonomy of the concepts that will be mapped by InterpretableEvent. Our model also assumes the availability of a set of visual chunks (instances of the visual chunk meta-model). Finally, our approach assumes that the domain instance to be interpreted is suitably described using the domain ontology. Our reasoning model, presented in Algorithm 1, operates in a top-down way, performing a process of generation and testing of hypotheses. The generation of hypotheses is constrained by the structure of the taxonomy of the domain concepts (provided by the domain ontology) that were mapped by InterpretableEvent in instances of visual chunks. The testing of hypotheses is based on a symbolic pattern-matching process, in which the (symbolically described) visual pattern represented in a visual chunk (associated with the hypothesis) is matched against the symbolic description of the domain
entity (instance) that is under visual interpretation. Algorithm 1 operates over a set of inputs: ode is the symbolic description of the observed domain entity (instance) under visual interpretation; hypo_set is a set of initial hypotheses (concepts in the domain ontology that were mapped by an InterpretableEvent); do is the domain ontology; and int_set is the set of interpretations reached by the visual interpretation process (also concepts mapped by an InterpretableEvent). The output of the algorithm is given in int_set itself, which is initially an empty set. First, the algorithm traverses hypo_set and obtains the children concepts (through the specialize function) of each hypothesis (h_i). The specialization relies on a taxonomy in the domain ontology. When the specialization results in an empty set, a leaf of the taxonomy of concepts mapped by InterpretableEvent has been reached, and, therefore, this leaf concept is included in int_set. In this case, the algorithm terminates. Otherwise, if specialize returns a non-empty set, the algorithm tries to find support for each one of the more specific hypotheses (ch_i). This is done by recovering the visual patterns that are related (in visual chunks) to the hypothesis and testing the matching of these visual patterns against the description of the domain entity under interpretation. The recovering is done by the recovery function, which inspects the inferential knowledge base, selects the visual chunks whose InterpretableEvent corresponds to the hypothesis under test, and returns the visual pattern (vp_i) represented in each of these visual chunks. The matching is done by the matching function. If at least one match is detected, there is support for the specific hypothesis (ch_i) under test, and, therefore, this hypothesis is included in a set of hypotheses (hypoNew_set) that can be refined in further steps of the algorithm (in a recursive call). Otherwise, if no match is found, the specific hypothesis (ch_i) cannot be supported; therefore, we include the more general hypothesis (h_i) in int_set, since it was supported in a previous step of the algorithm or was given (in hypo_set). In this case, too, the algorithm terminates. If there are hypotheses in hypoNew_set, the algorithm is recursively called to refine them. At the end, the algorithm returns the set of interpretations that can be supported by the evidence observed in the ode (instance), considering the set of visual chunks in the inferential knowledge base.
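For readability, the following is a direct transcription of Algorithm 1 into Python, under the assumption that specialize, recovery and matching are supplied as functions and that the hypothesis and interpretation collections behave as sets; it is a sketch, not the authors' implementation:

    def visual_int(ode, hypo_set, do, int_set, specialize, recovery, matching):
        """Top-down hypothesize-and-test interpretation (Algorithm 1)."""
        hypo_new_set = set()
        done = True                               # the 'end' flag of Algorithm 1
        for h in hypo_set:
            children = specialize(h, do)          # direct subconcepts in the taxonomy
            if children:
                confirmed = False
                for ch in children:
                    patterns = recovery(ch)       # visual patterns of chunks whose
                                                  # InterpretableEvent maps to ch
                    if any(matching(vp, ode) for vp in patterns):
                        hypo_new_set.add(ch)      # ch will be refined in the next round
                        confirmed = True
                        done = False
                if not confirmed:
                    int_set.add(h)                # no child supported: keep h
            else:
                int_set.add(h)                    # leaf of the taxonomy: keep h
        if not done:
            visual_int(ode, hypo_new_set, do, int_set,
                       specialize, recovery, matching)
        return int_set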
The matching function returns true if the visual pattern represented in the visual chunk matches the visual description of the instance (user data) under inspection. In general, the matching process consists of the following steps:
• It is verified whether the instance under inspection is an instance of the domain concept that is mapped by the ObservableEntity provided in the visual pattern represented in the visual chunk.
• For each domain concept that is mapped by a VisualQuality associated with the ObservableEntity, it is verified whether at least one value mapped by a VisualQuale characterizes the instance. Thus, in this step, it is verified whether the observed entity has the holistic features specified in the visual pattern of the visual chunk.
• For each domain relation that is mapped by a PartOfRelation, it is verified whether there is a part of the instance under inspection related to it through this relation. If so, it is verified whether at least one of the alternative part descriptions provided in the chunk matches the part that was found. This verification is performed through a recursive call of the matching procedure. In this step, it is verified whether the observed entity has the necessary local features specified in the visual pattern of the visual chunk.
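A recursive sketch of this matching procedure, over the structures sketched in Section II-A, could read as follows; instance_of, quale_of and parts_related_by are hypothetical accessor functions over the described instance (not names from the paper), and we assume that a missing required part makes the match fail:

    def matching(vp, ode, instance_of, quale_of, parts_related_by):
        """True iff the visual pattern vp matches the described instance ode."""
        # Step 1: ode must instantiate the concept mapped by the
        # ObservableEntity of the pattern.
        if not instance_of(ode, vp.oe):
            return False
        # Step 2: every holistic feature must be satisfied by one of its
        # admissible quale values (assuming one observed value per quality).
        for pvf in vp.spvf:
            if quale_of(ode, pvf.vqual) not in pvf.vq:
                return False
        # Step 3: for each parthood relation, some actual part must match
        # at least one alternative local pattern (recursive call).
        for pvp in vp.spvp:
            parts = parts_related_by(ode, pvp.por)
            if not any(matching(lvp, part, instance_of, quale_of, parts_related_by)
                       for lvp in pvp.splvp for part in parts):
                return False
        return True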
III. AN APPLICATION IN SEDIMENTARY STRATIGRAPHY

In this section, we present an application of our approach within the domain of Sedimentary Stratigraphy. First, we present an overview of the domain and the task in which we are interested. Next, we present how our approach was applied, showing how the knowledge models it requires are instantiated within this domain.

A. Overview of the domain

The main objects of study of Sedimentary Stratigraphy are: Body of Rock, Sedimentary Facies, Sedimentary Structure and Depositional Process. A body of rock is a portion of rock exposed at the surface or extracted from the subsurface by drilling. A sedimentary facies is a region in a body of rock that is visually distinguishable from adjacent regions. Each sedimentary facies is viewed as a direct result of the occurrence of a depositional process. A sedimentary structure is the external visual aspect of some internal spatial arrangement of the rock grains. Finally, depositional processes are events that involve the complex interaction of natural forces and sediments and that are responsible for the formation of sedimentary rocks. In this domain, the geologist describes sedimentary facies and provides a geological interpretation of the depositional processes that originated the observed geological features. This interpretation is a crucial step in petroleum exploration.

B. Expert system for visual interpretation of depositional processes

The description of sedimentary facies is time consuming and expensive. Once the descriptions are captured, huge volumes of data must be interpreted manually by experts. Our system allows novice users to produce the visual description of facies using the domain ontology, while the interpretation is delegated to the system, which applies expert knowledge
(represented in the knowledge base) in the process of interpretation. Our system takes a visual description of an instance of sedimentary facies as input and suggests possible interpretations of the depositional process that was responsible for the generation of this facies. The visual descriptions are captured by geologists using a commercial descriptive software tool (http://www.endeeper.com/products/software/strataledge), which was developed based on the concepts described in this paper. In the following, we discuss the knowledge models that are required by our approach and that support its application to this task: a well-founded domain ontology for Sedimentary Stratigraphy and a set of instances of visual chunks. The main fragment of our ontology can be viewed in Figure 1. It shows the main domain entities that are relevant to the task that is the focus of this paper. We do not present here the quality concepts or the taxonomies of depositional processes and sedimentary structures. The full presentation of this domain ontology is beyond the scope of this paper, but it can be found in [2].
[Figure 1: UML-style diagram. Body of rock is related to Sedimentary facies through hasSedimentaryFacies; Sedimentary facies is related to Depositional process through generatedBy and to Sedimentary structure through hasSedimentaryStructure; Sedimentary structure is specialized into Biogenic structure, Depositional structure, Chemical and diagenetic structure, and Deformation structure.]
Figure 1. Diagram representing the main domain entities of the well-founded domain ontology. In this diagram, the UFO meta-types are represented as stereotypes. For example, the stereotype kind associated with the concept Sedimentary facies means that the domain concept Sedimentary facies is an instance of the meta-type kind, according to UFO.
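The fragment of Figure 1 could be transcribed roughly as below (a sketch under our own naming; only the kind stereotype of Sedimentary facies is given in the caption, and the subclasses of sedimentary structure are omitted):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SedimentaryStructure:
        """Root of a taxonomy: biogenic, depositional, chemical and
        diagenetic, and deformation structures."""

    @dataclass
    class DepositionalProcess:
        """Classified as an event in UFO."""

    @dataclass
    class SedimentaryFacies:
        """Stereotyped as kind in Figure 1."""
        structures: list[SedimentaryStructure] = field(default_factory=list)  # hasSedimentaryStructure
        generated_by: Optional[DepositionalProcess] = None                    # generatedBy

    @dataclass
    class BodyOfRock:
        facies: list[SedimentaryFacies] = field(default_factory=list)         # hasSedimentaryFacies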
In this application, the inferential knowledge base was modeled as a set of instances of visual chunks. Each visual chunk in this base is composed of instances of VCEs (discussed in Section II-A), which map to concepts in the domain ontology. An example of an instance of a visual chunk in this domain can be viewed in Figure 2. This instance of visual chunk specifies that Migration of longitudinal bars can be interpreted when an instance of Sedimentary Facies has Imbricated as the value for Fabric Orientation and Conglomerate as the value for Lithology, and, additionally, when this instance is related to an instance of Sedimentary Structure that has Horizontal as the value for Angularity. Notice that in other cases a visual chunk could specify more than one possible value for Lithology, for example. We can specify a set of possible values (mapped by a VisualQuale) for
each quality concept (mapped by a VisualQuality), and it is enough to find only one of these values (for each quality) to support the interpretation. Also, in other cases, a visual chunk could specify more than one part characterization for each PartOfRelation. In this way, each visual chunk can represent a huge amount of information in a single unit with a clear structure. As a result, the inferential knowledge base becomes well structured and easy to manage and maintain.
[Figure 2: the InterpretableEvent “Migration of longitudinal bars” is implied by an ObservableEntity “Sedimentary facies” with the VisualQuality/VisualQuale pairs Fabric Orientation = Imbricated and Lithology = Conglomerate, connected through the PartOfRelation hasSedimentaryStructure to a “Sedimentary structure” with Angularity = Horizontal.]
Figure 2. Graphic representation of an instance of visual chunk in the domain of Sedimentary Stratigraphy. In this representation, the instances of VCEs are labelled with terms of concepts in the domain ontology, indicating the mapping from each VCE instance to a concept in the domain ontology.
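Using the data-structure sketch of Section II-A, the chunk of Figure 2 would be written roughly as follows (our transliteration, with concept names taken from the figure):

    migration_of_longitudinal_bars = VisualChunk(
        ie="Migration of longitudinal bars",
        vp=VP(
            oe="Sedimentary facies",
            spvf=[
                PVF(vqual="Fabric orientation", vq={"Imbricated"}),
                PVF(vqual="Lithology", vq={"Conglomerate"}),
            ],
            spvp=[
                PVP(por="hasSedimentaryStructure",
                    splvp=[VP(oe="Sedimentary structure",
                              spvf=[PVF(vqual="Angularity", vq={"Horizontal"})],
                              spvp=[])]),
            ],
        ),
    )

    # The interpretation task of this section then amounts to a call such as:
    # interpretations = visual_int(facies_instance, {"Depositional process"},
    #                              domain_ontology, set(),
    #                              specialize, recovery, matching)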
When applied to this domain, Algorithm 1 receives the following inputs: ode is an instance of the concept Sedimentary Facies (of the domain ontology); hypo_set contains the concept Depositional Process; do is the domain ontology itself; and int_set is an empty set. During the processing, the concept Depositional Process (in hypo_set) is specialized into more specific types of depositional processes, and, for each of these concepts, the algorithm tries to find evidence in the instance of sedimentary facies that supports it, according to the inferential knowledge base. At the end, the suggested interpretations – specific types of depositional processes – are returned in int_set. A preliminary evaluation of the resulting system is presented in [2].

IV. CONCLUSION

We described a modeling approach that explicitly deals with the ontological meta-properties that characterize the domain concepts used by experts to support problem solving in visual interpretation tasks. We recognized that the notion of perceptual chunk, previously identified in several studies, plays a fundamental role in the connection between visual stimuli and abstract interpretations at high degrees of expertise. We combined the notion of perceptual chunk with the notion of ontological meta-property in order to develop a meta-model (called visual chunk) for representing visual inferential knowledge. We also proposed a reasoning
model for the visual interpretation of events, which uses visual chunks as inferential knowledge units. Thus, this work explores the role played by foundational ontologies in problem-solving methods involving visual information. We applied the proposed model to build a robust representation of visual knowledge in a complex real application in Petroleum Geology. In future work, we intend to evaluate our approach in other visual interpretation tasks. We also plan to investigate the benefits of considering uncertainty in our approach. Finally, we intend to continue the investigation of the relationships among visual knowledge, reasoning and ontological meta-properties.

ACKNOWLEDGMENT

The authors would like to thank the Brazilian Research Council (CNPq, Master programme Grant 132264/2009-9), the PRH-PB program of Petrobras, and ENDEEPER for the support to this work. In addition, we would like to thank Sandro Fiorini for comments and ideas.

REFERENCES

[1] A. Lorenzatti, M. Abel, S. R. Fiorini, A. K. Bernardes, and C. M. dos Santos Scherer, “Ontological primitives for visual knowledge,” in Proceedings of the 20th Brazilian Conference on Advances in Artificial Intelligence (2010), ser. Lecture Notes in Artificial Intelligence, vol. 6404. São Bernardo do Campo: Springer Berlin / Heidelberg, 2011, pp. 1–10.
[2] J. L. Carbonera, M. Abel, C. M. S. Scherer, and A. K. Bernardes, “Reasoning over visual knowledge,” in Proceedings of the Joint IV Seminar on Ontology Research in Brazil and VI International Workshop on Metamodels, Ontologies and Semantic Technologies, R. Vieira, G. Guizzardi, and S. R. Fiorini, Eds., vol. 776, 2011.
[3] F. Gobet and H. A. Simon, “Pattern recognition makes search possible: Comments on Holding (1992),” Psychological Research, vol. 61, pp. 204–208, 1998.
[4] B. P. Wood, “Visual expertise,” Radiology, vol. 211, pp. 1–3, 1999.
[5] M. Abel, L. A. Silva, J. A. Campbell, and L. F. De Ros, “Knowledge acquisition and interpretation problem-solving methods for visual expertise: study of petroleum-reservoir evaluation,” Journal of Petroleum Science and Engineering, vol. 47, pp. 51–69, 2005.
[6] G. Guizzardi, Ontological Foundations for Structural Conceptual Models, ser. CTIT PhD Thesis Series. Enschede, The Netherlands: Universal Press, 2005, vol. 05-74.
[7] S. R. Fiorini, M. Abel, and C. M. S. Scherer, “Semantic image interpretation of gamma ray profiles in petroleum exploration,” Expert Systems with Applications, vol. 38, pp. 3724–3734, April 2011.
[8] M. Matthen, Seeing, Doing, and Knowing: A Philosophical Theory of Sense Perception. Oxford University Press, 2005.
[9] B. Tversky, “Parts, partonomies, and taxonomies,” Developmental Psychology, vol. 25, pp. 983–995, 1989.