Knowledge representation and reasoning
To answer the first question asked in the previous section, namely how knowledge should be represented, this section provides an overview of the field of knowledge representation and reasoning. A definition of Knowledge representation and reasoning (KR2, KR&R) is given in , based on Charles S. Peirce, as “the field of artificial intelligence dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language”. Knowledge representation incorporates findings from psychology  about how humans solve problems and represent knowledge. The justification for its use in expert systems is that it eases the development of complex software and lessens the semantic gap between users and developers. Knowledge reasoning relies on logic to automate various kinds of reasoning.
Examples of knowledge representation formalisms include semantic nets, system architectures, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
The present research tackles the knowledge representation issue by using semantic networks implemented as a graph database and derived from an ontology model, while reasoning about knowledge is not a key purpose of the research. The use of inference engines is discarded since, over the years, many studies [41, 42, 43] have shown their inadequacy in understanding. Rather, the present thesis explores the use of different classifiers, both supervised and unsupervised, in the proposed approaches, in order to perform a form of inference on the acquired knowledge. The concept of “Semantic Network Model” was formed in the early 1960s by the cognitive scientists Allan M. Collins and M. Ross Quillian and the psychologist Elizabeth F. Loftus [44, 45, 46, 47, 48] as a form to represent semantically structured knowledge. When applied in the context of the modern Internet, it extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other .
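The idea of a semantic network stored as a labeled, directed graph can be sketched as follows. This is an illustrative Python fragment, not the thesis implementation (which uses a graph database derived from an ontology model); all concept and relation names are invented.

```python
# Minimal sketch of a semantic network: nodes are concepts, edges carry
# relation types such as "is-a" or "has-part".
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # adjacency map: source concept -> list of (relation, target concept)
        self.edges = defaultdict(list)

    def add(self, source, relation, target):
        self.edges[source].append((relation, target))

    def related(self, source, relation):
        """Return all targets reached from `source` via `relation`."""
        return [t for r, t in self.edges[source] if r == relation]

net = SemanticNetwork()
net.add("canary", "is-a", "bird")
net.add("bird", "is-a", "animal")
net.add("bird", "has-part", "wings")

print(net.related("bird", "is-a"))   # ['animal']
```

In a graph database the same structure is stored natively as labeled nodes and typed relationships, so queries like `related` become graph traversals.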
Knowledge representation and reasoning is a key enabling technology for the Semantic Web. The term was coined by Tim Berners-Lee  to introduce a web of data that can be processed by machines. According to the W3C, “The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries”. Hence, the ultimate goal of the Web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data . While its critics have questioned its feasibility, proponents argue that applications in industry, biology and human sciences research have already proven the validity of the original concept . After a slow start, in 2013, more than four million Web domains contained Semantic Web markup following the Linked Open Data principles, which are based on HTTP-dereferenceable URIs for things and on the use of open standards such as RDF or SPARQL. Figure 1.4 shows how the Semantic Web stack is organized.

One of the most effective ways of representing knowledge in computer science is through the use of ontologies. The term ontology is a compound word deriving its etymology from the Greek words on (gen. ontos) and logia, indicating a metaphysical science or the study of being. Over the years many definitions have been given, and one of the most complete is that of Studer in : “An ontology is a formal, explicit specification of a shared conceptualization. A conceptualization refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that phenomenon. Explicit means that the type of concepts used, and the constraints on their use, are explicitly defined. Formal refers to the fact that the ontology should be machine readable, which excludes natural language.
Shared reflects the notion that an ontology captures consensual knowledge, that is, it is not private to some individual, but accepted by a group”. A more detailed discussion about conceptualization is provided in Section 1.5.
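The RDF and SPARQL standards mentioned above rest on a very simple data model: every statement is a subject–predicate–object triple, and queries are patterns over triples. A toy illustration in plain Python follows; the URIs and data are invented, and a real system would use an RDF library and a SPARQL endpoint rather than this sketch.

```python
# A miniature triple store illustrating the RDF data model and a
# SPARQL-like pattern query.
triples = {
    ("ex:TimBL", "ex:invented", "ex:WWW"),
    ("ex:WWW", "rdf:type", "ex:InformationSystem"),
    ("ex:TimBL", "rdf:type", "ex:Person"),
}

def match(pattern):
    """Match a (s, p, o) pattern; None acts as a variable, as in SPARQL."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?o WHERE { ex:TimBL rdf:type ?o }
print(match(("ex:TimBL", "rdf:type", None)))
# one result: ('ex:TimBL', 'rdf:type', 'ex:Person')
```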
In summary, it can be said that, for the intended meaning of this work, an ontology is a formal definition of concepts, and of the relationships between them, belonging to a certain domain. Hence, ontologies can be considered the most powerful means of realizing the original vision of the Semantic Web provided by Tim Berners-Lee, in which computers become capable of analyzing all the data on the Web: content, links, and transactions between people and computers .
At this point of the discussion it is necessary to clarify how to distinguish ontologies from the conceptualization of knowledge and what the interconnections between them are.
Gruber  and Smith  define a conceptualization as an abstract, simplified view of some selected part of the world, containing the objects, concepts, and other entities that are presumed of interest for some particular purpose, together with the relationships between them. An explicit specification of a conceptualization is an ontology, and a single conceptualization may be realized by several distinct ontologies . In comparing ontologies, an ontological commitment refers to the subset of elements of an ontology shared with all the others .
Guarino  adds the following distinction between an ontology and a conceptualization: an ontology is language-dependent, since its objects and interrelations are described within the language it uses, while a conceptualization is always the same and more general, as its concepts exist independently of the language used to describe them.
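Guarino's distinction can be made concrete with a small, invented example: one abstract conceptualization realized by two language-dependent ontologies that label the same concepts differently.

```python
# One shared conceptualization (abstract concepts and relations) ...
relations = {("C1", "subclass-of", "C2")}

# ... realized by two language-dependent ontologies.
labels_en = {"C1": "Paper", "C2": "Document"}
labels_it = {"C1": "Articolo", "C2": "Documento"}

def realize(labels):
    """Project the abstract relations into a concrete, labeled ontology."""
    return {(labels[a], rel, labels[b]) for a, rel, b in relations}

print(realize(labels_en))  # {('Paper', 'subclass-of', 'Document')}
print(realize(labels_it))  # {('Articolo', 'subclass-of', 'Documento')}
```

The abstract relation is identical in both cases; only its linguistic realization changes, which is precisely why several distinct ontologies can commit to the same conceptualization.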
Some researchers in knowledge engineering prefer not to use the term “conceptualization”, but instead refer to the conceptualization itself as an overarching ontology . Such an approach requires a way to enable “communication” among multiple ontologies referring to the same conceptualization. A general resolution is not at hand, and different approaches exist .
In this dissertation the approach used is to consider an overarching top-level ontology model as a conceptualization. The reasons that led to this choice are manifold. In fact, considering the conceptualization itself as an ontology brings numerous advantages in terms of independent sharing of knowledge between intelligent artificial systems, greatly facilitating the definition of inter-agent communication standards. An approach to knowledge sharing is presented in Section 2.5.
At a higher level of abstraction, a conceptualization eases the discussion and comparison of its various ontologies, thereby facilitating knowledge sharing and reuse . Each ontology based upon the same overarching conceptualization maps the conceptualization into specific elements and their relationships.
Moreover, the use of well-known processes, such as ontology matching and merging, makes it possible to incorporate new knowledge and conceptualize it in a way that is compliant with the knowledge already acquired.
The other process of primary interest for the purposes of this thesis is the acquisition of knowledge. The topic will be addressed in this section, first from a purely general point of view, and then, more specifically, from a computer engineering point of view, analyzing its use within artificial intelligent systems.
Knowledge acquisition has been seen in the past more as an objective that different research domains were trying to attain than as a research field in its own right.
With the advent of complex expert systems, which are considered to be one of the first successful applications of artificial intelligence technology to real-world business problems , it was soon realized that the acquisition of domain expert knowledge was one of the most critical tasks in the knowledge engineering process. More and more researchers investigated this process, which became an intense area of research on its own. In the frame of this work the acquisition of new knowledge is intended as the process of acquiring new knowledge through experience or education or, following the definition given in , as the process used to define the rules and ontologies required for a knowledge-based system. One of the earlier works  on the topic used Batesonian theories  of learning to guide the process. Natural Language Processing (NLP), which is today a well-known research branch of artificial intelligence, has been used as an approach to facilitate the acquisition of knowledge .
Much has been written about the relationship between the acquisition and the creation of knowledge: Wellman  limits the role of knowledge management to the organization of what is already known, while the creation of new knowledge is seen as a separate subject; in the theory of knowledge creation , I. Nonaka affirms that the interaction between individuals plays a critical role in the development of new knowledge starting from the ideas already formed in people’s minds. In other words, knowledge creation is the formation of new notions and concepts through interactions between explicit and tacit knowledge.
In this thesis, the investigation of the field of knowledge acquisition has been conducted through a double-sided analysis. The goal is to cover the necessary aspects of both the symbolist and connectionist movements of artificial intelligence, which have been used as baseline theories of this work regarding the development of cognitive, human-inspired approaches in artificial intelligent systems. The first part, detailed in Section 1.6.1, is a brief literature review of ontology matching and merging approaches used to construct and integrate knowledge; the second part, described in Section 1.6.2, analyzes approaches and methodologies for knowledge acquisition specifically designed and implemented for artificial intelligent systems. The problem of knowledge acquisition is therefore tackled at different dimensions. The approach based on ontology matching and merging is more abstract and strictly bound to symbolic approaches and the use of ontologies, hence it deals only with high-level knowledge, while the second analysis incorporates different approaches including abstract high-level knowledge, perceived low-level knowledge, or both, applied to artificial intelligent systems.
Ontology matching and merging
As stated in Section 1.5, ontologies are powerful and widely used as a tool for representing and conceptualizing specific domains of knowledge. More in detail, knowledge can be developed in ontologies that conform to standards such as the Web Ontology Language (OWL) . This is a common way to standardize knowledge and facilitate knowledge sharing across a broad community of knowledge workers. One example domain where this approach has been successful is bioinformatics [64, 65]. The process of matching and merging ontologies is a way to integrate different sources of knowledge in order to enrich or extend the current knowledge. This kind of approach to knowledge acquisition is a reuse-based approach [66, 67]. One of the goals of matching ontologies is to reduce or, ideally, eliminate the heterogeneity between them. An exhaustive classification of the types of heterogeneity is given in :
• syntactic heterogeneity: arises when ontologies are expressed in different languages.
• terminological heterogeneity: occurs when there are variations in the names referring to the same entities, e.g. Paper vs. Article.
• conceptual heterogeneity: also called semantic heterogeneity, stands for differences in modeling the same domain of interest; for example, two ontologies can differ in coverage, granularity or perspective.
• semiotic heterogeneity: concerns how entities are interpreted by people; this type of heterogeneity is very difficult for computers to detect.
In , Euzenat and Shvaiko observe that, in this field of study too, different authors adopt different terms to refer to similar concepts, so it is appropriate to point out some definitions. According to their view, ontology matching is defined as “the process of finding relationships or correspondences between entities of different ontologies. The output of this process is a set of correspondences, named alignment”. Ontology merging is the creation of a new ontology from two, at least partially overlapping, ontologies, while the inclusion of one ontology in another is referred to as ontology integration. In  the authors define six main application areas for ontology matching:
• ontology engineering
• information integration
• peer-to-peer information sharing
• web service composition
• autonomous communication systems
• navigation and query answering on the web
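The notion of matching producing an alignment can be sketched with a terminological matcher that scores candidate correspondences between entity names using a simple string similarity; the resulting set of pairs is the alignment. The ontologies, names and threshold below are illustrative, not taken from any of the cited systems.

```python
# Toy terminological ontology matcher: pairs of entity names whose string
# similarity exceeds a threshold form the alignment.
from difflib import SequenceMatcher

onto_a = ["Paper", "Author", "Conference"]
onto_b = ["Article", "Writer", "ConferenceEvent"]

def similarity(x, y):
    """Edit-based similarity in [0, 1] between two entity names."""
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

def align(a, b, threshold=0.6):
    """Return the alignment: all cross pairs above the threshold."""
    return {(x, y) for x in a for y in b if similarity(x, y) >= threshold}

print(align(onto_a, onto_b))  # {('Conference', 'ConferenceEvent')}
```

Note that the pure string metric finds Conference ↔ ConferenceEvent but misses the synonym pair Paper ↔ Article: this is exactly the terminological heterogeneity discussed above, and it is why practical matchers combine several techniques rather than relying on string similarity alone.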
The challenges in this domain grow with every advance in information technologies. Ontology matching results can exhibit the same difficulties as the source ontologies: they can be large, complex, and heterogeneous. Moreover, as long as information keeps growing, new, different ontologies appear every day for the same information, adding heterogeneity .
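Once an alignment is available, merging can be sketched as rewriting the entities of one ontology into the vocabulary of the other and taking the union of their statements. All entity and relation names below are invented for illustration; real merging systems also resolve structural conflicts, which this sketch ignores.

```python
# Minimal sketch of ontology merging driven by an alignment.
ontology_a = {("Paper", "written-by", "Author")}
ontology_b = {("Article", "presented-at", "Conference")}
alignment = {"Article": "Paper"}   # correspondence found by a matcher

def merge(a, b, alignment):
    """Rewrite b's entities via the alignment and unite the triple sets."""
    rename = lambda e: alignment.get(e, e)
    return a | {(rename(s), p, rename(o)) for s, p, o in b}

merged = merge(ontology_a, ontology_b, alignment)
print(sorted(merged))
# [('Paper', 'presented-at', 'Conference'), ('Paper', 'written-by', 'Author')]
```

The merged ontology contains a single Paper concept carrying the statements of both sources, which is the enrichment effect that makes matching and merging a reuse-based form of knowledge acquisition.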
Table of contents:
Motivation and aim of the work
Frame of the work
1 State of the Art
1.2 Definitions of knowledge
1.3 Knowledge Management
1.4 Knowledge representation and reasoning
1.5 Knowledge Conceptualization
1.6 Knowledge acquisition
1.6.1 Ontology matching and merging
1.6.1.1 Schema-based systems
1.6.1.2 Instance-based systems
1.6.1.3 Mixed and meta-matching systems
1.6.1.4 Merging systems
1.6.1.5 Differences and similarities between this work and the related works
1.6.2 Knowledge acquisition in intelligent systems
1.6.2.1 Differences and similarities between this work and the related works
2 Modular framework for knowledge management
2.1 Logical view of the research framework
2.2 General Knowledge base
2.2.1 Ontological model
2.2.2 Extensibility through matching and merging
2.2.2.1 Matching phase
2.2.2.2 Merging phase
2.3 Approaches to knowledge construction
2.3.1 Top-down approach
2.3.2 Bottom-up approach
2.3.2.1 Self-Organizing Maps
2.3.2.2 Growing Hierarchical Self-Organizing Maps
2.4 Combined approach
2.5 Sharing the knowledge
3 Implementation and evaluation
3.1 Knowledge base implementation
3.1.1 Neo4J population through Cypher queries
3.1.1.1 Graph visualization
3.1.2 Extending the general knowledge base
3.1.2.1 Models creation and reconciliation
3.1.2.2 Creation of a reference alignment
3.2 Top-down approach implementation
3.2.1 Top-down approach evaluation
3.3 Bottom-up approach implementation
3.3.1 Data preparation
3.3.2 Bottom-up approach evaluation
3.3.3 Optimizing parameters for controlling maps growth
3.4 Combined approach implementation and evaluation
4 Validation and applications
4.1 Description of the robotic platform
4.2 Case study: home & office domain
4.3 Case study: cultural heritage domain