Evaluating the Suitability of Enterprise Modelling Languages to Support Assessments

ISO/IEC 33020 Measurement Framework

Although this thesis introduces approaches that can be applied to evaluate different types of enterprise entities and aspects, all case studies in this work relate to business process assessment. In this context, the ISO/IEC 33020 [20] standard was selected as the main Assessment Framework to guide the appraisals.
The ISO/IEC 330xx standard series is a well-known and established international standard that is widely used in the literature and in industry. It is composed of a set of standard documents describing the requirements for performing process assessment in organisations, the requirements for process measurement frameworks, the requirements for process reference, process assessment and maturity models, a guide for process improvement, a process measurement framework for the assessment of process capability, assessment models (e.g. for software testing, the system life cycle process, the safety extension), and a set of definitions of concepts and terminology that are relevant in the scope of the standard.
Specifically, the Measurement Framework defined by ISO/IEC 33020 contains a set of Process Capability Levels, in which each level groups one or more Process Attributes (PA). A PA represents a measurable property of process capability and implicitly defines a set of Indicators or Requirements to evaluate a business process. The framework also provides a result calculation method. Figure 1.2 presents the assessment meta-model with elements that are specialised for business process capability assessment. The framework was used as a basis to define the new elements.
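To make this structure concrete, the following Python sketch shows one possible representation of the Measurement Framework: Capability Levels grouping Process Attributes, each with its Indicators, and a rating function based on the N/P/L/F ordinal scale commonly associated with ISO/IEC 33020. The class names, fields and exact threshold placement are illustrative choices, not normative text from the standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model of the structure described above: capability levels
# group process attributes (PAs), and each PA is evaluated through indicators.
# Names and fields are our own modelling choices, not the standard's wording.

@dataclass
class Indicator:
    identifier: str
    description: str

@dataclass
class ProcessAttribute:
    identifier: str                     # e.g. "PA 2.1"
    name: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class CapabilityLevel:
    level: int                          # 0..5
    name: str
    attributes: List[ProcessAttribute] = field(default_factory=list)

def rate_attribute(achievement_percentage: float) -> str:
    """Map an achievement percentage to the N/P/L/F ordinal rating scale
    (boundary values as commonly cited for ISO/IEC 33020)."""
    if achievement_percentage <= 15:
        return "N"   # Not achieved
    elif achievement_percentage <= 50:
        return "P"   # Partially achieved
    elif achievement_percentage <= 85:
        return "L"   # Largely achieved
    return "F"       # Fully achieved
```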

Automated Results Calculation and Presentation

Regarding improved Results Calculation and Presentation activities of the assessment, the work by [27] describes the SEAL of Quality Assessment Tool, a software tool for software process assessment developed in the Software Engineering Applications Laboratory (SEAL) of the Electrical Engineering department of the University of the Witwatersrand. It aims at supporting assessments compliant with the ISO/IEC 15504 standard [94]. Its main functionality is based on storing the assessment framework in a database. Links to the evidence identified during the assessment can also be stored. Moreover, the tool allows registering the assessment processes, the participants and their roles. It also provides a user interface and the possibility to obtain data visualisations and reports about the assessment.

On the other hand, the paper by [28] introduced a knowledge-based decision support system for measuring enterprise performance. The knowledge base contains a set of rules used for inference. The data collection mechanism is based on the involvement of top managers, who give scores and weights for the relevant key dimensions based on their own experience. The weights provided by the managers for each dimension are summed and averaged (one possible reading of this aggregation is sketched below). The input information is used by the system to provide a view of the current state of the performance of the enterprise. The system also includes an artificial neural network to forecast future financial measures.

Following the same perspective, the work by [89] proposed an ontology-based Records Management (RM) evaluation system. It uses a reasoner that classifies information in a database, which contains the baseline and the actual state of the RM system as asserted individuals in the ontology devised by a knowledge engineer. These individuals are analysed using Semantic Web Rule Language (SWRL) rules defined in the rule base. The final result is calculated by the user interface, which compares the baseline RM system to the current state of the system. For data collection, an evaluator needs to input the baseline scenario (considered the ideal one) for the RM system that will be evaluated, as well as the current state of the system.

The work by [90] presented an expert tool for evaluating the Business Intelligence competences of enterprise systems. In the paper, a factor analysis is performed to determine that the intelligence of enterprise systems can be assessed through six main factors, which are measured through a set of 34 evaluation criteria collected via a questionnaire.

In [61], the authors presented the Software-mediated Process Assessment (SMPA) approach to automate the assessment of IT Service Management processes. The tool allows selecting the process to be assessed, and data collection is supported through an online survey. The results are obtained by automatically analysing the collected data to measure the process capability. Improvement recommendations are also provided by the system. The SMPA approach provides four main instruments: an assessment survey questionnaire, a method for allocating assessment questions to process roles, the logic to calculate the process capability, and a knowledge base for process improvement compliant with the recommendations defined by the Information Technology Infrastructure Library (ITIL), a library that includes a set of best practices to manage and support IT services.
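The aggregation mentioned for the system of [28] is not detailed beyond the fact that the managers' weights per dimension are summed and averaged. The following Python sketch shows one possible reading, in which the average weight of a dimension is applied to the average of the managers' scores; the function name, scoring scale and example values are hypothetical.

```python
# Hypothetical illustration of the aggregation described for [28]: each top
# manager provides a score and a weight per key performance dimension. The
# weights are averaged per dimension, and here the average weight is applied
# to the average score to obtain a single dimension value. The exact formula
# used by the cited system is not specified in the text.

def aggregate_dimension(scores: list[float], weights: list[float]) -> float:
    avg_weight = sum(weights) / len(weights)
    avg_score = sum(scores) / len(scores)
    return avg_weight * avg_score

# Example: three managers rate a "customer satisfaction" dimension (0-10 scale)
# with their own importance weights.
print(aggregate_dimension(scores=[7, 8, 6], weights=[0.30, 0.25, 0.35]))  # 2.1
```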
The work by [55] presents the AssessAgility software tool, which aims at automating and guiding the assessment process based on an exemplar assessment process containing the definitions and guidance to conduct assessments defined by AgilityMod, a reference model for performing agility assessment in organisations. The tool provides a graphical user interface with distinct sections for each role and responsibility of the assessment process. The data collection depends on the role: lead assessors can register and manage teams, the progress of the assessment, assignments, and reports, whereas assessors are allowed to register assessments, take notes, and upload evidence. The tool is focused on data collection, whilst the results calculation is performed manually by the assessment team. The work by [30] describes a Software as a Service tool for performing process assessment using the Tudor IT Process Assessment (TIPA) framework. In the tool, data collection is performed manually and is user-centred, meaning that it is based on the use of Personas, which are representations of real-world individuals in the platform. The latest version of the software (beta) covers almost all activities of an assessment process, from the definition of the assessment to the results presentation activity. All data must be registered manually; the ratings are manually introduced by the assessors and the final results are calculated automatically by the system. The paper [91] also addressed the TIPA framework by defining an ontology of TIPA for ITIL concepts, which can be further used to perform inference on the devised knowledge base. Finally, the work by [92] introduced a semi-automated tool for enterprise interoperability assessment that relies on an ontological core to automate the results calculation phase of an assessment of enterprise interoperability.

Systematic Literature Review and Analysis

There are two main activities in the literature review presented in this chapter. Those activities are shown in Figure 2.1 through a Business Process Model and Notation (BPMN) diagram [166]. The first activity is based on retrieving a set of papers containing definitions of Smartness and Smart Systems in the scientific literature. This activity produces a set of selected papers that are then used as input to the next activity, which is based on analysing the definitions contained in the papers through text mining methods.

Systematic Literature Review

An SLR is based on the application of a systematic process to define the research question, identify relevant studies, evaluate their quality and summarise the findings qualitatively or quantitatively [167]. Moreover, the tools used for selecting the studies must also be identified [168]. The search performed in this work is based on the methodology introduced by [169]. In general, systematic literature reviews are based on carrying out a cycle composed of three main phases: (1) define keywords, (2) search the literature, (3) analyse the results [170]. The main advantage of performing an SLR is that it follows a predefined strategy that provides concrete evidence regarding the data sources and the criteria used to select and analyse papers [77].
The SLR carried out in this work is a process that starts with the definition of the research question and is followed by the definition of the keywords, the search strings and the databases.
The search is then performed by querying the databases using the search strings. A filtering sub-process is then carried out, based on analysing the metadata of each paper to determine whether it satisfies the first set of criteria F1. The full text of a paper that satisfies F1 is then examined to check the second set of inclusion criteria F2. The paper is included if F2 is satisfied; any other paper is discarded. Details regarding each activity of the methodology are described in the following sections.
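As an illustration of this two-stage filtering, the Python sketch below applies a metadata predicate standing in for F1 and a full-text predicate standing in for F2. The concrete criteria belong to the SLR protocol and are not reproduced here, so the example predicates and the input format are hypothetical.

```python
from typing import Callable, Iterable

def filter_papers(papers: Iterable[dict],
                  f1: Callable[[dict], bool],
                  f2: Callable[[str], bool]) -> list[dict]:
    """Keep a paper only if its metadata satisfies F1 and its full text satisfies F2."""
    selected = []
    for paper in papers:
        if not f1(paper["metadata"]):
            continue                      # discarded at the metadata stage (F1)
        if f2(paper["full_text"]):
            selected.append(paper)        # included in the review (F2 satisfied)
    return selected

# Example predicates (hypothetical): F1 requires a title mentioning "smart system";
# F2 requires that the full text actually contains a definition.
f1 = lambda md: "smart system" in md["title"].lower()
f2 = lambda text: "is defined as" in text.lower() or "we define" in text.lower()
```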
Initially, the research question is defined. Note that the objective is to obtain a characterisation that is independent of the application domain. Hence, the research question that drives the SLR is:
• RQ: What are the characteristics of Smart Systems?
Considering RQ, the following specific questions are derived to serve as a guide for the analysis of the papers gathered with the SLR:
• What are the domains of the papers containing definitions?
• What is the time period in which the papers were proposed?
• What are the main keywords used by the authors to propose their definitions?
• Are there any common aspects in the definitions when considering different application domains?
• Considering common points, what are the characteristics of a smart system?

Smartness and Intelligence

The definitions of intelligent and smart found in the papers are diverse. Some works treat them as synonyms; for instance, [201] considers that a smart system is also intelligent, stating that « smart is intelligence – the ability to learn or understand or to deal with new or trying situations. The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests) ». Note that this definition is focused on behaviour. The work by [202] follows the same tendency, stating that « an intelligent system is in which different structures are able to co-operate with each other in a coherent way. A smart system is an intelligent system in which services can be exploited by users to their maximum ». However, in this case, the definition is more related to the structure than to the behaviour.
Other works consider them as different or complementary concepts. In the work by [142], for instance, it is stated that « Smart Systems meet to some extent unexpectedly the expectations of their users when they provide comfort or do some daily magic; they sense and act », whilst « intelligent systems solve problems in a rational way like humans do and are able to reflect and explain their threads of inference; they solve and justify ». The definition refers to the behaviour of the systems, stating that smartness includes only the capability to sense and act, whilst intelligence allows the system to reason and to explain why it performs actions. [203] presents intelligence as an evolution of smartness, stating that « the future of Intelligent Manufacturing will not stop at the level of smart, but to truly realise the intelligent. Intelligent Manufacturing system achieves autonomous learning, autonomous decision-making and continuous optimisation ». Finally, some papers refer to intelligence as a component that must be embedded into systems in order to make them smart. For instance, [204] considers that « sensors, machines, equipment, products, etc., are equipped with embedded local intelligence – which makes them smart objects – and are invisibly (over the cloud) interwoven in order to cooperate and negotiate with each other, thus becoming capable to reconfigure automatically themselves (through actuators) for flexible production of multiple types of products ». Moreover, [205] describes smart objects within the IoT from the same point of view: « objects in IoT are made smart through embedding intelligence using some innovational technologies ». However, the authors could possibly be describing the concept of knowledge when they refer to intelligence in their definitions.

Similarities within definitions

The final analysis of the definitions focused on describing the most frequent words for each domain and their relationships. With this approach, the objective was to extract aspects of Smart Systems that could be considered common to all domains. Table 2.4 presents the words that were present in the top-25 most frequent words (within the extracted definitions) for at least three different domains. It is worth mentioning that the words are counted after the pre-processing phase described in Section 2.3.2.
Table 2.4: Words, from the top-25 most frequent ones, that appear in the definitions from at least 3 domains. BM: Business and Management. IT: Information Technology.
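The cross-domain comparison behind Table 2.4 can be reproduced with a few lines of Python: count the pre-processed tokens of each domain's definitions, keep the 25 most frequent words per domain, and retain the words that reach the top-25 in at least three domains. The input format and variable names below are illustrative assumptions, not the thesis's actual implementation.

```python
from collections import Counter

def common_top_words(definitions_by_domain: dict[str, list[list[str]]],
                     top_n: int = 25, min_domains: int = 3) -> dict[str, int]:
    """Return the words that appear in the top-N most frequent words of at
    least `min_domains` domains, together with the number of domains reached.
    Each definition is assumed to be an already pre-processed list of tokens."""
    top_sets = {}
    for domain, definitions in definitions_by_domain.items():
        counts = Counter(token for tokens in definitions for token in tokens)
        top_sets[domain] = {word for word, _ in counts.most_common(top_n)}
    # Count in how many domains each word reaches the top-N.
    domain_hits = Counter(word for words in top_sets.values() for word in words)
    return {word: n for word, n in domain_hits.items() if n >= min_domains}

# Example call with three (hypothetical) domains:
# common_top_words({"IT": [...], "BM": [...], "Healthcare": [...]})
```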

Smart Capabilities in the Context of Design Science

Both the list of characteristics and the meta-model of smart systems presented in Section 2.5 are considered research artefacts obtained as a result of applying the DSR methodology. Within the Relevance Cycle, the main requirement was the definition of the elements that compose a Smart System and their relationships, in order to serve as a basis for the development of the SAF. The Rigour Cycle considered the papers that were gathered through the SLR introduced in Section 2.3. Moreover, the information related to GST, the literature related to the term « smart » in diverse domains, and the related work presented in Section 2.2 were also considered relevant elements that were treated as part of the knowledge base for DSR. Finally, the construction of the artefacts within the scope of the Design Cycle was based on the SLR and the analysis of the gathered papers using the quantitative approaches described in Section 2.3. Modelling concepts and methodologies were also applied for the construction of the meta-model, which was qualitatively evaluated to check its correctness in terms of semantics and syntax. Note that the ultimate validation metric for the artefacts is based on assessing whether they effectively serve as part of the knowledge base used for the development of the SAF or any other Smart System. In this sense, it can be considered that there exists a validation pipeline: if the SAF is validated, then the knowledge base used to build it is validated, and thus the artefacts introduced in this work can also be considered valid.

Table of contents:

Part I Towards a Smart Assessment Framework
Chapter 1 State of the Art
1.1 Introduction
1.2 Enterprise Assessment
1.2.1 Types of assessment
1.2.2 Maturity and Capability Assessment
1.2.3 ISO/IEC 33020 Measurement Framework
1.3 Related Work
1.3.1 Automated Results Calculation and Presentation
1.3.2 Automated Data Collection
1.3.3 Frameworks to Enhance Assessments
1.4 Research Gaps and Contributions
Chapter 2 Characterising Smart Systems
2.1 Introduction
2.2 Background
2.2.1 General Systems Theory
2.2.2 The Concept of Smartness in Different Domains
2.2.3 Previous Literature Reviews
2.3 Systematic Literature Review and Analysis
2.3.1 Systematic Literature Review
2.3.2 Papers Analysis
2.4 Literature Review Results and Discussion
2.4.1 Application domains
2.4.2 Publication period
2.4.3 Text mining results
2.4.4 Smartness and Intelligence
2.4.5 Similarities within definitions
2.5 Characteristics of Smart Systems
2.6 Threats to the Validity of the Literature Review
2.7 Smart Capabilities in the Context of Design Science
2.8 Conclusion
Chapter 3 The Smart Assessment Framework
3.1 Introduction
3.2 The Smart Assessment Meta-model
3.3 Meta-model Specialisation: Interoperability Assessment
3.4 The Smart Assessment Framework (SAF)
3.4.1 Data Perception Service
3.4.2 Organisation Service
3.4.3 Presentation Service
3.4.4 Data Flow in SAF
3.5 SAF Case Study: Business Process Capability Assessment
3.6 The Conceptual Models in the Context of Design Science
3.7 Conclusion
Part II Smart Assessment Framework Implementations 
Chapter 4 A Hybrid Approach to Perform Assessments Using Text Evidence
4.1 Introduction
4.2 Hybrid Approach to Evaluate Business Processes
4.2.1 Overview of the Approach
4.2.2 LSTM Model
4.2.3 Knowledge Base
4.3 Case Study
4.3.1 Case Study Description
4.3.2 Specialisation for Samples Management Process
4.3.3 LSTM Training and Testing
4.3.4 Assessment Results
4.4 Software Tool
4.5 The Hybrid Approach in the Context of Design Science
4.6 Conclusion
Chapter 5 Evaluating the Suitability of Enterprise Modelling Languages to Support Assessments
5.1 Introduction
5.2 Enterprise Models and Assessments
5.3 Modelling Language and Assessment Requirements Analysis
5.3.1 Requirements Decomposition
5.3.2 Requirement and modelling elements matching
5.4 Analysing BPMN and ArchiMate Considering ISO/IEC 33020
5.4.1 Considerations
5.4.2 Process Attributes and Indicators of ISO/IEC 33020
5.4.3 Results and Discussion
5.5 Assessing Capability Level of a Business Process
5.5.1 BPMN
5.5.2 ArchiMate
5.5.3 Discussion
5.6 The Enterprise Models Analysis Approach in the Design Science Context
5.7 Conclusion
Chapter 6 An Approach to Evaluate Business Process Capability Using Process Models
6.1 Introduction
6.2 Previous Attempts to Evaluate Process Models
6.3 GCN Approach for Business Process Models Assessment
6.3.1 Process Model Generator
6.3.2 Process Model Classifier
6.3.3 GCN Deployment
6.4 GCN Approach Experiments and Results
6.4.1 System Setup
6.4.2 Dataset
6.4.3 Results
6.5 The GCN Approach in the Design Science Context
6.6 Conclusion
General Conclusion
Bibliography
