Software Performance Engineering

Mapping Study Context

Motivation

We discussed the motivation for our topic more broadly in Chapter 2. Here we focus on the specific challenges faced by empirical performance analysis that we aim to address with this mapping study.
We know that the increasing scale of dynamic behaviour in modern systems means the amount of data produced by profilers has become overwhelming. This volume of data is very difficult to interpret manually, so we are interested in the form of feedback provided by empirical performance analysis approaches and the techniques they use to aid in interpreting performance data. In particular, we are looking for approaches that aim to provide actionable feedback: information that gives specific advice on what to change, and even how to make the change, to improve the performance of an application.
It is also relevant how practical an approach is to apply, as this can affect its utility. Approaches that are easier to apply, particularly those that can be used safely in production environments, have a distinct advantage over approaches that require specific conditions or setup to work accurately. The easier and safer an approach is, the more likely it is to be used in practice.
More generally, we are interested in how effective the described performance analysis approaches are at helping to improve performance. To that end we are interested in how the approaches described in the literature are evaluated, as a good evaluation is necessary to verify that an approach is useful in practice.

Scope

The scope of this mapping study is empirical performance analysis approaches that are applicable to object-oriented software. We have not limited the scope to approaches specific to object-oriented software; we have also included approaches that are applicable to most applications and are therefore likely to be useful when analysing a typical object-oriented application. The aim is to provide a survey of techniques relevant both to practitioners who develop object-oriented applications and to researchers interested in large-scale object-oriented applications and runtime bloat.
We completed this mapping study during the second half of 2013, and we cover empirical performance analysis approaches described in the literature since January 2000.
We have chosen to cover empirical approaches only, and to exclude model-based approaches (discussed in Section 2.1), because empirical approaches are the focus of our research. As discussed in Chapter 2, empirical approaches are our focus because they are more commonly used in industry than model-based approaches [Woodside et al. 2007]. They are generally complementary to, and used independently of, model-based approaches. That is to say, empirical approaches form a distinct body of research largely independent from model-based software performance engineering.

We chose January 2000 as the start date because it represented a practical starting point for the review. It was far enough in the past to include the majority of relevant approaches, but recent enough that all relevant published literature was indexed and available in the major electronic databases. We have not updated our search results to cover the time since we first completed the mapping study in 2013. This is because we wanted to present the results as we found them at that time, as they formed the motivation for the approaches we describe in Chapters 5 and 8. We cover all the related work, including more recent publications, in the later evaluation chapters for our new approaches.
A discussion of the possible impact of these restrictions on scope is included in Section 3.5.

Related Surveys

There are three existing survey papers published since the year 2000 that discuss literature in some area of software performance [Balsamo et al. 2004, Koziolek 2010, Xu et al. 2010b]. None of the three is a systematic literature review or mapping study: two are traditional expert reviews of a particular specialised field, and one is a position paper that includes a survey of current research into runtime bloat.
Balsamo et al. [Balsamo et al. 2004] survey model-based performance prediction in software development. The survey provides a detailed description, classification and analysis of 16 different integrated methods for model-based software performance prediction. Its goal is to assess the maturity of research in the field and to point out promising research directions. The survey has a different focus from ours, and it does not consider empirical performance analysis approaches. The age of the survey (it was completed over ten years ago) also means it does not include the more recent literature, which is a large proportion of the research we surveyed.
Koziolek [Koziolek 2010] presented a survey on the performance evaluation of component-based software systems. The survey was limited to approaches for component-based software, that is, software based on frameworks such as Java EJB, Microsoft COM or CORBA CCM, but it covered both empirical measurement-based and predictive model-based approaches. The survey contains a detailed breakdown and analysis of the literature covered, with useful information and conclusions for both practitioners and researchers; however, with its focus on component-based software and its inclusion of model-based approaches, it covers only a small subset of the research we are interested in.
Xu et al. [Xu et al. 2010b] summarised the state of research into runtime bloat. The focus of this position paper was to describe software bloat and to argue why it is primarily a software engineering problem. The paper was motivated by many of the same concerns that have motivated our research. It surveys existing research into runtime bloat and outlines possible future research directions. However, there is currently only a small body of work that identifies itself as investigating software bloat. The paper therefore covers only a small amount of the literature we are interested in, which is applicable to object-oriented software performance more generally.

Related Literature

Besides the related surveys described above, there is a great deal of literature related to software performance that we did not include in our review. Here we describe the major research areas that we excluded from our mapping study and the reasons why they were excluded.
As we noted in Section 3.1.2, our focus on empirical performance engineering has meant we excluded literature related specifically to model-based SPE approaches. These are performance prediction approaches based on constructing and solving mathematical models of the running system. There is a large body of literature describing how to create or improve useful mathematical models, discussing new, more efficient algorithms for solving the created models, or describing tools and methodologies for applying the approaches in practice.
Because of our focus on object-oriented software, we also excluded literature with a specific focus on some other specialised field. This included performance analysis approaches for:

  • Real-time systems
  • Embedded systems
  • Distributed systems
  • HPC and massively parallel systems
Performance analysis approaches focussed on these areas are concerned with aspects of performance that are not applicable to object-oriented software generally. For example, embedded systems tend to have severely constrained hardware and specialised runtime environments that make performance monitoring difficult, and performance analysis for distributed systems usually focuses on remote communication patterns. We also excluded a substantial amount of research into performance for HPC and massively parallel systems that focuses on the challenges unique to supercomputing and cluster computing scenarios. This includes research into understanding and improving parallelisation and inter-node communication, computer-architecture-related inefficiencies such as non-uniform memory access, and approaches for processing, analysing and visualising the enormous quantities of performance data generated by high-end systems.
Finally, we excluded a range of research that leveraged dynamic analysis of software but had goals other than performance understanding. These goals included:
  • General purpose program comprehension or reverse engineering
  • Automated verification or validation, defect detection
  • Workload characterisation, normally for automated test workload generation
  • Monitoring for intrusion detection
  • Monitoring for quality of service violations or capacity planning purposes
All of these use runtime monitoring or profiling systems that are similar in nature to the dynamic data capture systems used for performance analysis.

Methodology
Overview

A Systematic Literature Review (SLR) is described by Kitchenham & Charters as “a methodologically rigorous review of research results” [Kitchenham and Charters 2007]. A systematic mapping study is a form of SLR that aims to give a broader overview of a particular field. It does not evaluate the articles in as much depth as an SLR, with the advantage that a broader range of primary studies is covered [Petersen et al. 2008].
We have undertaken a systematic mapping study of empirical performance analysis approaches applicable to object-oriented software. In general we followed the SLR guidelines produced by Kitchenham & Charters [Kitchenham and Charters 2007, Kitchenham et al. 2010] and also incorporated some of the recommendations given by Petersen et al. [Petersen et al. 2008].
The high-level steps in our review process were:

  • Define research questions
  • Perform manual search to pilot inclusion/exclusion criteria and generate a reference set of articles
  • Develop automated search strategy
  • Formalise review protocol
  • Conduct search for relevant studies
  • Screen and select studies for inclusion
  • Extract data from the included studies
  • Analyse the extracted data
The manual search at step 2 was a necessary addition to the typical SLR process. As described in Section 3.2.3, it assisted us with developing the inclusion criteria and the automated search phrases required to create the formal review protocol.
The other departure from recommended SLR procedure was that we did not perform any quality assessment on the primary studies. This is customary for systematic mapping studies, which aim to structure the literature of an entire field and therefore to include as much relevant literature as possible [Petersen et al. 2008].
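To make the search and screening steps concrete, the sketch below shows one way the merge-and-filter shape of steps 5 and 6 could be expressed in code: candidate articles returned by several electronic databases are combined, duplicates are dropped, and simple inclusion criteria are applied. It is an illustration only; the Article fields, the keyword list and the matches_criteria helper are hypothetical stand-ins for the full written inclusion/exclusion criteria in our review protocol, not part of it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Article:
    title: str
    year: int
    abstract: str

# Hypothetical keyword screen; the real protocol used piloted search
# phrases and written inclusion/exclusion criteria.
KEYWORDS = ("performance analysis", "profiling", "runtime bloat")

def matches_criteria(article: Article) -> bool:
    """Apply the date restriction and a crude keyword screen to one candidate."""
    if article.year < 2000:  # the review covers literature since January 2000
        return False
    text = (article.title + " " + article.abstract).lower()
    return any(keyword in text for keyword in KEYWORDS)

def screen(results_per_database: list[list[Article]]) -> list[Article]:
    """Merge results from all databases, drop duplicates, keep matching studies."""
    seen: set[str] = set()
    included: list[Article] = []
    for results in results_per_database:
        for article in results:
            key = article.title.strip().lower()  # naive duplicate detection by title
            if key in seen:
                continue
            seen.add(key)
            if matches_criteria(article):
                included.append(article)
    return included

In an actual review, screening means reading titles and abstracts against the written criteria; an automated filter like this can at most produce a deduplicated candidate list for that manual screening, which is roughly the role the automated search played in our process.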

Research Questions
The focus of our systematic review is empirical software engineering approaches to performance analysis for object-oriented software. Our research questions are motivated by this focus and by the challenges in empirical performance analysis discussed in Section 3.1.1:

  • What approaches to empirical performance analysis have been proposed that are applicable to object-oriented software?
  • How can these approaches be characterised?
  • What form of feedback does each approach provide?

1 Introduction 
1.1 Scope
1.2 Research Contributions
1.3 Dissertation Organisation
2 Background 
2.1 Software Performance Engineering
2.2 Empirical Performance Analysis
2.3 Challenges of Empirical Performance Analysis
2.4 Summary
3 Systematic Mapping of Empirical Performance Analysis Approaches 
3.1 Mapping Study Context
3.2 Methodology
3.3 Results
3.4 Discussion
3.5 Threats to Validity
3.6 Conclusion
4.1 Motivation 
4.2 Methodology
4.3 Analysis Toolset
4.4 Experimental Harness
4.5 Summary
5 Subsuming Methods Analysis 
5.1 Understanding Runtime Costs
5.2 A Motivating Example
5.3 Subsuming Methods Analysis
5.4 Conclusion
6 Subsuming Methods Analysis Evaluation 
6.1 Methodology
6.3 Implementation Efficiency
6.4 Case Studies
6.5 Discussion
6.6 Related Work
6.7 Conclusion
7 Letterboxd: An Industrial Case Study
7.1 Background: e Industrial Setting
7.2 Methodology
7.3 Case Study Results
7.4 Threats to Validity
7.5 Lessons Learned
7.6 Conclusion
8 Efficiency Analysis
8.1 Understanding Runtime Efficiency
8.2 Quantifying Value
8.3 Blended Efficiency Analysis
8.4 Example – Time Formatting
8.5 Conclusion
9 Efficiency Analysis Evaluation 
9.1 Methodology
9.2 Quantitative Measures
9.3 Implementation Efficiency
9.4 Case Studies
9.5 Discussion
9.6 Related Work
9.7 Conclusion
10 Conclusion
10.1 Key Research Contributions
10.3 Conclusion
Performance Analysis for Object-Oriented Software
