PRIMITIVE REFACTORINGS AS FGT COLLECTIONS


CHAPTER 2 REFACTORING: STATE OF THE ART

Software Evolution

“Software evolution is an essential part of the software development process. Nearly all software inevitably undergoes changes during its lifetime. Changes can be large or small, simple or complex, important or trivial – all of which influence the effort needed to implement the changes” [51]. Sommerville [79] explains that proposals for change are the driver for system evolution. Change identification and evolution continue throughout the system’s lifetime. Lehman & Belady [46] conducted empirical studies into software evolution and concluded the following eight laws:

  • Continuing change: Software that is used in a real-world environment must necessarily change, or it becomes progressively less useful in that environment.
  • Increasing complexity: As evolving software changes, its structure tends to become more complex. Extra resources must be devoted to preserving and simplifying the structure.
  • Large program evolution: Software evolution is a self-regulating process. System attributes such as size, time between releases and the number of reported errors are approximately invariant for each system release.
  • Organizational stability: Over a software system’s lifetime, its rate of development is approximately constant and independent of the resources devoted to system development.
  • Conservation of familiarity: Over the lifetime of a software system, the incremental change in each release is approximately constant.
  • Continuing growth: The functionality offered by systems has to increase continually to maintain user satisfaction.
  • Declining quality: The quality of systems will appear to be declining unless they are adapted to changes in their operational environment.
  • Feedback system: Evolution processes incorporate multi-agent, multi-loop feedback systems and must be treated as such to achieve significant product improvement.

Experience over the last 30 years has shown that making software changes without visibility into their effects can lead to poor effort estimates, delays in release schedules, degraded software design, unreliable software products, and the premature retirement of the software system. The immaturity of current-day software evolution is clearly stated in the foreword of the International Workshop on Principles of Software Evolution [69]:
“Software evolution is widely recognised as one of the most important problems in software engineering. Despite the significant amount of work that has been done, there are still fundamental problems to be solved. This is partly due to the inherent difficulties in software evolution, but also due to the lack of basic principles for evolving software systematically.”
Software evolution is not restricted to the implementation phase only. Even in the earlier phases of requirements specification, analysis and design, evolution is a strict necessity. To date, most research on evolution has been dedicated to the implementation and maintenance phases, and to a lesser degree to the earlier phases of requirements specification and design [12, 15, 33, 41, 87, 88]. However, there is a tendency to shift towards the earlier phases.

Refactoring

Although, in the context of software reengineering, refactoring is often used to convert legacy code into a more modular or structured form [20], refactoring can also be applied to any type of software artifact. For example, it is possible and useful to refactor design models, database schemas, software architectures and software requirements. Refactoring these kinds of software artifacts frees the developer from many implementation-specific details and raises the expressive power of the changes that are made. On the other hand, applying refactorings to different types of software artifacts introduces the need to keep them all in sync [59].
The following subsections introduce refactoring at the different types of software artifacts.

Code Level

Non-Object-Oriented Programming Languages

Programs that are not written in an object-oriented language are more difficult to restructure because data flow and control flow are tightly interwoven. Because of this, restructurings are typically limited to the level of a function or a block of code [59].
In [27], Griswold proposes a technique to restructure programs written in a block-structured programming language; the language he worked on is Scheme. His transformations concern program restructuring for aiding maintenance. To ensure that the transformations are meaning preserving, he uses Program Dependence Graphs to reason about the correctness of the transformation rules.
Lakhotia and Deprez [42] present a transformation called tuck for restructuring programs by decomposing large functions into small functions. The transformation breaks up large code fragments and tucks them into new functions. The challenge they faced was creating new functions that capture computations that are meaningfully related. There are three basic transformations used to tuck functions:

  • Related code is gathered by driving a wedge (a program slice bounded by a single entry and a single exit point) into the function.
  • Then the code that is isolated by the wedge is split.
  • Finally, the split code is folded into a function.

These transformations can even create functions from non-contiguous code.
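The wedge–split–fold idea can be sketched on a toy Python function. The code below is only an illustration of the general idea, not Lakhotia and Deprez's transformation itself; the function names and the chosen wedge are invented for this example.

```python
# Before tucking: one large function interleaves the mean computation
# with the formatting of the report.
def compute_report(values):
    total = 0
    count = 0
    for v in values:          # these lines form the "wedge": a
        total += v            # single-entry, single-exit slice that
        count += 1            # computes the count and the mean
    mean = total / count
    return f"n={count}, mean={mean}"

# After tucking: the wedge has been split out of the original body
# and folded into a new, meaningfully related function.
def mean_of(values):
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1
    return count, total / count

def compute_report_tucked(values):
    count, mean = mean_of(values)
    return f"n={count}, mean={mean}"

# The restructuring is behaviour preserving:
assert compute_report([2, 4, 6]) == compute_report_tucked([2, 4, 6])
```

The same mechanism is what allows tucking to extract non-contiguous code: the wedge is defined by a slice, not by textual adjacency.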

Object-Oriented Programming Languages

Opdyke, in his PhD thesis [65], was the first to introduce the term refactoring. His proposed refactorings were in the context of object-oriented programming languages. He identified twenty-three primitive refactorings and gave examples of three composite refactorings. He arrived at his collection of refactorings by observing several systems and recording the types of refactorings that OO programmers applied.
The importance of Opdyke's achievements lies not only in the identification of refactorings, but also in the definition of the preconditions required to apply a refactoring to a program without changing its behaviour. To this end, he defined for each primitive refactoring a set of precondition conjuncts that ensure that the refactoring preserves behaviour.
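As a sketch of this idea (not Opdyke's actual notation), the following Python fragment checks a set of precondition conjuncts before applying a toy renameMethod refactoring to a minimal class model; the model shape and all names are assumptions made purely for illustration.

```python
# A class model is sketched as a dict: class name -> list of method names.
def can_rename_method(model, cls, old, new):
    """Every conjunct must hold for the refactoring to preserve behaviour."""
    conjuncts = [
        cls in model,              # the target class exists
        old in model.get(cls, []), # the method to rename exists
        new not in model.get(cls, []),  # the new name is not already taken
    ]
    return all(conjuncts)

def rename_method(model, cls, old, new):
    if not can_rename_method(model, cls, old, new):
        raise ValueError("precondition violated; refactoring refused")
    methods = model[cls]
    methods[methods.index(old)] = new

model = {"Printer": ["print", "reset"]}
rename_method(model, "Printer", "print", "printDocument")
assert model["Printer"] == ["printDocument", "reset"]
```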
Roberts, in his PhD thesis [70], improves on the work of Opdyke. He gives a definition of refactorings that focuses on their pre- and postcondition conjuncts. The definition of postcondition conjuncts eliminates much of the program analysis otherwise required within a chain of refactorings. This follows from the observation that refactorings are typically applied in a sequence intended to set up the precondition conjuncts of later refactorings.
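This observation can be sketched as follows. The Refactoring class, the fact strings and the refactoring names below are hypothetical; they only illustrate how the postconditions of earlier steps discharge the preconditions of later ones, so the program is analysed once, up front, rather than before every step.

```python
# Each refactoring carries precondition conjuncts and postcondition
# conjuncts over a set of program facts.
class Refactoring:
    def __init__(self, name, pre, post_add, post_remove=()):
        self.name = name
        self.pre = set(pre)              # facts required before applying
        self.post_add = set(post_add)    # facts guaranteed afterwards
        self.post_remove = set(post_remove)

def check_chain(initial_facts, chain):
    """Validate a sequence of refactorings by propagating facts forward."""
    facts = set(initial_facts)  # obtained from a single initial analysis
    for r in chain:
        if not r.pre <= facts:
            return False, r.name  # a precondition conjunct is not set up
        facts = (facts - r.post_remove) | r.post_add
    return True, None

add_class = Refactoring("addClass(B)", pre=["absent(B)"],
                        post_add=["class(B)"], post_remove=["absent(B)"])
pull_up = Refactoring("pullUpMethod(m, B)", pre=["class(B)"],
                      post_add=["method(B, m)"])

# addClass establishes the precondition of pullUpMethod:
ok, failed = check_chain(["absent(B)"], [add_class, pull_up])
assert ok and failed is None
```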
In his book [22], Fowler presents a catalogue of refactorings. Each refactoring is given a name and a short summary that describes it, a motivation explaining why the refactoring should be done, a step-by-step description of how to carry it out, and an example.
Back [3] proposes a method called stepwise feature introduction for software construction. The method is based on incrementally extending the system with one new feature at a time. Introducing a new feature may destroy some already existing features, so the method must allow checking that old features are preserved.


Design Level Models

A recent research trend is to deal with refactoring at the design level, for example in the form of UML models [64]. Applying refactoring to models rather than to source code brings a number of benefits [23]. Firstly, software developers can simplify design evolution and maintenance, since the need for structural changes can be identified and addressed more easily on an abstract view of the system. Secondly, developers are able to address deficiencies uncovered by model evaluation, improving specific quality attributes directly on the model. Thirdly, a designer can explore alternative design decisions more cheaply (although small prototypes may be necessary). A typical scenario for model refactoring is the incorporation of design patterns into a system’s design model [37].
France et al. [23] identified two classes of model transformations: vertical and horizontal transformations. Vertical transformations change the level of abstraction, whereas horizontal transformations maintain the level of abstraction of the target model. A model refactoring is an example of horizontal transformation. In contrast, the Model-Driven Architecture (MDA) approach [78], in which abstract models automatically derive implementation-specific models and source code, provides examples of a vertical transformation.
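The distinction can be sketched in Python on a toy class model. The model shape and function names below are assumptions for illustration only: rename_class stands for a horizontal transformation (a model refactoring that stays at the same abstraction level), while generate_code stands for a vertical, MDA-style transformation that maps the model down to implementation-level code.

```python
# A toy design model: a list of classes with their attributes.
model = {"classes": [{"name": "Node", "attrs": ["address"]}]}

def rename_class(model, old, new):
    """Horizontal: the model is rewritten at the same abstraction level."""
    for c in model["classes"]:
        if c["name"] == old:
            c["name"] = new
    return model

def generate_code(model):
    """Vertical: the abstract model derives implementation-level code."""
    lines = []
    for c in model["classes"]:
        lines.append(f"class {c['name']}:")
        lines += [f"    {a} = None" for a in c["attrs"]]
    return "\n".join(lines)

rename_class(model, "Node", "Workstation")   # horizontal transformation
code = generate_code(model)                  # vertical transformation
assert code.startswith("class Workstation:")
```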
Although refactoring models simplifies software evolution, automation and behaviour preservation are even more complex issues when dealing with models. Editing a class diagram may be as simple as adding a new line when introducing an association, but such a change must be followed by identifying the affected lines of source code, manually updating the source, testing the changes, fixing bugs and retesting the application until the original behaviour is recovered [83]. Methods and tools that partially or even totally remove human interaction from this process are invaluable for refactoring practice.
Sunyé et al. [81] have provided a fundamental paradigm for model refactoring to improve the design of object-oriented applications. They present refactorings of class diagrams and statecharts. In order to guarantee behaviour-preserving transformations of statecharts, they specify the constraints that must be satisfied before and after the transformation using the OCL at the meta-model level.

ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS 
LIST OF FIGURES 
LIST OF TABLES 
LIST OF ALGORITHMS 
I Prologue
1. INTRODUCTION
1.1 The Problem
1.2 The Proposed Formalism
1.3 Thesis Overview
2. REFACTORING—STATE OF THE ART
2.1 Software Evolution
2.2 Refactoring
2.2.1 Code Level
2.2.1.1 Non-Object-Oriented Programming Languages
2.2.1.2 Object-Oriented Programming Languages
2.2.2 Design Level Models
2.2.3 Database Schemas Level
2.2.4 Software Architectural Level
2.2.5 Software Requirements Level
2.3 Formalisms
2.3.1 Graph Transformations
2.3.2 Pre- and Postcondition
2.3.3 Program Slicing
2.3.4 Formal Concept Analysis
II The Approach
3. LOGIC-BASED REPRESENTATION 
3.1 Introduction
3.2 Object Element Logic-Terms
3.3 Relation Element Logic-Terms
3.4 Example
3.5 Reflection on this Chapter
4. FGT-BASED APPROACH
4.1 Introduction
4.2 Fine-Grain Transformations (FGTs)
4.3 FGT Sequential Dependency
4.4 FGTs for Primitive and Composite Refactorings
4.5 Reflection on this Chapter
5. PRIMITIVE REFACTORINGS AS FGT COLLECTIONS
5.1 Introduction
5.2 Add Element Refactorings
5.3 Change Element Refactorings
5.4 Delete Element Refactorings
5.5 Reflection on this Chapter
6. MOTIVATED EXAMPLE 
6.1 LAN Simulation
6.2 Logic-Based Representation
6.3 encapsulateAttribute Refactoring
6.4 createClass Refactoring
6.5 pullUpMethod Refactoring
6.6 LAN after Refactorings
III Features Of The Approach
7. REDUNDANCY REMOVAL
7.1 Introduction
7.2 Absorbing Reduction
7.3 Cancelling Reduction
7.4 Advantages of Reduction Process
7.5 Reduction Algorithm
7.6 Example
7.7 Efficiency Considerations
8. DETECTING AND RESOLVING CONFLICTS 
8.1 Introduction
8.2 Conflicts in FGT-Based Approach
8.3 FGT’s Conflicts-Pairs
8.4 Conflict Algorithm
8.5 LAN Motivated Example
8.6 Reflections on Conflicts
9. SEQUENTIAL DEPENDENCY BETWEEN REFACTORINGS
9.1 Introduction
9.2 Sequential Dependency in Previous Approaches
9.3 Sequential Dependency between FGT-Based Refactorings
9.4 Sequential Dependency Algorithm
9.5 Deadlock Problem
9.6 LAN Motivated Example
10. COMPOSITE REFACTORINGS 
10.1 Introduction
10.2 FGT-based Composite Refactoring
10.3 Examples
10.4 Reflection on this Chapter
11. PARALLELIZING OPPORTUNITIES
11.1 Introduction
11.2 Parallelizing Opportunities
11.3 Reflection on Parallelization
12. NEW REFACTORINGS
12.1 Introduction
12.2 Example
12.3 New Refactorings in the FGT-Based Approach
12.4 Reflection on this Chapter
IV Epilogue
13. CONCLUSIONS 
13.1 Summary
13.2 Conclusions
13.3 Future Work
V Appendix