Detection of Mobile-Specific Code Smells

Code Smells in Mobile Apps

In this section, we present research works related to the identification, detection, and empirical study of mobile-specific code smells. We also briefly cover studies about object-oriented (OO) code smells in mobile apps and other software systems.

Identification of Mobile-Specific Code Smells

The works that identified mobile-specific code smells laid the foundation for all other studies about mobile code smells. The main work in this category was conducted by Reimann et al. [169], who proposed a catalog of mobile-specific code smells.
Reimann et al. identified 30 quality smells dedicated to Android based on the following information providers:
— Documentation: official sources, like the Android Developer Documentation, and semi-official ones, like Google I/O talks;
— Blogs: blogs from developers at Google or other companies who explain development issues with Android;
— Discussions: question-and-answer forums, like the Stack Exchange network, bug tracker discussions, and other developer discussion forums.
To identify code smells from these resources, Reimann et al. opted for a semi-automated approach. First, they automatically crawled and queried (when possible) the information providers and gathered their data into a local database. Then, they filtered this data using topic keywords like “energy efficiency”, “memory”, and “performance”, paired with issue keywords like “slow”, “bad”, “leak”, and “overhead”. The results of this filtering were again persisted in a database. Finally, the authors read the collected resources and extracted a catalog of code smells dedicated to Android. While the catalog included 30 quality smells, many of them were only “descriptive and abstract”. That is, Reimann et al. only provided a detailed specification for nine code smells; the other 21 were not characterized using a catalog schema and did not have a precise definition. Interestingly, later studies provided detailed definitions for some of these code smells in order to study them; we present these works in the upcoming sections.
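Reimann et al. did not publish their filtering tooling; the following Java sketch only illustrates the kind of keyword-pair filter described above, with the keyword lists taken from the text and everything else hypothetical:

    import java.util.List;

    public class SmellFilter {
        // Topic and issue keywords as reported by Reimann et al.
        private static final List<String> TOPICS =
                List.of("energy efficiency", "memory", "performance");
        private static final List<String> ISSUES =
                List.of("slow", "bad", "leak", "overhead");

        // A document is kept when it mentions at least one topic
        // keyword and at least one issue keyword.
        static boolean isRelevant(String document) {
            String text = document.toLowerCase();
            return TOPICS.stream().anyMatch(text::contains)
                    && ISSUES.stream().anyMatch(text::contains);
        }

        public static void main(String[] args) {
            // "memory" + "leak" -> kept for manual reading.
            System.out.println(isRelevant("This listener causes a memory leak"));
        }
    }

Reimann et al. used the following schema to present the nine defined code smells: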
— Name;
— Context: generally UI, implementation, or network;
— Affected qualities: the aspect of performance or user experience that may be affected by the quality smell;
— Roles of Interest: the source code entity that hosts the code smell;
— Description;
— Refactorings;
— References;
— Related Quality Smells.
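For illustration, this schema maps naturally onto a simple data structure. The record below is a hypothetical encoding that merely mirrors the catalog entries listed above; it is not part of Reimann et al.’s work:

    import java.util.List;

    // Hypothetical encoding of the catalog schema; field names mirror the entries above.
    public record QualitySmell(
            String name,
            String context,                    // generally UI, implementation, or network
            List<String> affectedQualities,    // performance or user experience aspects
            List<String> rolesOfInterest,      // source code entities hosting the smell
            String description,
            List<String> refactorings,
            List<String> references,
            List<String> relatedQualitySmells) {}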

Studies About Mobile-Specific Code Smells

The studies of mobile-specific code smells fall into two categories: (i) quality studies and (ii) performance studies.
Quality Studies
The closest study to our research topic is that of Hecht et al. [108], who used code smells as a metric to track the quality of Android apps. The study analyzed four Android-specific and three OO code smells, and covered 3,568 versions of 106 Android apps. The four studied Android-specific code smells included two smells from the catalog of Reimann et al. [169], namely Leaking Inner Class and Member Ignoring Method, and two new code smells called UI Overdraw and Heavy Broadcast Receiver. The authors described these code smells based on the Android documentation, but did not provide any details about their identification process. They relied on Paprika to detect these code smells, and their results showed that the quality of Android apps generally follows one of five patterns: constant decline, constant rise, stability, sudden decline, and sudden rise. Nevertheless, these patterns were built by combining both OO and Android-specific code smells, so they cannot provide fine-grained insights about the evolution of Android-specific code smells. Interestingly, in the discussion of their results, the authors suggested that rises in the quality metric are due to refactoring operations, but they did not provide evidence to support this hypothesis.
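To make two of these smells concrete, the fragment below shows how Leaking Inner Class and Member Ignoring Method typically appear in Android code. The class is a minimal, hypothetical sketch and is not taken from the studied apps:

    import android.app.Activity;
    import android.os.Bundle;
    import android.os.Handler;

    public class MainActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Leaking Inner Class: the anonymous Runnable holds an implicit
            // reference to MainActivity; if the delayed message outlives the
            // Activity (e.g., across a rotation), the Activity cannot be
            // garbage collected.
            new Handler().postDelayed(new Runnable() {
                @Override
                public void run() {
                    setTitle("Reminder"); // implicit use of the outer Activity
                }
            }, 60_000);
        }

        // Member Ignoring Method: the method reads no instance state and
        // overrides nothing, so it could be declared static.
        private int square(int x) {
            return x * x;
        }
    }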
Another close study was conducted by Mateus and Martinez [134], who used OO and Android-specific code smells as metrics to study the impact of the Kotlin programming language on app quality. Their study included four OO and six Android-specific code smells and covered 2,167 Android apps. They found that three of the studied OO smells and three of the Android-specific ones proportionally affected more apps containing Kotlin code, although the differences were generally small. They also found that introducing Kotlin into apps originally written in Java increased the quality score of at least 50% of the studied apps.
Another close study is that of Gjoshevski and Schweighofer [87], which did not explicitly study mobile-specific code smells but relied on Android Lint rules. The study covered 140 Lint rules and 30 open-source Android apps, with the aim of identifying the most prevalent issues in Android apps. The authors found that the most common issues are Visibility modifier, Avoid commented-out lines of code, and Magic number.
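As an illustration of the most reported category, the snippet below shows a typical Magic number finding and its usual fix; the class is hypothetical and not taken from the studied apps:

    public class CacheEntry {
        private static final long ONE_DAY_MS = 24L * 60 * 60 * 1000;

        // Flagged as a Magic number: the bare 86400000 conveys no intent.
        long expiresAtSmelly() {
            return System.currentTimeMillis() + 86400000;
        }

        // Usual fix: name the constant and make its derivation explicit.
        long expiresAtClean() {
            return System.currentTimeMillis() + ONE_DAY_MS;
        }
    }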
Performance Studies
Many studies focused on assessing the performance impact of mobile code smells [40, 107, 157]. For instance, Palomba et al. [157] studied the impact of Android-specific code smells on energy consumption. To the best of our knowledge, this study is the largest in the category of performance assessment: it considered nine Android-specific code smells from the catalog of Reimann et al. [169] and 60 Android apps. By analyzing the most energy-consuming methods, the authors found that 94% of them were affected by at least one code smell type. The authors also compared smelly methods with refactored ones and showed that methods exhibiting a co-occurrence of Internal Setter, Leaking Thread, Member Ignoring Method, and Slow Loop consume 87 times more energy than refactored methods.
Hecht et al. [107] conducted an empirical study about the individual and combined impact of Android-specific code smells on performance. The study covered three code smells from the catalog of Reimann et al. [169]: Internal Getter Setter, Member Ignoring Method, and HashMap Usage. To assess the impact of these code smells, they measured the performance of two apps with and without code smells using the following metrics: frame time, number of delayed frames, memory usage, and number of garbage collection calls. These measurements did not always show a substantial impact of the studied code smells. The most compelling observations suggested that, in the SoundWaves app, refactoring HashMap Usage reduced the number of garbage collection calls by 3.6%, and refactoring Member Ignoring Method reduced the number of delayed frames by 12.4%.
Carette et al. [40] studied the same code smells as Hecht et al., but focused on their energy impact. They detected code smell instances in five open-source Android apps and derived four versions of each: three by refactoring each code smell type separately, and one by refactoring all of them at once. Afterwards, they used user-based scenarios to compare the energy impact of the five app versions. Depending on the app, the measurement differences were not always significant. Moreover, in some cases, refactoring code smells like Member Ignoring Method slightly increased energy consumption (at most +0.29%). The biggest observed impact was on the Todo app, where refactoring the three code smells reduced the global energy consumption by 4.83%.
Morales et al. [140] conducted a study to show the impact of code smells on the energy efficiency of Android apps. The study included five OO and three Android-specific code smells and covered 20 Android apps. The included Android code smells are Internal Getter Setter and HashMap Usage, from the catalog of Reimann et al. [169], and Binding Resources Too Early. Their analysis of Android code smells showed that refactoring Internal Getter Setter and Binding Resources Too Early can reduce energy consumption in some cases. As for OO code smells, they found that two of them had a negative impact on energy efficiency, while the others did not. The authors also reported that code smell refactoring did not increase energy consumption. In addition, Morales et al. proposed EARMO, an Energy-Aware Refactoring approach for MObile apps: a multi-objective approach that refactors code smells in mobile apps while considering their impact on energy consumption.
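Two of the smells recurring in these studies are easy to picture in code. Internal Getter Setter denotes a class that calls its own accessors instead of reading the field directly, which on older Dalvik runtimes incurred a measurably slower virtual call; HashMap Usage denotes using HashMap with integer keys where Android’s SparseArray avoids autoboxing. The sketch below contrasts the smelly and refactored forms; it is hypothetical code, not taken from the studied apps:

    import android.util.SparseArray;
    import java.util.HashMap;
    import java.util.Map;

    public class Inventory {
        private int count;

        public int getCount() { return count; }

        // Internal Getter Setter: the class goes through its own accessor...
        int doubledSmelly() { return getCount() * 2; }

        // ...whereas direct field access avoids the extra virtual call.
        int doubledClean() { return count * 2; }

        // HashMap Usage: int keys are autoboxed into Integer objects.
        Map<Integer, String> labelsSmelly = new HashMap<>();

        // Refactored: SparseArray maps primitive int keys without boxing.
        SparseArray<String> labelsClean = new SparseArray<>();
    }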

Management in Practice

As there are no studies about the management of mobile-specific code smells in practice, we present in this section studies that address similar concerns. In this regard, the closest studies are those investigating performance management in mobile apps and in software systems in general. Another relevant topic for this section is mobile-specific linters, which are the practical counterparts of code smell detection tools. Since there are no studies about the usage of linters for managing mobile-specific code smells or performance in general, we present other studies that provide qualitative insights about linters and can be useful for our research.

Performance Management

There are two main works about performance management in mobile apps. First, Linares-Vásquez et al. [125] surveyed developers to identify common practices and tools for detecting and fixing performance issues in open-source Android apps. Based on 485 answers, they found that most developers rely on reviews and manual testing to detect performance bottlenecks. When asked about tools, developers reported using profilers and framework tools; only five of them mentioned static analyzers. Developers were also openly questioned about the targets of their performance improvement practices; from 72 answers, the study established the following categories: GUI lagging, memory bloat, energy leaks, general performance, and unclear benefits. Second, Liu et al. [127] investigated the management of performance bugs in mobile apps. Specifically, they analyzed the issue reports and source code of 29 Android apps to inspect, among other questions, how performance bugs manifest and the effort needed to fix them. They found that, contrary to desktop apps, small-scale data is enough to trigger performance bugs in mobile apps. However, they observed that more than one third of these performance bugs required user interaction to manifest. We consider this observation reasonable, since mobile apps are known to be highly event-driven. The results also showed that some performance bugs require specific software or hardware platforms to manifest. As for the effort, the study suggested that fixing performance bugs is more difficult, as it requires more time, more discussions, and larger patches than fixing non-performance bugs. By manually analyzing the discussions and patches, the authors showed that, during the debugging process, the information provided by stack traces was less helpful than that provided by profilers and performance measurement tools. Nonetheless, the study reports that these tools still need enhancements to visualize simplified runtime profiles.
Performance management was also addressed in the pre-mobile era through many studies. Notably, Zaman et al. [207] manually analyzed a random sample of 400 performance and non-performance bug reports from Mozilla Firefox and Google Chrome. Their objective was to understand the practices and shortcomings of reproducing, tracking, and fixing performance bugs. Among other interesting findings, they observed that fixing performance bugs is a more collaborative task than fixing non-performance bugs. However, performance bugs are more prone to regressions and are not well tracked.

Table of Contents

List of Figures
List of Tables
1 Introduction 
1.1 Context
1.2 Problem Statement
1.2.1 Problem 1: Code Smells in Different Mobile Platforms
1.2.2 Problem 2: The Motives of Mobile-Specific Code Smells
1.2.3 Problem 3: Developers’ Perception of Mobile-Specific Code Smells
1.3 Contributions
1.3.1 Contribution 1: Study of the iOS Platform
1.3.2 Contribution 2: Discerning the Motives of Mobile-Specific Code Smells
1.3.3 Contribution 3: Study of Developers’ Perception of Mobile-Specific Code Smells
1.3.4 Contribution 4: A Reevaluation of Mobile-Specific Code Smells
1.4 Outline
1.5 Publications
1.5.1 Published
1.5.2 Under Evaluation
2 State of the Art 
2.1 Code Smells in Mobile Apps
2.1.1 Identification of Mobile-Specific Code Smells
2.1.2 Detection of Mobile-Specific Code Smells
2.1.3 Studies About Mobile-Specific Code Smells
2.1.4 Studies About OO Code Smells
2.1.5 Summary
2.2 Management in Practice
2.2.1 Performance Management
2.2.2 Linters
2.2.3 Linter Studies
2.2.4 Summary
2.3 Conclusion
3 Code Smells in iOS Apps 
3.1 Background on iOS Apps Analysis
3.1.1 Dynamic Analysis
3.1.2 Static Analysis of Binaries
3.1.3 Static Analysis of Source Code
3.2 Code Smells in iOS Apps
3.2.1 Code Smells Identification Process
3.2.2 Catalog of iOS Code Smells
3.2.3 Similarities with Android Code Smells
3.3 Sumac
3.3.1 Code Analysis
3.3.2 Sumac Model for iOS
3.3.3 Storing the iOS Model
3.3.4 Code Smell Queries
3.4 Study of Code Smells in iOS
3.4.1 Objects
3.4.2 iOS Dataset and Inclusion Criteria
3.4.3 Hypotheses and Variables
3.4.4 Analysis Method
3.4.5 Results
3.4.6 Threats to Validity
3.5 Comparative Study Between iOS and Android
3.5.1 Android Dataset and Inclusion Criteria
3.5.2 Variables and Hypotheses
3.5.3 Analysis Method
3.5.4 Results
3.5.5 Threats to Validity
3.6 Summary
4 Code Smells in the Change History 
4.1 Study Design
4.1.1 Context Selection
4.1.2 Dataset and Selection Criteria
4.2 Data Extraction: Sniffer
4.2.1 Validation
4.3 Evolution Study
4.3.1 Data Analysis
4.3.2 Study Results
4.3.3 Discussion & Threats
4.3.4 Study Summary
4.4 Developers Study
4.4.1 Data Analysis
4.4.2 Study Results
4.4.3 Discussion
4.4.4 Study Summary
4.5 Survival Study
4.5.1 Data Analysis
4.5.2 Study Results
4.5.3 Discussion
4.5.4 Study Summary
4.6 Common Threats to Validity
4.7 Summary
5 Developers’ Perception of Mobile Code Smells 
5.1 Methodology
5.1.1 Interviews
5.1.2 Participants
5.1.3 Analysis
5.2 Results
5.2.1 Why Do Android Developers Use Linters for Performance Purposes?
5.2.2 How Do Android Developers Use Linters for Performance Purposes?
5.2.3 What Are the Constraints of Using Linters for Performance Purposes?
5.3 Implications
5.3.1 For Developers
5.3.2 For Researchers
5.3.3 For Tool Creators
5.4 Threats To Validity
5.5 Summary
6 A Second Thought About Mobile Code Smells 
6.1 Study Design
6.1.1 Data Selection
6.1.2 Data Analysis
6.2 Study Results
6.2.1 Leaking Inner Class (LIC)
6.2.2 Member Ignoring Method (MIM)
6.2.3 No Low Memory Resolver (NLMR)
6.2.4 HashMap Usage (HMU)
6.2.5 UI Overdraw (UIO)
6.2.6 Unsupported Hardware Acceleration (UHA)
6.2.7 Init OnDraw (IOD)
6.2.8 Unsuited LRU Cache Size (UCS)
6.2.9 Common Theory
6.3 Discussion
6.3.1 Implications
6.3.2 Threats to Validity
6.4 Summary
7 Conclusion and Perspectives 
7.1 Summary of Contributions
7.2 Short-Term Perspectives
7.2.1 Rectify Code Smell Definitions
7.2.2 Improve Code Smell Detection
7.2.3 Reevaluate the Impact of Code Smells
7.2.4 Integrate a Code Smell Detector in the Development Environment
7.2.5 Communicate About Code Smells
7.3 Long-Term Perspectives
7.3.1 Involve Developers in Code Smell Identification
7.3.2 Leverage Big Code in Code Smell Identification
7.4 Final Words
Bibliography
