A general framework for behavioral P2P traffic classification 


Machine learning algorithms for classification

In this section we briefly introduce the problem of classification in machine learning theory, with a particular focus on the algorithms we actually employed in the remainder of this work, all of which fall in the category of supervised classification. A whole field of research in machine learning is dedicated to supervised classification [107], hence it is not possible to include a complete reference in this thesis. Moreover, rather than improving the classification algorithms themselves, we aim at taking advantage of our knowledge of network applications to identify good properties for their characterization. Still, some basic concepts are required to correctly understand how we applied machine learning to traffic classification. A supervised classification algorithm produces a function f, the classifier, able to associate some input data, usually a vector x of numerical attributes xi called features, to an output value c, the class label, taken from a list C of possible ones. To build such a mapping function, which can be arbitrarily complex, the machine learning algorithm needs some examples of already labeled data, the training set, i.e. a set of couples (x, c) from which it learns how to classify new data. In our case the features xi are distinctive properties of the traffic we want to classify, while the class label c is the application associated with such traffic. From a high-level perspective, supervised classification consists of three consecutive phases, depicted in Fig. 2.1. During the training phase the algorithm is fed with the training set, which contains our reference data, the already classified training points. The selection of the training points is a fundamental step, with an important impact on the classifier performance.
Extra care must be taken to select enough representative points to allow the classifier to build a meaningful model; however, including too many points is known to degenerate into overfitting, where a model is too finely tuned and becomes “picky”, unable to recognize samples which are just slightly different from the training ones. Notice that, prior to the training phase, an oracle is used to associate the protocol label with the traffic signatures. Oracle labels are considered accurate, thus representing the ground truth of the classification. Finding a reliable ground truth for traffic classification is a research topic on its own, with non-trivial technical and privacy issues, and has been investigated by a few works [64, 82]. Sec. 2.4 is dedicated to the description of the different datasets, with the related ground truth, used throughout this thesis.
The second step is the classification phase, where we apply the classifier to some new samples, the test set, which must be disjoint from the training set. Finally, a third phase is needed to validate the results, comparing the classifier's outcome against the reference ground truth. This last phase allows us to assess the expected performance when deploying the classifier in operational networks. The next section of this chapter is dedicated to describing the metrics used to express the performance of classification algorithms.
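The three phases above can be sketched in a few lines of code. The following is a minimal illustration, not the thesis implementation: the feature vectors and labels are synthetic placeholders, and we use scikit-learn's SVM classifier only because Support Vector Machines are among the supervised algorithms discussed later (Sec. 2.2.1).

```python
# Minimal sketch of the three-phase workflow (training, classification,
# validation). Data is synthetic; real features would be traffic properties
# and labels would be application names assigned by an oracle.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))             # each row: a feature vector x
y = rng.integers(0, 3, size=200)     # each entry: a class label c

# Training and test sets must be disjoint
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf")              # training phase: learn f from (x, c) pairs
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)         # classification phase: label unseen samples

# Validation phase: compare the classifier outcome against the ground truth
print(accuracy_score(y_test, y_pred))
```

Note that with purely random labels, as here, the measured accuracy is close to chance; the point is only the separation of the three phases and of the two disjoint datasets.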

Granularity and Direction

Since IPFIX records provide counters with different granularities, a trivial criterion regards the level of coarseness of the statistics: features can be computed entity-wise E, packet-wise P and byte-wise B. Another intuitive criterion consists in discriminating incoming versus outgoing traffic, or aggregating both directions together. Notice that the adopted type of transport layer protocol can cause significant differences in the pattern of traffic observed in the two directions. For example, an application using a connectionless service (i.e. a UDP datagram socket) can easily multiplex all incoming and outgoing traffic over the same endpoint IPy : py. Conversely, an application employing a connection-oriented service (i.e., a TCP stream socket) is likely to receive traffic on a single TCP port (py), but it spreads the outgoing traffic over different ephemeral ports, whose allocation is controlled by the OS.
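The three granularities and the direction criterion can be made concrete with a small sketch. The record layout below is an assumption for illustration only (IPFIX exports carry many more fields); the point is how the same set of records yields entity-wise, packet-wise and byte-wise counters, optionally split by direction.

```python
# Hypothetical flow records for one monitored endpoint:
# (remote peer, direction, packet count, byte count)
records = [
    ("10.0.0.1", "in",  12, 9000),
    ("10.0.0.2", "in",   3,  400),
    ("10.0.0.1", "out",  5,  600),
]

def features(records, direction=None):
    """Return (E, P, B): entity-, packet- and byte-wise counters.

    If `direction` is None, both directions are aggregated together.
    """
    peers, pkts, byts = set(), 0, 0
    for peer, d, p, b in records:
        if direction is not None and d != direction:
            continue
        peers.add(peer)   # entity-wise: distinct peers contacted
        pkts += p         # packet-wise counter
        byts += b         # byte-wise counter
    return len(peers), pkts, byts

print(features(records, "in"))   # incoming only -> (2, 15, 9400)
print(features(records))         # both directions -> (2, 20, 10000)
```

Note how the entity-wise counter is the same in both calls: peer 10.0.0.1 appears in both directions but is counted once, which is exactly the kind of asymmetry between E and P/B counters that the direction criterion exposes.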

Portability across Access Technologies (AT)

We now test to what extent signatures are portable across different access technologies, e.g., ADSL versus High Bandwidth (HB) access. As noted in [86], nodes with high-bandwidth access can act as “amplifiers”, providing content to possibly several peers; conversely, ADSL peers may only act as “forwarders” due to their limited uplink capacity. Although we consider only the downlink traffic, such different behaviors can impact the Abacus signatures, e.g., due to a different fraction of signaling packets a peer receives. For example, an amplifier peer can receive many small-sized acknowledgments, while a low-capacity peer mainly receives large packets containing video data. We therefore split the testbed dataset into two parts: the first contains traces collected from all High Bandwidth PCs, while the second contains ADSL PCs. Three tests are performed: (i) classifying ADSL traces using the ADSL training set, (ii) classifying ADSL traces using the HB training set and (iii) classifying HB traces using the ADSL training set. Each test has been repeated 10 times, and average results are reported in the rows labeled AT in Tab. 4.4. Overall, Abacus signatures confirm their portability even across different access networks: for TVAnts, SopCast and Joost, results are only modestly impacted by the train/test combination (an 8% TPR reduction being the worst case). In the case of PPLive, the TPR drops to 58% when the HB training set is used to classify ADSL traffic. This is likely because PPLive is very aggressive in exploiting the upload capacity of HB peers, so that the number of peers sending acknowledgments shifts the signature toward low bins, i.e., few acknowledgment packets are received from a given peer. ADSL peers, on the contrary, contribute little upload bandwidth, so that the incoming traffic is mainly due to video chunks received as trains of packets, i.e., groups of large data packets received from contributing peers.
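The structure of this portability experiment can be sketched as follows. This is an assumed toy setup, not the thesis code: the two datasets are synthetic stand-ins for the ADSL and HB signature sets, an SVM is used as in the rest of the chapter, accuracy over a balanced two-class set serves as a TPR proxy, and each train/test combination is repeated over random training subsamples before averaging.

```python
# Sketch of the cross-access-technology test: train on one technology,
# classify traces from another, repeat 10 times and average the score.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_set(shift, n=150):
    """Toy signatures: two classes whose features drift with access tech."""
    X = rng.random((n, 4)) + shift
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 0.5                 # make the classes separable
    return X, y

adsl = make_set(0.0)                 # stand-in for ADSL traces
hb   = make_set(0.2)                 # stand-in for High Bandwidth traces

def avg_score(train, test, runs=10, frac=0.8):
    """Average test score over `runs` random training subsamples."""
    (Xtr, ytr), (Xte, yte) = train, test
    n = int(len(ytr) * frac)
    scores = []
    for seed in range(runs):
        idx = np.random.default_rng(seed).choice(len(ytr), n, replace=False)
        clf = SVC().fit(Xtr[idx], ytr[idx])
        scores.append(clf.score(Xte, yte))
    return sum(scores) / runs

print(avg_score(adsl, adsl))   # (i)   ADSL training, ADSL traces
print(avg_score(hb, adsl))     # (ii)  HB training,   ADSL traces
print(avg_score(adsl, hb))     # (iii) ADSL training, HB traces
```

In this toy version the feature drift between the two sets plays the role of the acknowledgment-vs-video-chunk shift described above: the larger the drift, the more the cross-technology combinations (ii) and (iii) degrade relative to (i).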


Table of contents :

1 Introduction 
1.1 The rise and (apparent) fall of P2P applications
1.2 Motivation
1.3 Contributions of this thesis
1.4 Thesis outline
I Traffic Classification 
2 Introduction to traffic classification 
2.1 Definitions and State of the art
2.2 Machine learning algorithms for classification
2.2.1 Support Vector Machine
2.2.2 Decision Trees
2.3 Evaluating classification accuracy
2.4 Overview of dataset
3 A general framework for behavioral P2P traffic classification 
3.1 Defining behavioral features
3.1.1 Timescale
3.1.2 Entities
3.1.3 Granularity and Direction
3.1.4 Categories
3.1.5 Operations
3.2 Methodology
3.2.1 Dataset
3.2.2 Metrics
3.2.3 Preliminary Examples
3.3 Experimental Results
3.3.1 Comparing feature ranking
3.3.2 Classification results
3.4 Summary
4 Abacus – Behavioral classification of P2P-TV Traffic 
4.1 Introduction
4.2 Classification Framework
4.2.1 The Rationale
4.2.2 Behavioral P2P-TV Signatures
4.3 Methodology
4.3.1 Workflow overview
4.3.2 Rejection Criterion
4.4 Dataset
4.4.1 Testbed Traces
4.4.2 Real Traces
4.5 Experimental Results
4.5.1 Baseline results
4.5.2 Signatures Portability
4.6 Sensitivity Analysis
4.6.1 Impact of the Rejection Threshold R
4.6.2 Impact of Time Interval T
4.6.3 Impact of Training Set Size
4.6.4 Impact of Training Set Diversity
4.6.5 Impact of SVM Kernel and Binning Strategy
4.7 Improving the Accuracy: Extending the Signature
4.8 Summary
5 Comparing behavioral and payload based classification algorithms 
5.1 Classification algorithms
5.1.1 Abacus
5.1.2 Kiss
5.2 Experimental Results
5.2.1 Methodology and Datasets
5.2.2 Classification results
5.3 Comparison
5.3.1 Functional Comparison
5.3.2 Computational Cost
5.4 Demo software
5.5 Summary
II Traffic Classification and data reduction 
6 Behavioral classification with reduced data 
6.1 Behavioral classification with NetFlow
6.1.1 Netflow data
6.1.2 Using flow-records for classification
6.1.3 Dataset and Methodology
6.1.4 Classification results
6.2 Behavioral classification in the core: the issue of flow sampling
6.2.1 Spatial distribution of application traffic
6.2.2 Real-life IP routing analysis
6.2.3 Impact of flow-sampling on Abacus signatures
6.2.4 Impact of flow-sampling on classification accuracy
6.3 Summary
7 Impact of Sampling on traffic characterization and classification 
7.1 Related work
7.2 Dataset and Features
7.2.1 Dataset
7.2.2 Features
7.3 Methodology
7.3.1 Sampling Policies
7.3.2 Metrics
7.4 Aggregate Feature Distortion
7.4.1 Overview of Sampling Impact
7.4.2 Impact of Protocol Layer
7.4.3 Impact of Sampling Policy
7.5 Single-flow Feature Distortion
7.5.1 Overview of Sampling Impact
7.5.2 Ranking Features
7.6 Traffic classification under sampling
7.6.1 Impact of Feature Set
7.6.2 Impact of Training Policy
7.6.3 Impact of Dataset
7.7 Summary
7.7.1 Impact on traffic characterization
7.7.2 Impact on traffic classification
III Congestion control for P2P 
8 A measurement study of LEDBAT 
8.1 Introduction
8.2 Methodology and Preliminary Insights
8.3 Single-flow scenarios
8.4 Multiple Flows
8.5 Related work
8.6 Summary
9 Simulation study of LEDBAT 
9.1 LEDBAT Overview
9.1.1 Queuing Delay Estimate
9.1.2 Controller Dynamics
9.1.3 TCP Friendliness Consideration
9.2 Simulation results
9.2.1 Implementation details
9.2.2 Reference scenario
9.2.3 Homogeneous Initial Conditions
9.2.4 Heterogeneous Initial Conditions
9.2.5 Latecomer advantage in real networks
9.3 Addressing the latecomer advantage
9.3.1 Correcting the measurement error
9.3.2 Introducing multiplicative decrease
9.4 Related work
9.5 Summary
10 Designing an efficient and fair LEDBAT
10.1 Current LEDBAT fairness issues
10.1.1 Impact of additive decrease
10.2 Proposed LEDBAT modification
10.2.1 Fluid model description
10.2.2 Fluid system dynamics
10.2.3 System convergence
10.3 Simulation overview
10.4 Impact of traffic model
10.4.1 Chunk-by-chunk transfer
10.4.2 Backlogged transfer
10.5 Sensitivity analysis
10.5.1 Observations on , and low-priority level
10.5.2 fLEDBAT vs TCP
10.5.3 fLEDBAT vs LEDBAT
10.5.4 fLEDBAT vs fLEDBAT
10.6 P2P Scenarios
10.6.1 Single peer perspective
10.6.2 Entire swarm perspective
10.7 Summary
11 Conclusion 
11.1 Summary
11.2 Future work
Appendices
A List of publications 
A.1 Publications
A.2 Under review
B List of traffic features output by tstat 
Bibliography

