Impact of AQM policy and LPCC protocol

Active Queue Management (AQM)

Traditional routers use a drop-tail discipline in the queues holding packets to be scheduled on each interface, and drop packets only when the queue is full. This mechanism tends to cause global synchronization between flows and to penalize bursty flows. To overcome these issues, AQM disciplines probabilistically drop or mark packets before the queue is full, thus providing endpoints with an earlier congestion indication. As a result, AQM disciplines are able to maintain shorter queue lengths than drop-tail queues, which makes them a practical method to counter "bufferbloat". Studies on scheduling and active queue management were very popular during the 1990s (e.g., SFQ [3], RED [5], DRR [4]) and declined after the early 2000s (e.g., CHOKe [16]). Despite numerous AQM proposals, they have so far encountered limited adoption. The difficulties of tuning RED [54] are well known, and the computational cost of Fair Queuing was, back in the 1990s, considered prohibitive (see [53] for a historical perspective). The situation has, however, started to change, with operators worldwide implementing AQM policies in the upstream of the ADSL modem to improve the quality of user experience (e.g., in France, Free has implemented SFQ since 2005 [7], and Orange has started to deploy SQF [55]). We currently see a resurgence of the topic, in terms of novel proposals (e.g., CoDel [6], AFpFT [56]) and further research [57–59], as also testified by the very recent proposal to create a dedicated IETF AQM WG [60]. We briefly describe the mechanisms of the AQM/scheduling disciplines considered in our work, namely SFQ, DRR, RED, CHOKe, and CoDel.
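To make the early-notification principle concrete, the following is a minimal sketch of RED-style probabilistic dropping. The class name, the parameter values, and the omission of RED's count-based drop-spacing refinement are our own simplifications for illustration, not the exact algorithm evaluated in this thesis.

```python
import random

class REDQueue:
    """Minimal RED sketch: drop/mark probabilistically before the queue fills.

    Thresholds and the EWMA weight below are illustrative defaults; real
    deployments tune them per link, which is precisely what makes RED
    configuration difficult [54].
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th  # thresholds (packets)
        self.max_p = max_p                         # drop prob. at max_th
        self.w_q = w_q                             # EWMA weight
        self.avg = 0.0                             # average queue length

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped (or marked)."""
        # Exponentially weighted moving average of the instantaneous queue.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False                           # no congestion signal yet
        if self.avg >= self.max_th:
            return True                            # early but firm signal
        # Drop probability ramps linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

Because the drop decision depends on the smoothed average rather than the instantaneous occupancy, endpoints receive congestion signals well before the buffer fills, which is how AQM keeps standing queues short.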

Low Priority Congestion Control (LPCC) protocols

Congestion and flow control may serve different goals, such as controlling the streaming rate over TCP connections as done by YouTube or Netflix, aggressively protecting user QoE as done by Skype over UDP, or providing a low-priority bulk transfer service toward the Cloud (e.g., Picasa background upload or the Microsoft Background Intelligent Transfer Service (BITS)).
The standard TCP congestion control needs losses to back off: this means that, under a drop-tail FIFO queuing discipline, TCP necessarily fills the buffer. As the uplink devices of low-capacity home access networks can buffer up to hundreds of milliseconds of traffic, this may translate into poor performance for interactive applications (e.g., slow Web browsing and bad gaming/VoIP quality). Low-priority congestion control protocols tackle this problem by using congestion indicators other than packet loss, which enables them to react earlier than standard TCP.
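As a concrete example of such an indicator, the sketch below shows a LEDBAT-like delay-based controller, loosely following RFC 6817 [12]. The per-ACK update is simplified (window in MSS units rather than bytes, no delay-sample filtering), so it should be read as an illustration of the principle, not as the reference algorithm.

```python
# LEDBAT-like delay-based congestion control, simplified for illustration.
TARGET = 0.100  # target queuing delay, seconds (RFC 6817 default)
GAIN = 1.0      # caps growth at one MSS per RTT, like standard TCP

class LedbatLikeSender:
    def __init__(self):
        self.cwnd = 2.0                  # congestion window (MSS)
        self.base_delay = float("inf")   # minimum one-way delay seen so far

    def on_ack(self, one_way_delay):
        # The smallest observed delay approximates the propagation delay,
        # so the excess over it estimates the current queuing delay.
        self.base_delay = min(self.base_delay, one_way_delay)
        queuing_delay = one_way_delay - self.base_delay
        # Positive while below the target, negative above it: the window
        # shrinks as delay builds up, well before the buffer overflows.
        off_target = (TARGET - queuing_delay) / TARGET
        self.cwnd = max(1.0, self.cwnd + GAIN * off_target / self.cwnd)

    def on_loss(self):
        # Losses still trigger a TCP-like multiplicative decrease.
        self.cwnd = max(1.0, self.cwnd / 2)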
Studies on low-priority congestion control protocols started in the early 2000s, with several contributions such as TCP-Nice [11], TCP-LP [10], 4CP [66, 9], and LEDBAT [12]. While a full overview of these protocols is out of the scope of this thesis, we refer the reader to [67] for a thorough survey. A simulation-based analysis of the impact of LEDBAT parameters on its behavior has recently been carried out in [68]: it focuses on a drop-tail bottleneck link shared by LEDBAT and TCP New Reno flows, proposing a set of LEDBAT parameters that minimizes the overall LEDBAT bandwidth share. All the protocols mentioned above share the low-priority spirit of LEDBAT. We carried out a simulation-based comparison of TCP-Nice, TCP-LP, and LEDBAT in [48], showing that LEDBAT has the lowest level of priority.
In terms of adoption, while TCP-LP and TCP-Nice have been available in the Linux kernel for about a decade, they have seldom been used. However, ignited by the ease of application-layer deployment, scavenging congestion control services are now becoming popular: examples of this trend are Picasa's background upload option and the adoption of an LPCC by BitTorrent. Indeed, BitTorrent recently abandoned TCP in favor of LEDBAT, a "low extra delay background transport" protocol implemented at the application layer over UDP framing.

Joint AQM and LPCC studies

To the best of our knowledge, only [15] mentions the interplay of AQM and LEDBAT, via an experimental approach and without it being the main focus of the work: in one of the tests, the authors experiment with a home gateway that implements some (unspecified) AQM policy other than drop-tail. When LEDBAT and TCP are both marked in the same "background class", the "TCP upstream traffic achieves a higher throughput than the LEDBAT flows but significantly lower than" that under drop-tail [15]. This fact is also recognized by the LEDBAT RFC, which states that under AQM it is possible that "LEDBAT reverts to standard TCP behavior, rather than yield to other TCP flows" [12]. Hence, the interplay of AQM and LPCC has only been anecdotally covered, and a broad and deep study is missing so far.
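The intuition behind this reprioritization can be made explicit with the delay-based controller sketched earlier; the numbers below are illustrative, not measurements.

```python
# Illustrative only: why AQM "reprioritizes" LEDBAT. Once an AQM caps the
# queuing delay well below LEDBAT's 100 ms target, the delay-based back-off
# never triggers, and loss (shared equally with TCP) is the only signal left.
TARGET = 0.100  # LEDBAT target queuing delay (s)

def yields_on_delay(queuing_delay):
    """LEDBAT backs off on delay only above its target."""
    return queuing_delay > TARGET

print(yields_on_delay(0.300))  # True:  bloated drop-tail queue, LEDBAT yields
print(yields_on_delay(0.005))  # False: short AQM-controlled queue, LEDBAT
                               #        competes like standard TCP
```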

Table of contents:

List of Figures
List of Tables
1 Introduction 
1.1 Overview
1.2 Concerns in access network
1.3 Concerns in data center network
1.4 Contributions
I Access network 
2 Background
2.1 Motivation
2.2 Related work
2.2.1 Active Queue Management (AQM)
2.2.2 Low Priority Congestion Control (LPCC) protocols
2.2.3 Joint AQM and LPCC studies
2.2.4 Fairness
2.2.5 Fluid modeling
3 Hands-on investigation 
3.1 Methodology
3.2 Simulation results
3.2.1 Impact of AQM policy
3.2.2 Impact of AQM policy and LPCC protocol
3.2.3 Sensitivity analysis
3.3 Experimental results
3.3.1 Testbed experiments
3.3.2 Internet experiments
3.4 Summary
4 Control theoretic analysis 
4.1 Open-loop model
4.1.1 The mathematical model
4.1.2 Equilibrium and properties
4.1.3 Discussion
4.2 Closed-loop model
4.2.1 Characterizing reprioritization
4.2.2 Characterizing system dynamics
4.3 Validation
4.3.1 Scenario
4.3.2 Model validation against ns2 simulations
4.3.3 Model refinement
4.4 System-level solution
4.5 Summary
II Data center network (DCN) 
5 Background 
5.1 Motivation
5.2 Related work
5.2.1 Broad view
5.2.2 Representative proposals
6 Fairness in data center network 
6.1 Fairness
6.1.1 Fair scheduling
6.1.2 Suitability for DCN
6.2 Methodology
6.2.1 Calibration
6.3 Simulation
6.4 Summary
7 Conclusion 
7.1 Summary
7.2 Future work
Bibliography
