Brocade VCS Technology

Table of contents

1 Introduction 
1.1 Motivation
1.2 Problem statement
1.3 Contributions
1.4 Plan of the thesis
2 State of the art 
2.1 Introductory remark
2.2 Solutions and Protocols used for fabric networks
2.2.1 Fabric network solutions
2.2.1.1 QFabric
2.2.1.2 FabricPath
2.2.1.3 Brocade VCS Technology
2.2.2 Communication protocols for fabric networks
2.2.2.1 TRILL
2.2.2.2 SPB
2.3 Fast packet processing
2.3.1 Terminology
2.3.1.1 Fast Path
2.3.1.2 Slow Path
2.3.2 Background on packet processing
2.4 Software implementations
2.4.1 Click-based solutions
2.4.1.1 Click
2.4.1.2 RouteBricks
2.4.1.3 FastClick
2.4.2 Netmap
2.4.3 NetSlices
2.4.4 PF_RING (DNA)
2.4.5 DPDK
2.5 Hardware implementations
2.5.1 GPU-based solutions
2.5.1.1 Snap
2.5.1.2 PacketShader
2.5.1.3 APUNet
2.5.1.4 GASPP
2.5.2 FPGA-based solutions
2.5.2.1 ClickNP
2.5.2.2 GRIP
2.5.2.3 SwitchBlade
2.5.2.4 Chimpp
2.5.3 Performance comparison of different IO frameworks
2.5.4 Other optimization techniques
2.6 Integration possibilities in virtualized environments
2.6.1 Packet processing in virtualized environments
2.6.2 Integration constraints and usage requirements
2.7 Latest approaches and future directions in packet processing
2.8 Conclusion
3 Fabric network architecture using hardware acceleration cards 
3.1 Introduction
3.2 Problems and limitations of traditional layer 2 architectures
3.3 Fabric networks
3.4 TRILL protocol for communication inside a data center
3.5 Comparison of software and hardware packet processing implementations
3.5.1 Comparison of software solutions
3.5.1.1 Operations in the user-space and kernel-space
3.5.1.2 Zero-copy technique
3.5.1.3 Batch processing
3.5.1.4 Parallelism
3.5.2 Comparison of hardware solutions
3.5.2.1 Hardware used
3.5.2.2 CPU usage
3.5.2.3 Connection type
3.5.2.4 Operations in the user-space and kernel-space
3.5.2.5 Zero-copy technique
3.5.2.6 Batch processing
3.5.2.7 Parallelism
3.5.3 Discussion on GPU-based solutions
3.5.4 Discussion on FPGA-based solutions
3.5.5 Other hardware solutions
3.6 Kalray MPPA processor
3.6.1 MPPA architecture
3.6.2 MPPA AccessCore SDK
3.6.3 Reasons for choosing MPPA for packet processing
3.7 ODP (OpenDataPlane) API
3.7.1 ODP API concepts
3.7.1.1 Packet
3.7.1.2 Thread
3.7.1.3 Queue
3.7.1.4 PktIO
3.7.1.5 Pool
3.8 Architecture of the fabric network using MPPA smart NICs
3.9 Conclusion
4 Data plane offloading on a high-speed parallel processing architecture 
4.1 Introduction
4.2 System model and solution proposal
4.2.1 System model
4.2.2 Frame journey
4.2.2.1 Control frame
4.2.2.2 Data frame
4.2.3 Implementation of TRILL data plane on the MPPA machine
4.3 Performance evaluation
4.3.1 Experimental setup and methodology
4.3.2 Throughput, latency and packet processing rate
4.4 Conclusion
5 Analysis of the fabric network’s control plane for the PoP data center use case 
5.1 Data center network architectures
5.2 Control plane
5.2.1 Calculation of the control plane metrics
5.2.1.1 Full-mesh topology
5.2.1.2 Fat-tree topology
5.2.1.3 Hypercube topology
5.2.2 Discussion on parameters used
5.2.3 Overhead of the control plane traffic
5.2.4 Convergence time
5.2.5 Resiliency and scalability
5.2.6 Topology choice
5.3 Conclusion
6 Conclusion 
6.1 Contributions
6.1.1 Fabric network architecture using hardware acceleration cards
6.1.2 Data plane offloading on a high-speed parallel processing architecture
6.1.3 Analysis of the fabric network’s control plane for the PoP data center use case
6.2 Future work
Publications
Bibliography