Implementation of NAN-MS Decoders as Deterministic Decoders 


Generalities on LDPC codes

An LDPC code is a linear block code defined by a sparse parity-check matrix H = [h_mn] of M rows and N columns, with M < N. The following matrix is an example of a binary parity-check matrix. The usual graphical representation of an LDPC code is a Tanner graph, a bipartite graph G composed of two types of nodes: the variable nodes (VNs) vn, n = 1, …, N, and the check nodes (CNs) cm, m = 1, …, M. A VN in the Tanner graph corresponds to a column of H and a CN corresponds to a row of H, and an edge connects CN cm to VN vn if and only if h_mn ≠ 0; since the graph is bipartite, CNs are only connected to VNs and vice versa, and no other type of connection is allowed. In Figure 2.1, we show the bipartite graph associated with the sparse parity-check matrix given in (2.1). This graph also corresponds to a (3,4)-regular LDPC code. Note that a VN is always represented by a circle, while a CN is always represented by a square.
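As an illustration, the edge set of the Tanner graph can be read directly off H. The sketch below uses a small (2,4)-regular toy matrix (an assumption for illustration, not the matrix (2.1) of the text, which is not reproduced here) and lists the edges (m, n) with h_mn ≠ 0.

```python
# Reading the Tanner graph off H = [h_mn]: an edge connects CN c_m and
# VN v_n iff h_mn != 0.  The matrix below is an assumed (2,4)-regular
# toy example, not the matrix (2.1) of the text.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]

edges = [(m, n) for m, row in enumerate(H) for n, h in enumerate(row) if h != 0]
print(edges)  # each pair (m, n) is an edge between CN c_m and VN v_n
```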
LDPC codes are classified according to their structural properties as regular or irregular LDPC codes. Additionally, LDPC codes can be classified as non-structured or structured. Non-structured LDPC codes do not exhibit any specific structure, while for structured LDPC codes, H is generated from algebraic equations or constrained by specific topological properties. Those constraints are usually introduced to allow a low-cost hardware implementation of the decoder. Structured LDPC codes can be further categorized into three types: (i) quasi-cyclic LDPC codes, (ii) convolutional LDPC codes, and (iii) algebraic constructions of LDPC codes. Let v be any VN and c be any CN. We denote by V(v) the set of neighbors of a VN v, and by V(c) the set of neighbors of a CN c. The degree of a node is the number of its neighbors in G.
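The neighbor sets V(v) and V(c) and the node degrees can likewise be computed directly from H. A minimal sketch, on an assumed (2,4)-regular toy matrix:

```python
# Neighbor sets read off an assumed (2,4)-regular toy matrix:
# V(c_m) is the set of nonzero columns of row m, V(v_n) the set of
# nonzero rows of column n; a node's degree is its number of neighbors.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]
M, N = len(H), len(H[0])

V_c = {m: [n for n in range(N) if H[m][n]] for m in range(M)}
V_v = {n: [m for m in range(M) if H[m][n]] for n in range(N)}

print(V_c[0], len(V_c[0]))  # neighbors and degree of c_0
print(V_v[2], len(V_v[2]))  # neighbors and degree of v_2
```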

Ensemble of Regular LDPC codes

A code is said to have a regular column weight dv = |V(v)| if all VNs v have the same degree dv. Similarly, if all CNs c have the same degree dc = |V(c)|, the code is said to have a regular row weight dc. A (dv, dc)-regular LDPC code thus has all its VNs of degree dv and all its CNs of degree dc. In the corresponding sparse parity-check matrix H, dv is the number of nonzero elements per column, and dc is the number of nonzero elements per row. For a regular LDPC code, the density of H, denoted by dH, is equal to dH = dv/M = dc/N. For example, the matrix in (2.1) has a density dH = 1/3. Note that, for a fixed dc, dH converges to zero as N grows to infinity. Let us denote by E the number of edges in the Tanner graph, or equivalently the number of nonzero elements in H. For a regular LDPC code we have E = dv × N = dc × M. The code rate satisfies R = K/N ≥ 1 − M/N [26], and if the rows of H are linearly independent, we can write R = 1 − (dv/dc), which is usually defined as the design rate [27].
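These quantities are easy to check numerically. The sketch below, again on an assumed (2,4)-regular toy matrix, verifies the regularity, the density dH = dv/M = dc/N, and the design rate 1 − dv/dc, which is only a lower bound on the true rate here because the rows of the toy matrix are not linearly independent over GF(2).

```python
# Regularity, density, and design rate computed from H; the matrix is
# an assumed (2,4)-regular toy example, not the matrix (2.1) of the text.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]
M, N = len(H), len(H[0])

col_w = [sum(H[m][n] for m in range(M)) for n in range(N)]
row_w = [sum(row) for row in H]
dv, dc = col_w[0], row_w[0]
assert all(w == dv for w in col_w) and all(w == dc for w in row_w)  # regular

E = sum(row_w)                # number of edges: E = dv*N = dc*M
density = E / (M * N)         # equals dv/M = dc/N for a regular code
design_rate = 1 - dv / dc     # lower bound on R = K/N; equality iff the rows
                              # of H are independent (not here: row0 + row1 =
                              # row2 over GF(2), so the true rate is 2/3)
print(dv, dc, density, design_rate)
```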
We now introduce the concept of the ensemble of (dv, dc)-regular LDPC codes, which is needed for the theoretical analysis: an ensemble, or family, C_N(dv, dc) of LDPC codes is composed of all the Tanner graphs with N VNs and regular degrees dv and dc.

Message-Passing Decoders

Message-Passing (MP) decoders are iterative decoders that pass messages along the edges of the Tanner graph. In MP decoders, a VN vn (respectively a CN cm) sends its message to its neighbors V(vn) (respectively V(cm)). In each iteration, the VN update (VNU) and the CN update (CNU) compute the outgoing messages from all the incoming messages. Here we present the notations used to describe the different MP decoders. We denote the message alphabet by AC, and the decoder input alphabet by AL. Unless otherwise stated, the decoder input alphabet will be that of the messages, i.e. AL = AC. Let us denote by ℓ ∈ N the iteration number. Let us also denote by m_{v→c}^{(ℓ)} ∈ AC the message sent from VN v to CN c in the ℓ-th iteration, by m_{c→v}^{(ℓ)} ∈ AC the message sent from CN c to VN v in the ℓ-th iteration, and by γ^{(ℓ)} = (γ_1^{(ℓ)}, …, γ_N^{(ℓ)}) the a posteriori probability (APP) vector, where γ_n^{(ℓ)} is associated with the VN vn, for n = 1, 2, …, N. Let us also denote by m_{v→c}^{(ℓ),U} the unsaturated variable-to-check message in the ℓ-th iteration. Fig. 2.4 depicts a fragment of a Tanner graph, showing the flow of messages in MP decoders. From this figure we have V(vn) = {cm, c1, c2, c3} and V(cm) = {vn, v1, v2, v3}.
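To make the message flow concrete, the following sketch runs one iteration of an MP decoder on an assumed toy (2,4)-regular graph, using the min-sum update rules as the CNU kernel (min-sum is only one possible choice of update functions; the channel LLR values are illustrative):

```python
import math

# One MP iteration with a min-sum CNU on a toy (2,4)-regular Tanner
# graph; both the matrix and the channel LLRs below are assumed values.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]
M, N = len(H), len(H[0])
V_c = {m: [n for n in range(N) if H[m][n]] for m in range(M)}  # neighbors of c_m
V_v = {n: [m for m in range(M) if H[m][n]] for n in range(N)}  # neighbors of v_n

llr = [2.0, 2.0, -2.0, 2.0, 2.0, 2.0]  # channel LLRs; v_2 looks unreliable

# iteration ell = 0: each variable-to-check message is the channel LLR
m_vc = {(n, m): llr[n] for n in range(N) for m in V_v[n]}

# CNU (min-sum): product of signs times minimum magnitude of the other inputs
m_cv = {}
for m in range(M):
    for n in V_c[m]:
        others = [m_vc[(n2, m)] for n2 in V_c[m] if n2 != n]
        sign = math.prod(1.0 if x >= 0 else -1.0 for x in others)
        m_cv[(m, n)] = sign * min(abs(x) for x in others)

# APP: channel LLR plus all incoming check-to-variable messages
gamma = [llr[n] + sum(m_cv[(m, n)] for m in V_v[n]) for n in range(N)]
print(gamma)  # the APP of v_2 turns positive after one iteration
```

Note how each outgoing message is extrinsic: it excludes the incoming message arriving on the same edge, while the APP γ_n combines all incoming messages.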

Quantized Min-Sum-Based Decoders and Density Evolution

This section is dedicated to the presentation of the main theoretical tool used to analyze the performance of LDPC ensembles and decoders, called Density Evolution (DE). The idea of DE is to track the evolution of the probability mass function of the messages over the iterations of LDPC decoders. Although DE was introduced as a theoretical approach, it turns out to be a very efficient tool to predict the performance of LDPC decoders in the waterfall region. This is especially true when the DE can follow the exact density of the messages (under the independence assumption), which is the case for quantized decoders when the message alphabet is small enough. From now on, and throughout the rest of this work, unless otherwise stated, we assume that the message alphabet is finite and composed of 2Nq + 1 states, with Nq = 2^(q−1) − 1. Hence, AC = {−Nq, −(Nq − 1), …, −1, 0, +1, …, +(Nq − 1), +Nq}, i.e. the messages are quantized on q bits of precision.
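A small helper makes the quantized alphabet explicit (the function name is ours, introduced only for illustration):

```python
# Message alphabet A_C for q-bit quantization: 2*Nq + 1 levels with
# Nq = 2**(q - 1) - 1, i.e. {-Nq, ..., -1, 0, +1, ..., +Nq}.
def message_alphabet(q):
    nq = 2 ** (q - 1) - 1
    return list(range(-nq, nq + 1))

print(message_alphabet(3))  # q = 3 -> Nq = 3 -> [-3, -2, -1, 0, 1, 2, 3]
```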


Channel Value Quantization

According to Section 2.2.2, for the BSC we have LLR(yn) = (1 − 2yn) log((1 − ϵ)/ϵ) ∈ R, with yn ∈ Ay = {0,1}. For a given BSC cross-over probability ϵ, the decoder input alphabet is composed of the two values AL = {+|LLR(yn)|, −|LLR(yn)|}. Fig. 2.5 shows LLR(yn) as a function of the cross-over probability ϵ; we can see that LLR(yn) is a real number.
As we deal with quantized decoders with message alphabet AC, we can consider without loss of generality that AL ⊆ AC. In the sequel, we denote by C the channel value, which is a positive integer, and consider that |LLR(yn)| is mapped to C. As a result, the decoder input alphabet becomes AL = {+C, −C}, with C ∈ {+1, +2, …, Nq}. In other words, yn = 0 is mapped to +C and yn = 1 is mapped to −C. The value of C can be seen as an extra degree of freedom in the decoder definition. For example, an MS decoder with C = 1 and an MS decoder with C = 2 are interpreted as two different decoders. The value of C can be optimized in order to improve the decoder performance.
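The mapping from channel outputs to decoder inputs can be sketched as follows (function names are illustrative, not part of the text):

```python
import math

# BSC channel-value quantization: the real LLR (1 - 2y) * log((1 - eps)/eps)
# is replaced by an integer channel value C, mapping y = 0 to +C and
# y = 1 to -C.  Function names are our own.
def llr_bsc(y, eps):
    return (1 - 2 * y) * math.log((1 - eps) / eps)

def quantize_channel(y, C):
    return C if y == 0 else -C

print(llr_bsc(0, 0.1))          # real-valued LLR, positive for y = 0
print(quantize_channel(1, 2))   # -2
```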

Table of contents:

List of figures
List of tables
1 Introduction 
2 Background 
2.1 Generalities on LDPC codes
2.1.1 Ensemble of Regular LDPC codes
2.1.2 Ensemble of Irregular LDPC codes
2.2 Binary LDPC Decoders
2.2.1 Definitions
2.2.2 Message-Passing Decoders
2.3 Quantized Min-Sum-Based Decoders and Density Evolution
2.3.1 Channel Value Quantization
2.3.2 Quantized Min-Sum-Based Decoders
2.3.3 Density Evolution for Quantized Min-Sum-Based Decoders
2.3.4 Asymptotic Bit Error Probability
2.3.5 Density Evolution threshold
3 Noise-Against-Noise Min-Sum Decoders 
3.1 Noiseless Offset Min-Sum-Based Decoders
3.2 Noise Model for Noise-against-Noise Min-Sum Decoders
3.2.1 Probabilistic Error Model for Noise-against-Noise Min-Sum Decoders
3.3 Noise-against-Noise Min-Sum Decoders
3.3.1 Noise Outside the Variable Node Update
3.3.2 Noise Inside the Variable Node Update
3.3.3 Noise Outside the Check Node Update
3.4 Noisy Density Evolution for NAN-MS decoders
3.4.1 Noisy Density Evolution for the NOV model
3.4.2 Noisy Density Evolution for the NIV model
3.4.3 Noisy Density Evolution for the NOC model
3.4.4 Noisy DE recursion
3.4.5 Bit Error Probability
3.4.6 Noisy Density Evolution threshold
3.5 Asymptotic Analysis of NAN-MS Decoders for Regular LDPC codes
3.5.1 Optimization of the Channel Gain Factor of Noiseless Decoders
3.5.2 Localization of the Noise Injection in NAN-MS Decoders
3.5.3 Joint Optimization of the Noise Model Parameters and the Channel Gain Factor of NAN-MS Decoders
3.6 Asymptotic Analysis of NAN-MS Decoders for Irregular LDPC codes
3.7 Finite Length Performance of NAN-MS Decoders
3.8 Implementation of NAN-MS Decoders as Deterministic Decoders
3.8.1 Density Evolution thresholds
3.8.2 Finite Length Performance
3.9 Implementation of NAN-MS Decoders Using an Offset Vector
3.9.1 Density Evolution thresholds
3.9.2 Finite Length Performance
3.10 Conclusions
4 Sign-Preserving Min-Sum Decoders 
4.1 Classical OMS-based Decoders
4.2 Quantization used for SP-MS Decoders
4.3 Sign-Preserving Min-Sum Decoders
4.4 Optimization of Sign-Preserving Min-Sum Decoders
4.4.1 Probabilistic Error Model to Optimize SP-MS Decoders
4.5 Density Evolution for Sign-Preserving Decoders
4.5.1 Initialization
4.5.2 DE update for CNU
4.5.3 DE update for VNU
4.6 Asymptotic Bit Error Probability
4.7 Density Evolution threshold
4.8 Asymptotic Analysis of Sign-Preserving Min-Sum Decoders
4.8.1 Asymptotic Analysis of SP-MS Decoders for Regular LDPC codes
4.8.2 Asymptotic Analysis of SP-MS Decoders for Irregular LDPC codes
4.9 Finite Length Performance of Sign-Preserving Min-Sum Decoders
4.10 Convergence Performance Analysis
4.11 Conclusion
5 Implementation of Sign-Preserving Min-Sum Decoders 
5.1 Overview of Existing Hardware Implementations
5.2 Hardware Architecture of Sign-Preserving Min-Sum Decoders
5.2.1 Count of Iterations
5.2.2 Compute Syndrome
5.2.3 VNU Component
5.2.4 Check Node Component
5.3 Implementation Results of SP-MS Decoders and MS-Based Decoders
5.3.1 Synthesis results on FPGA
5.3.2 ASIC synthesis results
5.4 The (8,8) absorbing set in SP-MS decoders
5.5 SP-MS Decoder with Post-Processing Using the Same Precision for Messages and LLRs
5.5.1 Post-Processing
5.5.2 Post-Processing Architecture
5.5.3 Performance results
5.6 SP-MS Decoder with Post-Processing Using Different Precision for Messages and LLRs
5.6.1 Post-Processing
5.6.2 Post-Processing Architecture
5.6.3 Performance results
5.7 Implementation Results of SP-MS Decoder with Post-Processing
5.7.1 Synthesis results on FPGA
5.7.2 ASIC synthesis results
5.7.3 Place and Route results
5.7.4 Comparison with other works
5.8 Conclusion
6 Conclusions 

