
## Algorithms for iterative (turbo) data processing

Some of the major SISO decoding approaches developed for turbo decoding are: Maximum A Posteriori (MAP), Log-MAP, Max-Log-MAP and the Soft Output Viterbi Algorithm (SOVA).

### BCJR or MAP algorithm

Bahl, Cocke, Jelinek and Raviv presented an optimal algorithm [7], often referred to as the BCJR algorithm, for estimating the a posteriori probabilities of the states and state transitions of a Markov source observed through a discrete memoryless channel. This algorithm was later adapted by Berrou et al. to produce a posteriori probabilities in the iterative decoding of convolutional codes. The BCJR algorithm is often referred to as the maximum a posteriori (MAP) algorithm. Unlike the Viterbi algorithm, which determines the most probable information sequence [50, 108, 109], the MAP algorithm associates with every decoded bit an estimate of the reliability of this decision. This algorithm is generally used for the iterative decoding of convolutional codes. To simplify the description of the MAP algorithm, we consider an RSC encoder with coding rate R = 1/2 and memory length M (2^M internal states). Let u_k be the k-th information bit and c_k the corresponding parity bit. At the output of the channel, we receive the sequence R_1^N = (R_1, ..., R_k, ..., R_N), where R_k = (x_k, y_k); x_k and y_k are the observations received at time k corresponding respectively to the information bit u_k and the redundancy bit c_k.
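As an illustration of the forward-backward recursions at the heart of the BCJR algorithm, the sketch below decodes a toy 2-state recursive convolutional code (memory M = 1) over a Gaussian channel. The trellis used here (next state and parity both equal to u XOR s) and the function name `bcjr_llr` are illustrative assumptions for the sketch, not the codes studied in this work:

```python
import math

def bcjr_llr(x_obs, y_obs, sigma2):
    """A posteriori LLRs L(u_k) for a toy 2-state RSC (M = 1).

    x_obs, y_obs: systematic and parity observations, BPSK-mapped
    (bit 0 -> +1, bit 1 -> -1) plus Gaussian noise of variance sigma2.
    """
    N = len(x_obs)
    n_states = 2

    def gamma(k, s, u):
        # Toy trellis (assumption): next state and parity are both u XOR s.
        s_next = u ^ s
        c = u ^ s
        bu = 1.0 - 2.0 * u          # BPSK symbol for the systematic bit
        bc = 1.0 - 2.0 * c          # BPSK symbol for the parity bit
        m = (x_obs[k] * bu + y_obs[k] * bc) / sigma2
        return s_next, math.exp(m)  # branch metric in the probability domain

    # Forward recursion (alpha), normalized at each step to avoid underflow
    alpha = [[0.0] * n_states for _ in range(N + 1)]
    alpha[0][0] = 1.0               # trellis assumed to start in state 0
    for k in range(N):
        for s in range(n_states):
            if alpha[k][s] == 0.0:
                continue
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                alpha[k + 1][s_next] += alpha[k][s] * g
        norm = sum(alpha[k + 1])
        alpha[k + 1] = [a / norm for a in alpha[k + 1]]

    # Backward recursion (beta); uniform end distribution (no termination)
    beta = [[0.0] * n_states for _ in range(N + 1)]
    beta[N] = [1.0] * n_states
    for k in range(N - 1, -1, -1):
        for s in range(n_states):
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                beta[k][s] += g * beta[k + 1][s_next]
        norm = sum(beta[k])
        beta[k] = [b / norm for b in beta[k]]

    # APP LLR: log-ratio of the summed transition probabilities for u=0 vs u=1
    llrs = []
    for k in range(N):
        p = [1e-300, 1e-300]        # tiny floor to avoid division by zero
        for s in range(n_states):
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                p[u] += alpha[k][s] * g * beta[k + 1][s_next]
        llrs.append(math.log(p[0] / p[1]))
    return llrs
```

The sign of each LLR gives the hard decision (negative means bit 1), while its magnitude is exactly the per-bit reliability estimate that distinguishes MAP decoding from Viterbi decoding.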

### Convergence and asymptotic performance

Two essential parameters allow estimating the performance of a concatenated error-correcting code and its decoder:

The convergence threshold, defined as the minimum signal-to-noise ratio at which the performance of the coded system becomes better than that of the uncoded transmission system. Good convergence corresponds to a low convergence threshold, because the performance of the system at high and average noise levels is then close to the theoretical limit.

The asymptotic gain GA indicates the maximum gap between the coded and uncoded error rate curves. When GA is reached, the error rate curve with coding becomes parallel to the curve without coding. GA can be expressed in the following way: GA ≈ 10 log10(R·dmin), where R is the coding rate and dmin the Minimum Hamming Distance (MHD). In practice, there is a trade-off between good convergence and a high MHD: for average or high error rates, it is better to favor the convergence threshold to the detriment of the minimum distance of the code, whereas at low error rates it is better to have a high minimum distance.
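The asymptotic gain formula above is a one-line computation; the helper below (an illustrative sketch, the name is not from the thesis) evaluates GA = 10 log10(R·dmin) in dB:

```python
import math

def asymptotic_gain_db(R, d_min):
    """Asymptotic gain GA = 10 * log10(R * d_min), in dB."""
    return 10.0 * math.log10(R * d_min)

# Illustrative values: a rate-1/2 code with minimum Hamming distance 20
print(round(asymptotic_gain_db(0.5, 20), 2))  # → 10.0 (dB)
```

The formula makes the stated compromise concrete: raising dmin increases GA (better asymptotic performance), while the convergence threshold depends on other properties of the scheme.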

#### Graphical analysis of the convergence using the EXIT diagrams

To evaluate the performance of a coding scheme with iterative decoding, Monte Carlo simulations of the bit error rate can be carried out. However, this approach may require a lot of computing time and resources. The study of the evolution of the extrinsic information is instead an attractive solution for finer comparisons between different coding schemes. It allows the performance of the coding scheme to be analyzed, but also supplies an estimate of the bit error rate. This method was first proposed by Stephan ten Brink in [103]. Berrou et al. showed in [18] that when the decoding process converges, the extrinsic information can be modelled by a Gaussian distribution whose mean and variance increase as the number of iterations increases. This modelling allows the extrinsic information z to be characterized only by its mean µ_z and its variance σ_z². The Gaussian approximation is verified up to a certain number of iterations; this number increases for low values of Eb/N0. The Mutual Information (MI) I(z; x), as defined below, is used in EXtrinsic Information Transfer (EXIT) charts to predict the convergence behavior of iterative decoding [103, 106, 107]. It measures the quantity of information provided on average by the extrinsic information z about the information bits x.
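Under the Gaussian model above, with the usual consistency condition µ_z = σ_z²/2, the mutual information I(z; x) reduces to a function of σ_z alone, commonly called the J function in the EXIT-chart literature. The sketch below (function name and sample counts are illustrative) estimates it by Monte Carlo:

```python
import math
import random

def j_function(sigma_z, n_samples=200_000, seed=1):
    """Monte Carlo estimate of I(z; x) for a consistent Gaussian LLR z.

    Conditioned on x = +1, z ~ N(mu_z, sigma_z^2) with mu_z = sigma_z^2 / 2
    (the consistency condition); then I(z; x) = 1 - E[log2(1 + exp(-z))].
    """
    rng = random.Random(seed)
    mu_z = sigma_z ** 2 / 2.0
    acc = 0.0
    for _ in range(n_samples):
        z = rng.gauss(mu_z, sigma_z)
        acc += math.log2(1.0 + math.exp(-z))
    return 1.0 - acc / n_samples

# J grows monotonically from 0 (no information) toward 1 bit as sigma_z grows
print(j_function(0.1), j_function(3.0))
```

Plotting this quantity at the output of each component decoder, as a function of the mutual information at its input, is exactly what an EXIT chart does.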

**Table of contents:**

Acknowledgements

Abstract

Résumé

Contents

List of Figures

List of Tables

Glossary of Acronyms

Glossary of Symbols

Introduction

**1 Turbo codes: a breakthrough in digital communications**

1.1 Parallel convolutional turbo codes

1.1.1 Turbo encoder structure

1.1.2 Design of turbo code interleavers

1.1.3 Trellis termination

1.1.4 Puncturing

1.2 Turbo decoding

1.3 Algorithms for iterative (turbo) data processing

1.3.1 BCJR or MAP algorithm

1.3.2 Log-MAP algorithm

1.3.3 Max-Log-MAP algorithm

1.4 Convergence and asymptotic performance

1.4.1 Graphical analysis of the convergence using the EXIT diagrams

1.4.2 EXIT charts for turbo codes

1.4.3 Distance measurement methods for turbo codes

1.5 Cdma2000 turbo code

1.5.1 Cdma2000 interleaving

1.5.2 Performance of cdma2000 turbo code

1.6 How to combat the error floor?

1.7 Conclusion

**2 Exploring 3-Dimensional turbo codes**

2.1 Properties of 3-dimensional turbo codes

2.1.1 3D turbo encoder structure

2.1.2 3D turbo decoder

2.1.3 Choice of the permeability rate

2.1.4 Choice of the post-encoder

2.1.4.1 EXIT analysis

2.1.4.2 Statistics about the predecoder corresponding to the selected post-encoder

2.1.5 Permutations of a 3D turbo code

2.2 Performance of cdma2000 3D turbo codes

2.3 3D turbo codes hardware implementation issues: decoder architecture and complexity analysis

2.3.1 3D turbo decoder architecture

2.3.2 Max-Log-MAP decoder complexity analysis

2.3.3 Memory requirements for the 3D turbo decoder

2.3.4 Summary

2.4 Conclusion

**3 Improving 3-Dimensional turbo codes**

3.1 Improving the asymptotic performance of the 3D turbo code

3.1.1 Optimization method

3.1.2 Example 1: optimization results for k = 1530 bits, R = 1/2 and λ = 1/8

3.1.3 Example 2: optimization results for k = 1146 bits, R = 2/3 and λ = 1/4

3.1.4 Conclusion

3.2 Improving the convergence threshold of the 3D turbo code

3.2.1 Convergence threshold of binary 3D turbo codes

3.2.2 Time varying 3D turbo codes

3.2.2.1 Choice of the time varying post-encoder

3.2.2.2 EXIT analysis of time varying 3D turbo codes

3.2.2.3 Error rate performance of time varying 3D turbo codes

3.2.3 3D turbo codes for high spectral efficiency transmissions

3.2.3.1 Transmission scheme

3.2.3.2 Example 1: 3D TCs associated with a 16-QAM modulator for k = 2298 bits, R = 1/3 and λ = 1/8

3.2.3.3 Example 2: 3D TCs associated with an 8-PSK modulator for k = 1146 bits, R = 4/5 and λ = 1/8

3.2.3.4 Design rules for 3D turbo coded modulations

3.3 Conclusion

**4 Irregular turbo codes**

4.1 Basics of irregular turbo codes

4.1.1 Another representation of turbo codes

4.1.2 Irregular turbo encoder

4.1.3 Irregular turbo decoder

4.2 Selecting the degree profile of irregular turbo codes

4.2.1 Determination of the degree profile using hierarchical EXIT charts

4.2.2 Performance example of irregular turbo codes

4.3 Design of suitable permutations for irregular turbo codes

4.3.1 Permutations with uniform distribution of the pilot bits

4.3.2 Designing permutations using Dijkstra's algorithm

4.3.3 Error rate performance of irregular turbo codes with an optimized interleaver

4.4 Adding a post-encoder to irregular turbo codes

4.4.1 Proposed modified encoding procedure

4.4.2 Performance of irregular TCs with post-encoding

4.5 Conclusion

Conclusions and perspectives

Contributions to the literature

**A Cdma2000 interleaver**

**B State mapping encoding**

B.1 Circular (tail-biting) encoding

B.2 State mapping encoding

B.3 Example

**C About 16-QAM and 8-PSK modulations**

**D Basics and properties of LDPC codes**

D.1 Encoding and iterative decoding of LDPC codes

D.2 Irregular LDPC codes

**Bibliography**