Figure 2.4: The principle of iterative decoding.

An iterative decoder at each iteration uses two sources of knowledge about the transmitted codeword: the information from the channel (the intrinsic information) and the information from the previous iteration (the extrinsic information). From these two sources, the decoding algorithm attempts to obtain better knowledge of the transmitted codeword, and this knowledge serves as the extrinsic information for the next iteration (see Fig. 2.4). In a successful decoding, the extrinsic information gets better and better as the decoder iterates. Therefore, all methods of analysis of iterative decoders study the statistics of the extrinsic messages at each iteration.

Studying the evolution of the pdf of the extrinsic messages, iteration by iteration, is the most complete analysis (known as density evolution). As an approximate analysis, however, one may study the evolution of a single representative of this density.

2.4.1 Gallager's analysis of LDPC codes


In Gallager's initial work, the LDPC codes are assumed to be regular and the decoding is assumed to use binary messages (Algorithms A and B). Gallager provided an analysis of the decoder in such situations [11]. The main idea of his analysis is to characterize the error rate of the messages in each iteration in terms of the channel condition and the error rate of the messages in the previous iteration. In other words, Gallager's analysis studies the evolution of the message error rate, which is equivalent to density evolution because the pmf of binary messages is one-dimensional, i.e., it can be described by a single parameter.

Figure 2.5: The depth-one decoding tree for a regular (3, 6) LDPC code.
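As a concrete sketch of this idea, the recursion below tracks the message error rate of Algorithm A for a regular (d_v, d_c) code on a binary symmetric channel. The function and variable names are our own choices; the recursion itself is the standard form of Gallager's message-error-rate analysis.

```python
def gallager_a_update(p, p0, dv, dc):
    """One iteration of the message-error-rate recursion for Algorithm A
    on a BSC with crossover probability p0, for a regular (dv, dc) code."""
    # A check-to-variable message is wrong iff an odd number of its
    # dc - 1 incoming messages are wrong:
    q = (1.0 - (1.0 - 2.0 * p) ** (dc - 1)) / 2.0
    # A variable node flips its channel value only when all dv - 1
    # incoming check messages agree on the opposite value:
    return (1.0 - p0) * q ** (dv - 1) + p0 * (1.0 - (1.0 - q) ** (dv - 1))

# For a (3, 6) code with p0 = 0.03 the message error rate shrinks toward zero:
p = p0 = 0.03
for _ in range(50):
    p = gallager_a_update(p, p0, 3, 6)
```

Iterating this single scalar recursion is the entire analysis: no simulation of a particular code is needed.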

Gallager's analysis is based on the assumption that the incoming messages to a variable (check) node are independent. Although this assumption holds only if the graph is cycle-free, it is proved in [12] that the expected performance of the code converges to the cycle-free case as the code blocklength increases.

2.4.2 The decoding tree

Consider an updated message from a variable node v of degree d_v to a check node in the decoder. This message is computed from d_v − 1 incoming messages and the channel message to v. Those d_v − 1 incoming messages are in fact the outgoing messages of some check nodes, which were updated previously. Consider one of those messages with its check node c of degree d_c. The outgoing message of this check node is computed from d_c − 1 incoming messages to c. One can repeat this for all the check nodes connected to v to form a decoding tree of depth one. An example of such a decoding tree for a regular (3, 6) LDPC code is shown in Fig. 2.5. Continuing in the same fashion, one can get the decoding tree of any depth. Fig. 2.6 shows an example of a depth-two decoding tree for an irregular LDPC code. Clearly, for an irregular code, decoding trees rooted at different variable nodes are different.

Notice that when the factor graph is a tree, the messages in the decoding tree of any depth are independent. If the factor graph has cycles and its girth is l, then up to depth ⌊l/4⌋ the messages in the decoding tree are independent. Therefore the independence assumption is correct up to ⌊l/4⌋ iterations and is an approximation for further iterations.

Figure 2.6: A depth-two decoding tree for an irregular LDPC code.

2.4.3 Density evolution for LDPC codes

In 2001, Richardson and Urbanke extended the main idea of LDPC code analysis used for Algorithms A and B and for BEC decoding to other decoding algorithms [13]. Considering the general case, where the message alphabet is the set of real numbers, they proposed a technique called density evolution, which tracks the evolution of the pdf of the messages, iteration by iteration.

To be able to define a density for the messages, they needed the channel and the decoding update rules to satisfy certain symmetry properties, called the symmetry conditions, as follows.

Channel symmetry: The channel is said to be output-symmetric if

f_{Y|X}(y | x = 1) = f_{Y|X}(−y | x = −1),

where f_{Y|X} is the conditional pdf of Y given X.
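For instance, the binary-input AWGN channel with Y = X + N satisfies this condition. A quick numerical check, using a helper function of our own naming, is:

```python
import math

def biawgn_pdf(y, x, sigma):
    """f_{Y|X}(y|x) for Y = X + N, with N ~ Gaussian(0, sigma^2), X in {-1, +1}."""
    return math.exp(-(y - x) ** 2 / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi * sigma ** 2)

# Output symmetry: f(y | x = 1) equals f(-y | x = -1) for every y.
sigma = 0.8
symmetric = all(abs(biawgn_pdf(y, 1, sigma) - biawgn_pdf(-y, -1, sigma)) < 1e-12
                for y in [-2.0, -0.3, 0.0, 0.7, 1.5])
```

The equality holds exactly because (y − 1)^2 = (−y − (−1))^2 for every y.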

Check node symmetry: The check node update rule is symmetric if

CHK(b_1 m_1, b_2 m_2, ..., b_{d_c−1} m_{d_c−1}) = CHK(m_1, m_2, ..., m_{d_c−1}) · ∏_{i=1}^{d_c−1} b_i,   (2.16)

for any ±1 sequence (b_1, b_2, ..., b_{d_c−1}). Here, CHK(·) is the check node update rule, which takes d_c − 1 messages and generates one output message.

Variable node symmetry: The variable node update rule is symmetric if

VAR(−m_0, −m_1, −m_2, ..., −m_{d_v−1}) = −VAR(m_0, m_1, m_2, ..., m_{d_v−1}),   (2.17)

Here, VAR(·) is the variable node update rule, which takes d_v − 1 messages together with the channel message m_0 to generate one output message.
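The sum-product updates in the log-likelihood-ratio domain, CHK via the tanh rule and VAR as a plain sum, satisfy both conditions. The following sketch (with our own function names and arbitrary test values) verifies (2.16) and (2.17) numerically:

```python
import math
from functools import reduce

def chk(msgs):
    """Sum-product check node update in the LLR domain (tanh rule)."""
    prod = reduce(lambda a, b: a * b, (math.tanh(m / 2.0) for m in msgs))
    return 2.0 * math.atanh(prod)

def var(m0, msgs):
    """Sum-product variable node update: channel LLR plus all incoming LLRs."""
    return m0 + sum(msgs)

msgs = [0.7, -1.2, 2.1, 0.4, -0.9]      # d_c - 1 = 5 incoming messages
signs = [1, -1, -1, 1, -1]              # an arbitrary +-1 sequence
sign_prod = reduce(lambda a, b: a * b, signs)
chk_symmetric = abs(chk([b * m for b, m in zip(signs, msgs)])
                    - chk(msgs) * sign_prod) < 1e-12   # (2.16)

m0, inc = 0.5, [0.7, -1.2, 2.1]
var_symmetric = abs(var(-m0, [-m for m in inc]) + var(m0, inc)) < 1e-12   # (2.17)
```

The check node identity follows because tanh and atanh are odd functions, so each sign b_i factors out of the product.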

The symmetry conditions are important because, when they hold, the convergence behaviour of the decoder is independent of the transmitted codeword, assuming a linear code. Therefore, one may assume that the all-zero codeword is transmitted. Under this assumption, a message carrying a belief for '0' is a correct message and a message carrying a belief for '1' is an error message, so an error rate can be defined for the messages.

The analytical formulation of this technique can be found in [13], but in many cases it is too complex for direct use. In practice, discrete density evolution [28] is used. The idea is to quantize the message alphabet and use pmfs instead of pdfs to make a computer implementation possible. A qualitative description of density evolution and the formulation of discrete density evolution for the sum-product algorithm are provided in Appendix B.

Density evolution is not specific to LDPC codes. It is a technique which can be adapted to other codes defined on graphs and decoded iteratively. However, it becomes intractable when the constituent codes are complex, e.g., in turbo codes. Even in the case of LDPC codes with the simplest constituent code (simple parity checks), this algorithm is computationally intensive.

2.4.4 Decoding threshold of an LDPC code

From the formulation of discrete density evolution it becomes clear that, for a specific variable and check degree distribution, the density of messages after l iterations is only a function of the channel condition. In [13], Richardson and Urbanke proved that there is a worst-case channel condition for which the message error rate approaches zero as the number of iterations approaches infinity. This channel condition is called the threshold of the code. For example, the threshold of the regular (3, 6) code on the AWGN channel under sum-product decoding is 1.1015 dB, which means that if an infinitely long (3, 6) code were used on an AWGN channel, convergence to zero error rate would be guaranteed when E_b/N_0 is more than 1.1015 dB. If the channel condition is worse than the threshold, a non-zero error rate is guaranteed. In practice, when finite-length codes are used, there is a gap between the actual performance and the threshold, which grows as the code length decreases.
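The threshold is easiest to see on the BEC, where density evolution reduces to a one-dimensional recursion on the erasure probability. The sketch below (function names and iteration limits are our choices) finds the threshold of the regular (3, 6) ensemble by bisection; it lands near the well-known value of about 0.4294:

```python
def bec_de_step(x, eps, dv, dc):
    """One density evolution iteration on the BEC: x is the probability
    that a variable-to-check message is an erasure."""
    return eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)

def converges(eps, dv=3, dc=6, iters=2000, tol=1e-10):
    """Does the erasure rate go to zero for channel erasure probability eps?"""
    x = eps
    for _ in range(iters):
        x = bec_de_step(x, eps, dv, dc)
        if x < tol:
            return True
    return False

# Bisect for the worst channel that still converges (the threshold):
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2.0
    if converges(mid):
        lo = mid
    else:
        hi = mid
# lo is now close to 0.4294, the BEC threshold of the (3, 6) ensemble.
```

The finite iteration cap slightly underestimates the threshold, since convergence slows down critically as eps approaches it.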

If the threshold of a code is equal to the Shannon limit, then the code is said to be capacity-achieving.

2.4.5 Extrinsic information transfer chart analysis

Another approach for analyzing iterative decoders, including codes with complicated constituent codes, is to use extrinsic information transfer (EXIT) charts [22,38-40].

In EXIT chart analysis, instead of tracking the density of messages, we track the evolution of a single parameter, a measure of the decoder's success, iteration by iteration. For example, one can track the SNR of the extrinsic messages [22,40], their error probability [41] or the mutual information between messages and decoded bits [38]. In the literature, the term "EXIT chart" is usually used when mutual information is the parameter whose evolution is tracked. Here, we have generalized this term to tracking the evolution of other parameters. As will become clear by the end of this thesis, EXIT charts based on tracking the message error rate are, if not the most useful, among the most useful ones.

To make our short discussion on EXIT charts more comprehensible, we consider an EXIT chart based on tracking the message error rate, i.e., one that expresses the message error rate p_out at the output of one iteration in terms of the message error rate p_in at the input of the iteration and the channel error rate p_0, i.e.,

p_out = f(p_in, p_0).

Figure 2.7: An EXIT chart based on message error rate.

For a fixed p_0 this function can be plotted in p_in-p_out coordinates. Usually EXIT charts are presented by plotting both f and its inverse f^{-1}. This makes the visualization of the decoder easier, as the output p_out of one iteration becomes the input p_in of the next. Fig. 2.7 shows the concept. Each arrow in this figure represents one iteration of decoding. It can be seen that, using EXIT charts, one can study how many iterations are required to achieve a target message error rate.
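This staircase can also be walked numerically. On the BEC the one-parameter chart is exact, so as an illustration we use the erasure-rate transfer function of a regular (3, 6) code (helper names are ours): the closer the channel is to the threshold, the narrower the tunnel and the more iterations are needed.

```python
def f(p_in, p0, dv=3, dc=6):
    """Message (erasure) rate after one iteration for a regular (dv, dc)
    code on the BEC with channel erasure probability p0."""
    return p0 * (1.0 - (1.0 - p_in) ** (dc - 1)) ** (dv - 1)

def iterations_to_target(p0, target=1e-6, max_iter=100000):
    """Count decoding iterations (arrows of the staircase) until the
    message error rate drops below the target."""
    p, n = p0, 0
    while p > target and n < max_iter:
        p = f(p, p0)
        n += 1
    return n

n_easy = iterations_to_target(0.30)   # well inside the open tunnel
n_hard = iterations_to_target(0.42)   # close to the threshold (about 0.4294)
```

The same loop works for any transfer function f, which is what makes the chart a generic analysis tool.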

If the "decoding tunnel" of an EXIT chart is closed, i.e., if for some p_in we have p_out > p_in, convergence does not happen. In such cases we say that the EXIT chart is closed. If the EXIT chart is not closed, we say it is open. An open EXIT chart is always below the 45-degree line. The convergence threshold p_0^* is the worst channel condition for which the tunnel is open, i.e.,

p_0^* = arg sup_{p_0} { f(p_in, p_0) < p_in, for all 0 < p_in ≤ p_0 }.
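As an illustration of this definition, one can approximate p_0^* by a direct scan: for each candidate p_0, check that f(p_in, p_0) < p_in on a fine grid of p_in values. Below we do this for Algorithm A on a BSC, with f the standard message-error-rate transfer function of a regular (3, 6) code; the function names, grid sizes and step sizes are our choices.

```python
def f(p_in, p0, dv=3, dc=6):
    """Message error rate after one iteration of Algorithm A on a BSC."""
    q = (1.0 - (1.0 - 2.0 * p_in) ** (dc - 1)) / 2.0
    return (1.0 - p0) * q ** (dv - 1) + p0 * (1.0 - (1.0 - q) ** (dv - 1))

def chart_is_open(p0, grid=2000):
    """Is f(p_in, p0) < p_in at all grid points 0 < p_in <= p0?"""
    return all(f(p0 * k / grid, p0) < p0 * k / grid for k in range(1, grid + 1))

# Scan candidate channels; p_star approximates the threshold p_0^*.
p_star = 0.0
for k in range(1, 200):
    p0 = k / 2000.0          # step of 0.0005
    if chart_is_open(p0):
        p_star = p0
# p_star comes out near 0.039, close to the known Algorithm A
# threshold of the (3, 6) ensemble.
```

A finer grid of p_in values and a smaller step in p_0 tighten the estimate, at proportionally higher cost.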

Similar formulations and discussions can be made for EXIT charts based on other measures of the message pdf.

EXIT chart analysis is not as accurate as density evolution, because it tracks just a single parameter as the representative of a pdf. For many applications, however, EXIT charts are very accurate. For instance, in [38], EXIT charts are used to approximate the behaviour of iterative turbo decoders on a Gaussian channel very accurately. In Chapter 4, using EXIT charts, we show that the threshold of convergence for LDPC codes on the AWGN channel can be approximated within a few thousandths of a dB of the actual value. In the same chapter, we use EXIT charts to design irregular LDPC codes which perform no more than a few hundredths of a dB worse than those designed by density evolution. One should also notice that when the pdf of the messages can truly be described by a single parameter, e.g., on the BEC, EXIT chart analysis is equivalent to density evolution.

Methods of obtaining EXIT charts for turbo codes are described in [38,39]. We leave a formal definition and methods of obtaining EXIT charts for LDPC codes over the AWGN channel for Chapter 4. We finish this section with a brief comparison between density evolution analysis and EXIT chart analysis.

Density evolution provides an exact analysis for infinitely long codes, and an approximate analysis for finite-length codes. EXIT chart analysis is an approximate analysis even for infinitely long codes, unless the pdf of the messages can be specified by a single parameter. Hence, EXIT chart analysis is recommended only if a one-parameter approximation of the message densities is accurate enough.

On the other hand, density evolution is computationally intensive and in some cases intractable. EXIT chart analysis is fast and applicable to many iterative decoders. EXIT charts visualize the behaviour of an iterative decoder in a simple manner and reduce the process of designing LDPC codes to a linear program [42].

As an example, and to clarify the discussion of this chapter, the analysis of Algorithm A is provided in Appendix C.


v̂ = 1 if ∑_{h∈n(v)} m_{h→v} > 0, and v̂ = 0 if ∑_{h∈n(v)} m_{h→v} < 0.   (2.10)

If ∑_{h∈n(v)} m_{h→v} = 0, both v = 0 and v = 1 have equal probability and the decision can be made randomly.
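A direct transcription of this decision rule as code, with a function name of our own and following the sign convention of the rule above (a positive sum decides '1'):

```python
import random

def hard_decision(incoming):
    """Final decision at a variable node v from all incoming messages
    m_{h -> v}: positive sum decides 1, negative decides 0, tie is random."""
    s = sum(incoming)
    if s > 0:
        return 1
    if s < 0:
        return 0
    return random.randint(0, 1)   # equal belief in both values
```

The tie case is rare for real-valued messages but must still be handled for a complete decoder.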

In the remainder of this thesis, when we refer to the sum-product algorithm, we mean the simplified case for binary parity-check codes, unless otherwise stated.


[1] Graphs which have one or more cycles.