
PAPER

Performance Improvement of Multi-Stage Threshold Decoding with Difference Register

Muhammad Ahsan ULLAH†a), Nonmember and Haruo OGIWARA†b), Member

SUMMARY   This paper presents an improved version of multi-stage threshold decoding with a difference register (MTD-DR) for self-orthogonal convolutional codes (SOCCs). An approximate lower bound on the bit error rate (BER) of maximum likelihood (ML) decoding is also given, and MTD-DR is shown to achieve this approximate lower bound at the higher Eb/N0. A code with a larger minimum Hamming distance reduces the BER in the error floor region, but its BER in the waterfall region shifts to the higher Eb/N0. This paper gives a decoding scheme that improves the BER in both regions, waterfall and error floor. In the waterfall region, a 2-step decoding (2SD) improves the coding gain by 0.40 dB for shorter codes (code length 4200) and by 0.55 dB for longer codes (code length 80000) compared with the conventional MTD-DR. The 2-step decoding serially concatenated with parity check (PC) decoding improves the BER in the error floor region. This paper also gives an effective use of PC decoding that further reduces the BER to 1/8 of that of the ordinary use of PC decoding in the error floor region. Therefore, the 2SD with effective use of parity check decoding improves the BER in the waterfall and the error floor regions simultaneously.
key words: threshold decoding, lower bound of ML decoding, convolutional code, self-orthogonal code

Manuscript received September 30, 2010. Manuscript revised January 28, 2011.
† The authors are with the Department of Electrical Engineering, Nagaoka University of Technology, Nagaoka-shi, 940-2188 Japan.
a) E-mail: [email protected]
b) E-mail: [email protected]
DOI: 10.1587/transfun.E94.A.1449

1. Introduction

A least complex iterative decoding method based on threshold decoding (TD) [1], called multi-stage threshold decoding with difference register (MTD-DR), is presented in [2]–[4]. It is a kind of bit flipping decoding method. MTD-DR uses self-orthogonal convolutional codes (SOCCs) due to their limited error propagation property [5]. The minimum Hamming distance of the codes depends on the number of taps connected to a shift register in the encoder [1], [6]. MTD-DR experiences an error floor due to the minimum Hamming distance of the code [4]. A code with a larger number of taps (i.e. a larger minimum Hamming distance) reduces the bit error rate (BER) in the error floor region, but the BER in the waterfall region shifts to the higher Eb/N0 [2], [4]. The Eb/N0 is defined as the ratio of the energy per information bit (Eb) to the one-sided noise power spectral density (N0). Table 1 lists the abbreviations used in this paper.

Table 1   Meanings of abbreviated words in this paper.
AWGN: Additive white Gaussian noise
BER: Bit error rate
BPSK: Binary phase shift keying
CMTDF: Combined soft decoding multi-stage threshold decoding with feedback
DR: Difference register
LDPC: Low density parity check
ML: Maximum likelihood
MTD-DR: Multi-stage threshold decoding with difference register
OPEs: Orthogonal parity-check equations
PC: Parity check
PC+IT: Effective use of parity check decoding, which includes the parity check decoding in the iterative decoding
SMTD: Soft decoding multi-stage threshold decoding with difference register
SOCC: Self-orthogonal convolutional code
TD: Threshold decoding
WMTD: Weighted bit flipping multi-stage threshold decoding with difference register
2SD: 2-step decoding
2SD:TP1: 2-step decoding type 1
2SD:TP2: 2-step decoding type 2
2SD:TP2+CF: 2-step decoding type 2 concatenated with the CMTDF

This paper proposes a 2-step decoding (2SD) scheme based on the MTD-DR. The 1st step of decoding uses only a part of the parity sequences, so that the decoding is done with a smaller number of orthogonal parity-check equations (OPEs). Each parity-check equation is made from a set of information bits given by the code. The OPEs are a set of parity-check equations in which only one bit is common. The 2nd step of decoding uses all the parity sequences, i.e. this step of decoding works just like the conventional MTD-DR. A similar decoding idea, called parallel decoding, is shown in [7], [8]. Except for simulation results, however, the authors do not show how the system works in parallel, and the information necessary to rebuild the system is absent.
The SOCC type 2 [4] prevents error grouping in the decoded information bit sequence and gives better error performance with MTD-DR. The SOCC type 2 has m information bit sequences for generating n ≥ 2 parity bit sequences. The encoder of the SOCC type 2 contains n sets of tap-connection positions connected to each information shift register, and the code gives the code rate R = m/(m + n). Reference [4] gives the SOCCs type 2 with m = n = 2. This paper shows SOCCs with m = n = 3 to 6 for the 2SD, i.e. R = 1/2. For the code rate R = 4/5, the configuration and the bit error performance of the 2SD are presented in [9].



In the waterfall region, the 2SD improves the coding gain by 0.40 dB and by 0.55 dB compared with the conventional MTD-DR for shorter codes (code length 4200) and for longer codes (code length 80000), respectively. This paper formulates an approximate lower bound on the bit error rate of maximum likelihood (ML) decoding. The 2-step decoding achieves this approximate lower bound at the higher Eb/N0. To improve the bit error performance in the error floor region, the MTD-DR has been serially concatenated with parity check (PC) decoding [4]. To further improve the error performance in the error floor region, this paper proposes an effective use of PC decoding (PC+IT) that includes the PC decoding in the iterative decoding. It reduces the BER to 1/8 of that of the ordinary use of PC decoding. Therefore, the 2-step decoding with effective use of parity check decoding improves the BER in the waterfall and the error floor regions simultaneously. There is a paper [10] with a title similar to that of the present paper; reference [10] uses a min-sum decoding algorithm instead of bit flipping decoding, and the detailed differences are given in [4].
The paper is organized as follows. Section 2 gives the basics of the MTD-DR; the hard and the soft decoding MTD-DR are described in this section. Section 3 gives the approximate lower bound on the BER of ML decoding, and compares the bit error performance of the combined soft decoding MTD-DR with feedback with the approximate lower bound of ML decoding for the codes. Section 4 describes the 2-step decoding scheme and gives its bit error performance. Section 5 provides an effective use of the parity check decoding scheme and gives the bit error performance of the 2SD with PC decoding. Section 6 gives the decoding complexity of the proposed system. Section 7 concludes this paper.

2. Threshold Decoding Concept

2.1 Threshold Decoding

A systematic self-orthogonal convolutional code (SOCC) with code rate R = 1/2, constraint length M and number of OPEs J is considered. The code is determined by the tap-connection sets in the encoder. Let g_a, a = 1, 2, ..., J, denote the tap-connection positions in the shift register of the encoder involved in generating a parity bit sequence. The minimum Hamming distance (dmin) depends on the number of OPEs of the code [6]. The dotted section in Fig. 1 shows the SOCC encoder with R = 1/2, M = 6, J = 4, dmin = J + 1 = 5 and tap-connection positions g_1 = 0, g_2 = 1, g_3 = 4 and g_4 = 6. The information bit sequence is defined by U = {u_0, u_1, ...}. The encoder generates each parity bit from a set of J information bits. The i-th parity bit is defined by

  v_i = \bigoplus_{a=1}^{J} u_{i-g_a}, \quad i = 0, 1, \ldots    (1)

where ⊕ denotes modulo-2 addition throughout this paper. The information sequence U and the parity sequence V ≜ {v_0, v_1, ...} make a systematic codeword c ≜ {U, V}. A binary codeword bit is modulated to a binary phase shift keying (BPSK) signal and transmitted through the additive white Gaussian noise (AWGN) channel. The channel output is defined as Y = {Y_u, Y_v}, where Y_u = {y_u0, y_u1, ...} and Y_v = {y_v0, y_v1, ...} are the information and the parity parts of the channel output signals, respectively. Tail biting termination is used in the system. At the receiving end, a received word is formed by the hard decision of the received signals Y. The information and the parity parts of the received word are defined as Ũ = {ũ_0, ũ_1, ...} and Ṽ = {ṽ_0, ṽ_1, ...}, respectively. The threshold decoding first generates a syndrome bit sequence from the received information and parity bit sequences. The i-th syndrome bit is defined by

  s_i = \tilde{v}_i \oplus \bigoplus_{a=1}^{J} \tilde{u}_{i-g_a}    (2)

The hard decoding decision of threshold decoding (TD) depends on the checksum value calculated from the set of J syndrome bits related to the information bit under decoding [1]. For decoding the j-th information bit, the checksum value L_j is calculated by

  L_j = \sum_{a=1}^{J} s_{j+g_a}    (3)

When the checksum value exceeds the threshold value T = ⌊(J + 1)/2⌋, i.e. L_j > T, the decoding decision is made by flipping the information bit. The decoding is terminated after all the information bits in the received word have been checked.

2.2 Multi-Stage Threshold Decoding with Difference Register

Multi-stage threshold decoding with difference register (MTD-DR) is an iterative threshold decoding scheme; each decoding stage of the decoder corresponds to one iteration of the scheme. Figure 1 shows a hard decision MTD-DR decoding scheme. In addition to the information shift register, the decoder contains an extra shift register called the difference register (DR). The DR holds the pairwise (binary) difference between the information sequence of the received word and the decoded information sequence. The hard decoding decision of MTD-DR depends on the checksum value calculated from the set of J syndrome bits and the DR bit related to the information bit under decoding. For decoding the j-th information bit, the checksum value L_j is calculated by

  L_j = \sum_{a=1}^{J} s_{j+g_a} + d_j    (4)


Fig. 1 MTD-DR for the SOCC with J = 4, M = 6, dmin = 5 and R = 1/2; threshold value T = ⌊(J + 1)/2⌋ for hard decoding. Tap-connection positions are g_1 = 0, g_2 = 1, g_3 = 4 and g_4 = 6.

where d_j is the j-th bit in the difference register. When the checksum value exceeds the threshold value, i.e. L_j > T = ⌊(J + 1)/2⌋, the decoding decision is made by flipping the information bit, and the related DR and syndrome bits are inverted. After flipping each information bit, the Hamming distance between the received word and the decoded codeword, generated from the decoded information bits, becomes shorter [4]. This is because the contents of the syndrome register give the Hamming distance between the parity part of the received word and that of the decoded codeword, and the contents of the DR give the Hamming distance between their information parts. The decoding decision is flipped when more than (J + 1)/2 among the J + 1 bits from the DR and syndrome registers are 1, and the flip sets more than (J + 1)/2 of these bits to 0. Therefore, the Hamming distance between the received word and the decoded codeword becomes smaller after each flip.
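To make the hard-decision procedure concrete, the following is a minimal Python sketch of iterative MTD-DR passes following Eqs. (1), (2) and (4) and the flipping rule above. It is an illustrative sketch, not the authors' implementation: a single rate-1/2 parity sequence is assumed, cyclic indexing stands in for the tail biting termination, and the function names and iteration limit are ours.

```python
# Minimal hard-decision MTD-DR sketch (illustrative, not the authors' code).
# One rate-1/2 SOCC with tap positions g (e.g. g = [0, 1, 4, 6], the J = 4 code of Fig. 1).
# Cyclic (mod N) indexing stands in for tail biting; `iterations` is a placeholder limit.

def parity(u, i, g, N):
    # v_i = xor of u_{i-g_a}, Eq. (1)
    bit = 0
    for ga in g:
        bit ^= u[(i - ga) % N]
    return bit

def mtd_dr_hard(u_rx, v_rx, g, iterations=10):
    J = len(g)
    T = (J + 1) // 2                          # threshold T = floor((J+1)/2)
    N = len(u_rx)
    u_hat = list(u_rx)                        # decoded information bits, start from received word
    d = [0] * N                               # difference register
    # syndrome s_i = v~_i xor (xor of u~_{i-g_a}), Eq. (2)
    s = [v_rx[i] ^ parity(u_hat, i, g, N) for i in range(N)]
    for _ in range(iterations):
        flipped = False
        for j in range(N):
            # checksum L_j = sum_a s_{j+g_a} + d_j, Eq. (4)
            L = sum(s[(j + ga) % N] for ga in g) + d[j]
            if L > T:                         # majority of the J+1 checks vote for a flip
                u_hat[j] ^= 1
                d[j] ^= 1
                for ga in g:                  # flipping u_j inverts the syndromes it touches
                    s[(j + ga) % N] ^= 1
                flipped = True
        if not flipped:
            break                             # no flips in a full pass: stop iterating
    return u_hat
```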

The soft decoding decision of MTD-DR depends on the checksum value calculated from the magnitudes of the J parity signals related to the information signal under decoding and the magnitude of the information signal itself [4]. For decoding the j-th information bit, the soft decoding MTD-DR (SMTD) calculates the checksum value L_j by

  L_j = \sum_{a=1}^{J} w_{j+g_a} (1 - 2 s_{j+g_a}) + w_{d_j} (1 - 2 d_j)    (5)

where w_k represents the magnitude of the parity signal y_vk and w_dj represents the magnitude of the information signal y_uj. When the checksum value is negative, i.e. L_j < 0, the decoding decision is made by flipping the information bit; at the same time, the corresponding syndrome bits and difference register bit are flipped. After flipping each information bit, the Euclidean distance between the received signal sequence and the decoded codeword, whose bits are represented by ±1, becomes shorter [4].
The weighted bit flipping (WBF) algorithm was proposed for decoding low density parity check (LDPC) codes [11]. This paper gives the weighted bit flipping MTD-DR (WMTD), which finds the value of w_k by

  w_k = \min_{a=1,2,\ldots,J} \left( |y_{u_{k-g_a}}|, |y_{v_k}| \right)    (6)

and w_dj is the same as in SMTD. The checksum value is then calculated by Eq. (5) and the decoding decision is made accordingly. Neither the WMTD nor the SMTD gives attractive error performance individually, so the combined soft decoding MTD-DR with feedback has been proposed in [4], which achieves attractive error performance; it is denoted CMTDF in this paper. Figure 2 shows a schematic diagram of the CMTDF decoding scheme.

Fig. 2 A schematic diagram of the combined soft decoding MTD-DR with feedback (CMTDF) decoding scheme.
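The soft checksum of Eq. (5), with either the SMTD weight or the WMTD weight of Eq. (6), can be sketched in the same style. Again this is an illustrative sketch under the same single-parity-sequence and cyclic-indexing assumptions, not the authors' code.

```python
# Soft bit-flipping checksum of Eqs. (5)-(6) (illustrative sketch, not the authors' code).
# y_u, y_v are the received soft values, s and d the current syndrome and DR bits.

def checksum_soft(j, y_u, y_v, s, d, g, weighted=False):
    N = len(y_u)
    L = 0.0
    for ga in g:
        k = (j + ga) % N                      # index of the syndrome/parity touched by u_j
        if weighted:
            # WMTD weight, Eq. (6): smallest magnitude among the signals in the check
            w = min(min(abs(y_u[(k - gb) % N]) for gb in g), abs(y_v[k]))
        else:
            # SMTD weight: magnitude of the parity signal itself
            w = abs(y_v[k])
        L += w * (1 - 2 * s[k])               # Eq. (5), syndrome terms
    L += abs(y_u[j]) * (1 - 2 * d[j])         # Eq. (5), difference-register term (w_{d_j})
    return L                                  # flip u_j (and the related s, d bits) if L < 0
```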


2.3 Suitable Codes for MTD-DR

The SOCC is categorized into two types: 1) SOCC type 1 and 2) SOCC type 2 [4]. The SOCC type 1 generates one parity bit sequence by m tap-connection sets connected to m shift registers (i.e. each shift register contains one tap-connection set) of the encoder and provides the code rate R = m/(m + 1). The SOCC type 2 generates n ≥ 2 parity bit sequences of a codeword by m × n tap-connection sets connected to m shift registers (i.e. each shift register contains n tap-connection sets) of the encoder and provides the code rate R = m/(m + n). MTD-DR for the SOCCs type 1 makes unavoidable error grouping in the decoded information sequences and degrades the error performance, but the SOCCs type 2 prevent the error grouping and give better error performance than the SOCCs type 1 [4]. Thus, the SOCC type 2 is a suitable code for MTD-DR. This paper shows the SOCCs type 2 with m = n = 3 to 6 for the 2-step decoding.

Table 2   SOCCs with m = n = 2, R = 2/4, M = 500 for the shorter codes (code length N = 4200). Each row gives the tap-connection position sets of one information register.
J = 8,  dmin = 9:   reg. 1: {0, 15, 295, 434}, {233, 265, 413, 432};  reg. 2: {126, 194, 388, 398}, {29, 236, 406, 499}
J = 10, dmin = 11:  reg. 1: {0, 51, 198, 251, 465}, {23, 187, 247, 370, 371};  reg. 2: {40, 76, 176, 200, 259}, {161, 230, 281, 328, 483}
J = 12, dmin = 13:  reg. 1: {0, 33, 191, 225, 288, 451}, {41, 173, 369, 422, 458, 489};  reg. 2: {1, 14, 61, 76, 357, 470}, {87, 115, 194, 352, 358, 422}
J = 14, dmin = 15:  reg. 1: {0, 30, 56, 105, 165, 263, 291}, {19, 69, 77, 80, 111, 299, 461};  reg. 2: {164, 191, 246, 255, 303, 338, 499}, {19, 114, 133, 140, 176, 236, 485}

Fig. 3 A structure of an encoder for the SOCC type 2 with m = n = 3, J = 8, dmin = 9, M = 9 and R = 3/6. Tap-connection positions are g^{(1)}_{1,1} = 0, g^{(1)}_{1,2} = 4, g^{(1)}_{2,1} = 1, g^{(1)}_{2,2} = 6, g^{(1)}_{3,1} = 0, g^{(1)}_{3,2} = 3, g^{(1)}_{3,3} = 7, g^{(1)}_{3,4} = 9, g^{(2)}_{1,1} = 1, g^{(2)}_{1,2} = 7, g^{(2)}_{2,1} = 4, g^{(2)}_{2,2} = 6, g^{(2)}_{3,1} = 0, g^{(2)}_{3,2} = 1, g^{(2)}_{3,3} = 4, g^{(2)}_{3,4} = 9, g^{(3)}_{1,1} = 0, g^{(3)}_{1,2} = 9, g^{(3)}_{2,1} = 2, g^{(3)}_{2,2} = 6, g^{(3)}_{3,1} = 3, g^{(3)}_{3,2} = 4, g^{(3)}_{3,3} = 7, g^{(3)}_{3,4} = 9.

Let J^{(b)}_p denote the number of taps in the p-th tap-connection set connected to the b-th shift register, and let g^{(b)}_{p,a}, a = 1, 2, ..., J^{(b)}_p, denote the tap-connection positions involved in generating the p-th parity sequence, where b = 1, 2, ..., m, p = 1, 2, ..., n and J = \sum_{p=1}^{n} J^{(b)}_p. Figure 3 shows an example of the SOCC type 2 with m = n = 3, J = J^{(b)}_1 + J^{(b)}_2 + J^{(b)}_3 = 2 + 2 + 4 = 8, M = 9, R = 3/6 and dmin = 9. The encoder for the SOCCs type 2 generates the i-th parity bit in the p-th parity sequence by

  v^{(p)}_i = \bigoplus_{b=1}^{m} \bigoplus_{a=1}^{J^{(b)}_p} u^{(b)}_{i - g^{(b)}_{p,a}}    (7)

where u^{(b)}_i represents the i-th information bit in the b-th information shift register of the encoder. For decoding each information bit of each information sequence, the necessary operations are done in the same manner as in Eq. (2) to Eq. (5).
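A sketch of the type 2 parity generation of Eq. (7) is given below, with the tap sets of the Fig. 3 example filled in under one consistent reading of the caption's g^{(b)}_{p,a} notation; the data layout and names are illustrative assumptions, not the authors' implementation.

```python
# SOCC type 2 parity generation, Eq. (7) (illustrative sketch).
# taps[b][p] is the tap-connection set g^{(b)}_{p,*} of register b for parity sequence p.
# Cyclic indexing stands in for tail biting, as before.

def socc2_parities(u, taps):
    m = len(taps)                 # number of information registers / sequences
    n = len(taps[0])              # number of parity sequences
    N = len(u[0])                 # block length of each information sequence
    v = [[0] * N for _ in range(n)]
    for p in range(n):
        for i in range(N):
            bit = 0
            for b in range(m):
                for g in taps[b][p]:          # Eq. (7): xor over all registers and their taps
                    bit ^= u[b][(i - g) % N]
            v[p][i] = bit
    return v

# Fig. 3 code (m = n = 3, J = 2 + 2 + 4 = 8); the grouping below is one consistent
# reading of the Fig. 3 caption with superscript = register, first subscript = parity.
fig3_taps = [
    [[0, 4], [1, 6], [0, 3, 7, 9]],      # register 1: tap sets for parity sequences 1..3
    [[1, 7], [4, 6], [0, 1, 4, 9]],      # register 2
    [[0, 9], [2, 6], [3, 4, 7, 9]],      # register 3
]
```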

3. Approximate Lower Bound on the Bit Error Rate

3.1 Approximate Lower Bound on the BER of ML Decoding

Let F be the number of information bits in a codeword and A(w, d_w) be the number of codewords with Hamming weight d_w, provided that the information weight is w. The BER of ML decoding is union upper bounded by [12], [13]

  P_b \le \frac{1}{F} \sum_{w=1}^{F} w A(w, d_w)\, Q\!\left( \sqrt{ \frac{2 R d_w E_b}{N_0} } \right)    (8)

where Q(x) \triangleq \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-y^2/2}\, dy. At the higher Eb/N0 the equality approximately holds; this is known as a rule of thumb. Therefore, at the higher Eb/N0, the bit error rate of ML decoding is given by

  P_b \approx \frac{1}{F} \sum_{w=1}^{F} w A(w, d_w)\, Q\!\left( \sqrt{ \frac{2 R d_w E_b}{N_0} } \right)    (9)

Since the code is systematic, there are at least F codewords with information weight w = 1 and Hamming weight d_1 = J + 1, i.e. A(1, d_1) ≥ F. Then, the BER of ML decoding is approximately lower bounded by

  P_b \gtrsim \frac{F}{F}\, Q\!\left( \sqrt{ \frac{2 R d_1 E_b}{N_0} } \right) = Q\!\left( \sqrt{ \frac{2 R (J+1) E_b}{N_0} } \right)    (10)
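The approximate lower bound of Eq. (10) is easy to evaluate numerically; the helper below is only an illustration of the formula, not the software used to produce the figures.

```python
# Approximate ML lower bound of Eq. (10), P_b >= Q(sqrt(2 R (J+1) Eb/N0))
# (a small numerical helper, not taken from the paper's software).
import math

def q_func(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ml_lower_bound(ebno_db, rate, J):
    ebno = 10.0 ** (ebno_db / 10.0)           # Eb/N0 as a linear ratio
    return q_func(math.sqrt(2.0 * rate * (J + 1) * ebno))

# Example: an R = 1/2, J = 12 code at Eb/N0 = 4.0 dB
print(ml_lower_bound(4.0, 0.5, 12))
```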

3.2 Performance of the CMTDF and the Approximate Lower Bound of ML Decoding

Figure 4 shows the BER of the CMTDF and the approximate lower bound on the BER of ML decoding for the shorter codes in Table 2. The dotted lines 1 to 4 in the figure denote the approximate lower bounds on the BER of ML decoding for the codes with the number of OPEs 8, 10, 12 and 14, respectively. Figure 4 shows that, at the higher Eb/N0, the BER of the CMTDF coincides with the approximate lower bound of ML decoding. The code with the larger number of OPEs reduces the BER in the error floor region, but its BER in the waterfall region shifts to the higher Eb/N0. This means that the number of OPEs of a code makes a trade-off between the BER in the waterfall region and that in the error floor region.

Fig. 4 Bit error performance of the CMTDF and the approximate lower bound of ML decoding for the SOCCs in Table 2.

4. 2-Step Decoding Based on CMTDF

4.1 2-Step Decoding

The CMTDF for the codes with smaller J gives attractive error performance in the waterfall region. The 2-step decoding (2SD) works on the basis of this phenomenon. In the 1st step of decoding, the CMTDF uses a part of the parity sequences, so that the decoding is done with a smaller number of OPEs of the code. In the 2nd step of decoding, it uses all the parity sequences, i.e. the 2nd step of decoding is identical to the conventional CMTDF. Figure 3 shows a local encoder used for the 2SD scheme. In the 1st step of decoding, one parity sequence, say ṽ_3 ≜ {ṽ^{(3)}_0, ṽ^{(3)}_1, ...}, whose number of OPEs (i.e. number of taps in a tap-connection set) is J_{k=3} = J^{(1)}_3 = J^{(2)}_3 = J^{(3)}_3 = 4, is not used. The 1st step of decoding is then done with the J_s = J − J_k OPEs. Each decoding step terminates when no information bit is flipped in each component decoding (WMTD and SMTD) or when the maximum number of iterations is reached. After the 1st step of decoding terminates, the 2nd step of decoding is done with the J OPEs of the code. The 2nd step of decoding is expected to give error performance similar to the conventional CMTDF, which realizes the approximate lower bound of ML decoding performance. Figure 5(a) shows a schematic diagram of the 2SD based on the CMTDF, called 2-step decoding type 1 (2SD:TP1). The 2SD based on a modified version of the CMTDF, shown in Fig. 5(b), is called 2-step decoding type 2 (2SD:TP2). The type 2 decoding achieves a larger coding gain than the type 1 decoding in the waterfall region. Unfortunately, the type 2 decoding scheme degrades the error performance in the error floor region. To improve the BER in the error floor region, the 2SD:TP2 concatenated with the CMTDF (2SD:TP2+CF), shown in Fig. 5(c), is proposed.

Fig. 5 Schematic diagrams of the 2-step decoding and its modifications.
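The control flow of the 2SD just described can be summarized as in the sketch below. The pass routine, its state object, and the iteration limits are placeholders standing for the CMTDF machinery of Sect. 2 and for the iteration schedule described in the text; they are illustrative assumptions, not the authors' implementation.

```python
# Two-step decoding control flow (illustrative sketch of the 2SD idea, not the
# authors' implementation).  `decode_pass(state, active)` stands for one CMTDF-style
# pass (WMTD/SMTD bit flipping, Eqs. (2)-(6)) restricted to the parity sequences in
# `active`; it should return True if it flipped at least one bit.

def two_step_decode(state, decode_pass, n_parities, omitted, max_iter_1=24, max_iter_2=48):
    # 1st step: use only J_s = J - J_k OPEs by leaving one parity sequence out.
    reduced = [p for p in range(n_parities) if p != omitted]
    for _ in range(max_iter_1):
        if not decode_pass(state, reduced):
            break
    # 2nd step: conventional CMTDF with all parity sequences (all J OPEs).
    everything = list(range(n_parities))
    for _ in range(max_iter_2):
        if not decode_pass(state, everything):
            break
    return state
```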

The 2SD:TP2+CF achieves the approximate lower bound of ML decoding performance in the error floor region at the higher Eb/N0. Figure 5(c) shows that the 2SD:TP2+CF has two decoding sections: i) the 2SD:TP2 and ii) the CMTDF. The maximum number of iterations for the 2SD:TP2 is set as follows: the number of inner iterations of the 1st decoding step is set to 2 (for WMTD and SMTD, with the decoding done by the J_s OPEs), the number of inner iterations of the 2nd decoding step is set to 4 (for WMTD and SMTD, with the decoding done by the J OPEs), and the number of outer iterations is set to 12. The maximum number of iterations for the CMTDF is set as follows: the numbers of inner iterations of WMTD and SMTD are set to 2 and 4, respectively, and the number of outer iterations is set to 8. Therefore, the 2SD:TP2+CF completes its decoding within the maximum number of iterations 12 × 2(2 + 4) + 8(2 + 4) = 192. When the maximum number of iterations in each decoding section is set to more than these values, no bit error performance improvement is observed.
The SOCC type 2 with n = 2 is not a good code for the 2SD, because when one parity sequence is omitted in the 1st decoding step, the decoding is done with an SOCC type 1 and the error performance degrades [4]. A code with n ≥ 3 is a good choice for the 2SD. The conventional CMTDF uses a symmetric self-orthogonal convolutional code. A code is called symmetric if every parity bit is made from the same number of information bits, which means that each tap-connection set has the same number of tap-connection positions, i.e. J^{(1)}_1 = J^{(2)}_1 = ... = J^{(m)}_n. Table 2 shows some symmetric codes suitable for the CMTDF. When the 1st step of decoding of a 2SD omits one parity sequence of a symmetric code, J_s becomes J^{(m)}_n (n − 1) = J(n − 1)/n, which is more than 50% of J for a code with n ≥ 3. In experiments, we observed that the 2SD gives better performance when the 1st step of decoding is done with 40% to 50% of the OPEs of the code. Therefore, asymmetric codes are considered. The tap-connection parameters of an asymmetric code are set in such a way that one parity sequence is made from m J_k information bits, where J_k is 50% to 60% of J, since J_k = J − J_s. The 1st decoding step omits the parity sequence with J_k OPEs and is expected to improve the bit error performance in the waterfall region. Table 3 and Table 4 show some asymmetric codes suitable for the 2SD.
Let us consider the code in Table 3 with J = 10 and J_s = 4. This code has m = 3 information shift registers and generates n = 3 parity sequences. The 1st and the 2nd parity sequences are each made from 3 × 2 = 6 information bits and the 3rd parity sequence is made from 3 × 6 = 18 information bits, so this is an asymmetric code. The 1st step of decoding is done with the J_s = 4 OPEs (i.e. 40% of J) and the 2nd step of decoding is done with the J = 10 OPEs of the code.
The code searching algorithm is straightforward and brute force. First, the code parameters (such as M, m, n and the number of taps in each tap-connection set) are set. The computer then generates mn sets of tap-connection positions pseudo-randomly according to the tap-connection parameters. The generated tap-connection sets are checked as to whether they generate an SOCC, and a generated SOCC is selected. For a fixed M value, the computer searching time increases rapidly as the m, n and J values of the code increase. The symmetric codes in [4] have m = n = 2. A personal computer searches a code with J = 12 within 2 hours for the shorter code (code length 4200); the searching time becomes about 10 minutes for the longer code (code length 80000). For the asymmetric codes with m = n = 6 and J = 12, the computer searching time was 10 hours for the longer codes given in this paper, and the computer did not find any shorter code with m = n = 6 and J = 12 within 2 weeks. The computer takes more code searching time for shorter codes, either symmetric or asymmetric, than for longer codes. Since the code searching algorithm is brute force, it may be possible in the future to improve it so that it can find some shorter codes with large m, n and J values. Nevertheless, note that the code searching is carried out only once, in the system design process, and the searching complexity is independent of the complexity of the encoding and the decoding of the system.
Although the MTD-DR based decoding achieves the approximate lower bound of ML decoding performance at the higher Eb/N0, there is an opportunity to improve the error performance in the waterfall region with some good codes. In experiments, we tried several codes with fixed M, m, n, J, J_s and N values, but their bit error performance was the same. However, we tried very few codes compared to all the codes possible under the given parameters, so it may be possible to find some good codes that improve the bit error performance in the waterfall region.
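The self-orthogonality test used in the search is the one for SOCC type 2 described in [4] and is not reproduced here. As an illustration of the brute-force idea, the sketch below searches a simple rate-1/2, single-register code of the kind in Sect. 2.1, for which the classical condition is that all pairwise differences of the tap positions are distinct [1], [6]; the names and parameters are illustrative.

```python
# Brute-force style search for a rate-1/2 SOCC (illustrative sketch only; the type 2,
# multi-register check of [4] is more involved and is not reproduced here).
import random

def is_self_orthogonal(taps):
    # Classical R = 1/2 condition: all pairwise differences of tap positions are distinct.
    diffs = set()
    for a in taps:
        for b in taps:
            if a == b:
                continue
            d = a - b
            if d in diffs:
                return False          # a repeated difference breaks orthogonality
            diffs.add(d)
    return True

def search_code(J, M, max_trials=100000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_trials):
        taps = sorted(rng.sample(range(M + 1), J))
        if is_self_orthogonal(taps):
            return taps               # first tap set passing the check
    return None

print(search_code(J=4, M=6))          # may return e.g. a J = 4 tap set within memory M = 6
```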

4.2 Performance of 2-Step Decoding

Figure 6 shows the bit error performance of the shorter codes (code length N = 4200) given in Table 3. The dotted lines without marks in the figure show the approximate lower bound of ML decoding performance for the codes with the number of OPEs indicated next to them. The 2SD:TP2+CF achieves the approximate lower bound of ML decoding performance in the error floor region at the higher Eb/N0. The decoding scheme achieves an additional coding gain of 0.40 dB compared with the conventional CMTDF in the waterfall region. For the code with J = 12, the 2SD:TP2+CF and the conventional CMTDF consume 62 and 14 iterations on average, respectively, at Eb/N0 = 4.0 dB.
Figure 7 shows the bit error rate of the longer codes (code length N = 80000) in Table 4. For the longer codes, the 2SD:TP2+CF achieves 0.55 dB more coding gain than the conventional CMTDF in the waterfall region. For the code with J = 12, the 2SD:TP2+CF and the conventional CMTDF expend 174 and 38 iterations on average, respectively, at Eb/N0 = 3.0 dB.

Fig. 6 Bit error performance of the CMTDF and the 2SD:TP2+CF for the shorter codes in Table 3, and the approximate lower bound of ML decoding for the codes.

4.3 Iteration Reduction Decoding Scheme

The average number of iterations of the 2SD:TP2+CF can be reduced by reducing the inner and the outer iterations of each decoding section of the scheme. In this case, experiments show that the waterfall performance shifts to the higher Eb/N0 and the approximate lower bound of ML decoding performance is not achieved. Therefore, the inner and the outer iterations of the 2SD:TP2+CF are adjusted in such a way that the coding gain in the waterfall region does not degrade by more than 0.1 dB. The error floor degradation from the approximate lower bound of ML decoding is recovered by concatenating the SMTD with the 2SD:TP2+CF; the SMTD in this case recovers the approximate lower bound of ML decoding performance in the error floor region within 3 additional iterations. The performance curve 'Reduce ITR' in Fig. 7 shows the BER performance of the iteration reduction scheme of the 2SD:TP2+CF for the code with m = n = 6, J_s = 6 and J = 12. The maximum number of iterations for the iteration reduction decoding scheme of the 2SD:TP2+CF is set as 12 × 2(1 + 2) + 5(1 + 2) + 3 = 90. The decoding scheme achieves the approximate lower bound of ML decoding performance in the error floor region at Eb/N0 = 2.3 dB, and the waterfall degradation is less than 0.1 dB compared with the 2SD:TP2+CF without iteration reduction. The iteration reduction scheme of the 2SD:TP2+CF for the code with J = 12 expends 85 iterations on average at Eb/N0 = 3.0 dB. This means that the iteration reduction scheme reduces the decoding complexity and latency by more than 50% compared with the decoding scheme without iteration reduction.
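For reference, the two iteration budgets quoted in Sects. 4.1 and 4.3 expand as follows; this only spells out the arithmetic as written and does not assert any particular scheduling interpretation of the factors.

```latex
% Maximum-iteration budgets of Sects. 4.1 and 4.3 (arithmetic only)
\begin{align*}
  12 \times 2(2+4) + 8(2+4)     &= 12 \times 12 + 8 \times 6 = 144 + 48 = 192,\\
  12 \times 2(1+2) + 5(1+2) + 3 &= 12 \times 6 + 5 \times 3 + 3 = 72 + 15 + 3 = 90.
\end{align*}
```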


Table 3   SOCCs for the 2SD with m = n = 3, R = 3/6 and M = 333 for the shorter codes (code length N = 4200). Each row gives the tap-connection position sets of one information register.
J = 8, J_s = 4, dmin = 9:
  reg. 1: {0, 332}, {9, 142}, {87, 216, 222, 245}
  reg. 2: {117, 131}, {75, 129}, {194, 212, 238, 313}
  reg. 3: {148, 292}, {114, 215}, {33, 48, 166, 203}
J = 10, J_s = 4, dmin = 11:
  reg. 1: {0, 304}, {157, 328}, {80, 82, 120, 217, 272, 311}
  reg. 2: {152, 237}, {159, 254}, {106, 123, 139, 242, 277, 326}
  reg. 3: {206, 253}, {1, 299}, {49, 226, 231, 274, 301, 332}
J = 12, J_s = 6, dmin = 13:
  reg. 1: {0, 237, 254}, {17, 78, 169}, {47, 55, 167, 196, 210, 285}
  reg. 2: {189, 229, 268}, {44, 63, 332}, {26, 35, 72, 77, 206, 308}
  reg. 3: {97, 124, 328}, {7, 225, 330}, {32, 68, 93, 106, 327, 330}

Table 4   SOCCs for the 2SD with M = 1000 and dmin = J + 1 for the longer codes (code length N = 80000). Each row gives the tap-connection position sets of one information register.
J = 8, J_s = 4, R = 4/8, m = n = 4:
  reg. 1: {0, 643}, {295}, {238}, {129, 360, 438, 794}
  reg. 2: {64}, {727, 861}, {806}, {12, 37, 130, 383}
  reg. 3: {661}, {74}, {93, 833}, {9, 356, 558, 709}
  reg. 4: {732}, {54, 884}, {519}, {167, 302, 638, 999}
J = 10, J_s = 5, R = 5/10, m = n = 5:
  reg. 1: {0, 886}, {690}, {999}, {873}, {120, 266, 305, 749, 812}
  reg. 2: {852}, {57, 793}, {134}, {797}, {43, 182, 348, 534, 846}
  reg. 3: {593}, {463}, {677, 941}, {136}, {192, 384, 426, 482, 877}
  reg. 4: {333}, {188}, {894}, {224, 737}, {123, 517, 840, 941, 991}
  reg. 5: {47, 524}, {451}, {633}, {626}, {65, 392, 466, 784, 851}
J = 12, J_s = 6, R = 6/12, m = n = 6:
  reg. 1: {0}, {76}, {292}, {664}, {145, 161}, {113, 147, 207, 381, 852, 924}
  reg. 2: {691}, {740}, {364}, {441, 700}, {763}, {22, 119, 192, 211, 240, 763}
  reg. 3: {38}, {541}, {618, 824}, {276}, {722}, {61, 272, 391, 577, 712, 957}
  reg. 4: {999}, {361, 761}, {522}, {550}, {30}, {2, 222, 290, 561, 574, 823}
  reg. 5: {533, 944}, {868}, {135}, {321}, {980}, {87, 112, 421, 748, 775, 780}
  reg. 6: {955}, {789}, {487}, {372}, {123, 975}, {262, 270, 494, 626, 649, 936}
J = 14, J_s = 8, R = 5/10, m = n = 5:
  reg. 1: {0, 912}, {411, 624}, {417, 955}, {427, 859}, {183, 384, 507, 593, 677, 744}
  reg. 2: {807, 817}, {380, 828}, {182, 226}, {281, 701}, {2, 280, 374, 540, 744, 823}
  reg. 3: {34, 780}, {278, 537}, {247, 744}, {178, 703}, {58, 127, 477, 580, 764, 928}
  reg. 4: {368, 585}, {44, 430}, {271, 708}, {623, 953}, {45, 362, 721, 840, 874, 904}
  reg. 5: {330, 957}, {160, 219}, {638, 999}, {245, 459}, {300, 322, 432, 644, 707, 867}

Fig. 7 Bit error performance of the CMTDF and the 2SD:TP2+CF for the longer codes in Table 4, and the approximate lower bound of ML decoding for the codes.

5. 2-Step Decoding with Parity Check Decoding

5.1 Effective Use of Parity Check Decoding

The MTD-DR based decoding with parity check (PC) decoding achieves attractive bit error performance in the error floor region [4]. The PC encoding adds a parity check bit to the information bit sequence after every n1 bits. When a parity check fails, the PC decoding searches for the minimum absolute checksum value among the checksum values related to the n1 information bits and 1 parity check bit, which have already been calculated by the component decoder (WMTD or SMTD) of the system, and the decoding is done by flipping the bit related to the minimum absolute checksum value. The PC decoding in Fig. 8(a) is the one presented in [4]. Figure 8(b) shows the effective use of PC decoding, where the PC decoding works inside the outer iterations.

Fig. 8 Schematic diagrams of the CMTDF with parity check decoding.
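The PC decoding rule described above can be sketched as follows for one block of n1 information bits plus its check bit. Even parity for the appended check bit and the names used here are illustrative assumptions, not the authors' specification.

```python
# Parity-check (PC) decoding step for one block of n1 information bits plus one
# parity check bit (illustrative sketch of the rule described above, not the
# authors' code).  `bits` holds the n1+1 hard-decision bits of the block and
# `checksums` the corresponding soft checksum values already computed by the
# component decoder (WMTD or SMTD).

def pc_decode_block(bits, checksums):
    # parity check over the n1 information bits and the appended check bit
    # (even parity is assumed for the appended check bit)
    if sum(bits) % 2 == 0:
        return bits                                   # check satisfied: nothing to do
    # check failed: flip the least reliable bit, i.e. the one with the minimum
    # absolute checksum value
    weakest = min(range(len(bits)), key=lambda i: abs(checksums[i]))
    bits = list(bits)
    bits[weakest] ^= 1
    return bits
```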

5.2 Performance of 2-Step Decoding with Parity Check Decoding

In this paper, n1 is set to 50 bits; the overall coding rate in this case becomes 0.49 < 0.50. Figure 9 shows the BER of the 2SD:TP2+CF with PC decoding for the shorter codes. The curves 'PC+IT' and 'PC' represent the BER with effective use of parity check decoding and the BER with ordinary use of parity check decoding, respectively. The 'PC+IT' for the shorter codes reduces the BER to 1/6 of that of the 'PC' decoding at Eb/N0 = 2.9 dB. At the same time, the 'PC+IT' decoding reduces the average number of iterations by 6 compared with the 'PC' decoding scheme. The error floor for the shorter code with J = 12 is not observed down to a BER of 10^{-9}. Figure 10 shows the BER of the 2SD:TP2+CF with PC decoding for the longer codes. At Eb/N0 = 2.3 dB, 'PC+IT' reduces the BER to 1/8 of that of the 'PC' decoding in the error floor region and reduces the average number of iterations by 7 compared with the 'PC' decoding.

Fig. 9 Bit error performance of the 2SD:TP2+CF with parity check decoding for the shorter codes in Table 3.
Fig. 10 Bit error performance of the 2SD:TP2+CF with parity check decoding for the longer codes in Table 4.

6. Decoding Complexity

Decoding complexity is defined as the total number of operations (e.g. modulo-2 additions, real number summations, etc.) necessary to decode one information bit. At the beginning of decoding, the syndrome bits are calculated by modulo-2 additions; J^2 modulo-2 additions are necessary to calculate the set of syndrome bits related to one information bit. When the decoding decision is a flip, an additional modulo-2 addition is necessary to flip the information bit, and at the same time J syndrome bits and one DR bit are also flipped. Let the 1st and the 2nd step of decoding terminate after average numbers of iterations I1 and I2 (each being the sum of the average numbers of iterations of WMTD and SMTD), respectively. Therefore, the total number of modulo-2 addition operations does not exceed J^2 + (I1 + I2)(J + 2). The WMTD finds the minimum magnitude of the received signals as the weight value related to a syndrome bit; for decoding each information bit, J weight values are necessary, so the minimum search operation is done J times. The checksum value is calculated from the J received signals of the parity part and one received signal of the information part, so the total number of real number summation operations is I1 J_s + I2 J. When the decoding decision is a flip, one difference register bit and J syndrome bits are flipped, which inverts the signs of w_dj and w_k (k = 1, 2, ..., J) in Eq. (5), respectively; thus the number of sign changing operations on real numbers does not exceed (I1 + I2)(J + 1). For every n1 + 1 bits, the parity check decoding uses at most n1 + 2 modulo-2 additions (n1 + 1 for parity checking and 1 for correcting) and one minimum searching operation, which are added to the 2-step decoding complexity for the 2-step decoding with PC decoding scheme. The decoding complexity of the 2-step decoding is summarized in Table 5.

Table 5   Decoding complexity of 2-step decoding (number of operations per decoded information bit).
Modulo-2 addition:            ≤ J^2 + (I1 + I2)(J + 2)
Minimum weight searching:     J
Real number summation:        I1 J_s + I2 J
Real number sign changing:    ≤ (I1 + I2)(J + 1)
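The bounds in Table 5 are easy to evaluate for concrete parameters; the helper below is for illustration only, and the I1/I2 split used in the example call is hypothetical (the paper reports only total average numbers of iterations, not the split between the two steps).

```python
# Operation-count bounds of Table 5 as a function of (J, J_s, I1, I2)
# (helper for illustration only; the I1/I2 split below is a made-up example,
# not a value reported in the paper).

def complexity_bounds(J, Js, I1, I2):
    return {
        "modulo-2 additions":        J * J + (I1 + I2) * (J + 2),   # upper bound
        "minimum weight searches":   J,
        "real number summations":    I1 * Js + I2 * J,
        "real number sign changes":  (I1 + I2) * (J + 1),           # upper bound
    }

# e.g. J = 12, J_s = 6 with a hypothetical split I1 = 20, I2 = 42 iterations
print(complexity_bounds(12, 6, 20, 42))
```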


ference register’s bits only. Moreover, the MTD-DR uses the magnitude of received signals and the decoding decision is made by the modulo 2 addition, real number summation, and sign changing operation of real numbers, which are not complicated such as LLR calculations. Thus, the MTD-DR based decoding is a simpler and a less complex decoding scheme compared to the min-sum decoding algorithm. 7.

and Coding, McGraw-Hill Book Co., 1985. [13] S.J. Johnson, Iterative Error Correction: Turbo, Low-Density ParityCheck and Repeat-Accumulate Codes, First ed., Cambridge University Press, 2010.

Conclusion

This paper gives an improved version of MTD-DR. Performance improvement is achieved in both waterfall and error floor regions. The conventional CMTDF and the 2SD:TP2+CF achieve the approximate lower bound of ML decoding performance in the error floor region at higher Eb /N0 . It is ensured that the error floor occurs due to the number of orthogonal parity-check equations, i.e. the minimum Hamming distance, of the code. The 2-step decoding improves the coding gain of 0.40 dB for the shorter codes and of 0.55 dB for the longer codes compared to the conventional CMTDF in the waterfall region. The 2SD:TP2+CF with parity check decoding improves error floor with insignificant additional decoding complexity. By the effective use of parity check decoding, the 2SD:TP2+CF makes the BER 1/8 times compared to the ordinary use of PC decoding in the error floor region. References [1] J. Massey, Threshold Decoding, MIT Press, 1963. [2] V.V. Zolotarev and G.V. Ovechkin, “An effective algorithm of noiseproof coding for digital communication systems,” Electrosvaz, no.9, pp.34–36, 2003. [3] V.V. Zolotarev, “The multithreshold decoder performance in Gaussian channels,” 7th Int. Symp. on Commun. Theory and Applications (ISCTA’03), pp.18–22, July 2003. [4] M.A. Ullah, K. Okada, and H. Ogiwara, “Multi-stage threshold decoding for self-orthogonal convolutional codes,” IEICE Trans. Fundamentals, vol.E93-A, no.11, pp.1932–1941, Nov. 2010. [5] J.P. Robinson, “Error propagation and definite decoding of convolutional codes,” IEEE Trans. Inf. Theory, vol.IT-14, no.1, pp.121–128, Jan. 1968. [6] S. Lin and D.J. Costello, Jr., Error Control Coding: Fundamentals and Applications, chap.13, Prentice-Hall, Englewood Cliffs, N.J., 1983. [7] V.V. Zolotarev, G.V. Ovechkin, and S.V. Averin, “Algorithm of multithreshold decoding for self-orthogonal codes over Gaussian channels,” 10th Int. Symp. Commun. Theory and Applications, July 2009. [8] http://www.mtdbest.iki.rssi.ru [9] M.A. Ullah, R. Omura, T. Sato, and H. Ogiwara, “Multi-stage threshold decoding of high rate convolutional codes for optical communications,” 7th Advanced Int. Conf. on Telecommunications, St. Maarten, The Netherlands Antilles, March 2011. [10] C. Cardinal, D. Haccoun, and F. Gagnon, “Iterative threshold decoding without interleaving for convolutional self-doubly orthogonal codes,” IEEE Trans. Commun., vol.51, no.8, pp.1274–1282, Aug. 2003. [11] Y. Kou, S. Lin, and M.P.C. Fossorier, “Low-density parity-check codes based on finite geometries: A rediscovery and new results,” IEEE Trans. Inf. Theory, vol.47, no.7, pp.2711–2736, Nov. 2001. [12] A.J. Viterbi and J.K. Omura, Principles of Digital Communication

Muhammad Ahsan Ullah was born in Jamalpur, Bangladesh on March 1, 1979. He received the B.E. degree in Electrical and Electronic Engineering from Chittagong University of Engineering and Technology, Bangladesh in 2002 and the M.E. degree in Electronic and Information Engineering from Kyung Hee University, Republic of Korea in 2007. He has been pursuing a doctoral degree in Electrical Engineering at Nagaoka University of Technology since September 2008. He is interested in convolutional codes and multi-stage threshold decoding.

Haruo Ogiwara was born in Tochigi Prefecture, Japan on February 4, 1947. He received the B.E. and M.E. degrees in Control Engineering from Tokyo Institute of Technology in 1969 and 1971, respectively, and the Dr. Eng. degree in Electronics from Osaka University in 1984. From 1971 to 1986, he was a research engineer at the Electrical Communication Laboratories of Nippon Telegraph and Telephone Corporation, where he worked on research and development of hologram memories, an optical switching system, a digital subscriber loop, and communication theory. In 1986 he joined Nagaoka University of Technology, where he is now a Professor in the Department of Electrical Engineering. His current research interests include turbo codes, coded modulation, especially for non-Gaussian and fading channels, and adaptive equalization in digital mobile communication. He is a member of IEEE and SITA.