Low Density Lattice Codes for the Relay Channel

Nuwan S. Ferdinand, Matthew Nokleby, Behnaam Aazhang
Centre for Wireless Communications, University of Oulu, Finland
Rice University, Texas, USA
[email protected], {nokleby, aaz}@rice.edu

Abstract—We study practical, efficient codes for the Gaussian relay channel. It has been demonstrated that low-density lattice codes (LDLCs) can provide near-capacity performance for point-to-point Gaussian channels. We present an LDLC formulation that provides performance near the decode-and-forward inner bound of the relay channel capacity. We employ a superposition block Markov strategy tailored to LDLCs and design an appropriate iterative decoder. We characterize the error performance via simulations, showing that our scheme achieves performance only 2 dB away from the decode-and-forward bound.

I. INTRODUCTION

The relay channel has emerged as a promising cooperative modality for wireless communications. The relay facilitates communication between the source and destination, providing increased robustness, higher transmission efficiency, and/or larger coverage range. The relay channel was studied by Cover and El Gamal [2], who proposed the decode-and-forward (DF) encoding strategy, in which the relay decodes the entirety of the source's message in order to assist. While this approach has seen numerous applications [3], [4], it achieves capacity only in a few special cases. The bulk of these approaches is based on random Gaussian coding, which precludes practical implementation.

Lattice codes are the Euclidean-space analog of linear codes. It has been shown that lattice codes can achieve the capacity of AWGN channels [5]-[7]. Recently, the use of lattice codes in relay networks has received significant interest [13]-[17], and it was shown in [15], [17] that lattice codes can achieve the DF rates for the relay channel. However, these achievable schemes rely on asymptotic code lengths, which again precludes practical implementation.

Low-density lattice codes (LDLCs) [1] are a family of practical, low-complexity lattice codes inspired by low-density parity-check (LDPC) codes. In addition to having linear encoding and decoding complexity, LDLCs have been shown to approach the capacity of the AWGN channel. A few other practical lattice schemes have been proposed, such as multilevel LDPC codes [9] or non-binary LDPC codes [10]; however, LDLCs have become a viable solution because they use the same real algebra in both the encoder and the channel, which is natural for the continuous-valued AWGN channel [1]. Power shaping methods for LDLCs are proposed in [11], and efficient LDLC decoding is studied in [12].

In this work we propose practical lattice codes for the relay channel. Based on the scheme of [15], we construct an LDLC encoding based on superposition block Markov encoding. We develop a decode-and-forward style scheme and adapt the LDLC decoder of [11] to our approach. Simulation results indicate that our approach is particularly effective, achieving rates within 2 dB of the decode-and-forward inner bound using practical-length codes.

II. SYSTEM MODEL

We consider a three-terminal relay channel, as depicted in Fig. 1. The source transmits a message to both the relay and the destination, and in the next time slot the relay aids the destination by forwarding part of the previous slot's information. We assume a full-duplex relay, which can transmit and receive simultaneously. The source and the relay transmit messages x_S and x_R, respectively, and the relay and the destination receive

    y_R = h_SR √P1 x_S + z_R    (1)
    y_D = h_SD √P1 x_S + h_RD √P2 x_R + z_D    (2)

where P1 E[x_S^2] and P2 E[x_R^2] are the transmit powers at the source and the relay, and z_R ~ N(0, N_r) and z_D ~ N(0, N_d). Further, h_SR = d1^(-α1), h_SD = 1 and h_RD = d2^(-α2) are the path-loss channel gains from source to relay, source to destination and relay to destination, respectively. The source-to-destination distance is normalized to 1, d1 and d2 denote the source-to-relay and relay-to-destination distances, and α1 and α2 are the corresponding path-loss exponents.

The capacity of this channel is unknown in general; however, the decode-and-forward scheme proposed in [2] achieves the following inner bound:

    R ≤ min{ (1/2) log(1 + h_SR^2 P1 E[x_S^2] / N_r),
             (1/2) log(1 + (h_SD^2 P1 E[x_S^2] + h_RD^2 P2 E[x_R^2]) / N_d) }    (3)

This rate is achieved via block Markov encoding. In the sequel we adapt this approach to LDLCs to construct a practical coding scheme for the relay channel.

III. LOW DENSITY LATTICE CODES

A lattice Λ ⊂ R^n is a discrete additive subgroup of the Euclidean space R^n: for any two lattice points λ1, λ2 ∈ Λ, both λ1 + λ2 ∈ Λ and λ1 − λ2 ∈ Λ. An n-dimensional lattice Λ is defined by

Fig. 1. The relay channel: source, relay and destination, with channel gains h_SD (source to destination), h_SR (source to relay) and h_RD (relay to destination).

an n × n generator matrix G: the lattice consists of the discrete set of points x = (x1, x2, . . . , xn) ∈ R^n such that

    x = G b    (4)

where b is an n × 1 integer information vector. An LDLC is an n-dimensional lattice with a non-singular generator matrix G whose inverse H = G^(-1) is sparse. LDLCs are decoded by an iterative message-passing scheme over a bipartite graph, similar to LDPC codes.

To design practical lattice codes, the infinite lattice must be combined with a shaping algorithm that maps the information bits to lattice points such that the lattice codewords have constrained power. As in [11], we use codes based on a lower-triangular parity-check matrix H, which is convenient for encoding and decoding and enforces shaping with low complexity [11]. As an example, the following H has maximum degree 3 with n = 6:

    H = [  h1    0    0    0    0    0
            0   h1    0    0    0    0
            0    0   h1    0    0    0
           h2    0   h3   h1    0    0
          -h3   h2    0    0   h1    0
            0  -h3    0  -h2    0   h1 ]

We assume h1 = 1 for simplicity; the construction is easily generalized.

Selecting the integer vector b from a finite constellation does not guarantee that the lattice codeword lies in a shaping region. Hence, instead of mapping the integer vector b to the lattice point x = Gb, it is mapped to another lattice point x′ = Gb′ such that the new lattice point lies in the shaping region [11]. This mapping scheme is explained in later sections.

Since the generator G is not sparse, computing the lattice codeword x = Gb directly has complexity O(n^2). We therefore exploit the sparsity and lower-triangular structure of H to compute the lattice codeword x, starting from the first element of x and continuing to the last:

    x_i = b_i − Σ_{j=1}^{i−1} H_{i,j} x_j    (5)

where H_{i,j} is the (i, j)th element of H. This reduces the encoding complexity to O(n). One drawback of the lower-triangular structure is that the codeword components corresponding to the low-degree columns of H are less protected. To overcome this effect, we allocate less information to the less-protected integers by assigning them a smaller constellation.
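The recursion in (5) is just forward substitution on Hx = b, so that x = Gb with G = H^(-1). A minimal sketch follows; the values h2 = 0.8 and h3 = 0.5 are illustrative assumptions (the paper does not specify them), and the 6 × 6 matrix mirrors the example H above.

```python
import numpy as np

def ldlc_encode(H, b):
    """Encode integer vector b via (5): x_i = b_i - sum_{j<i} H_ij x_j,
    i.e. solve H x = b by forward substitution, so x = G b with G = H^-1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = b[i] - H[i, :i] @ x[:i]
    return x

# Toy 6x6 example matching the paper's structure, h1 = 1 (h2, h3 assumed).
h2, h3 = 0.8, 0.5
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [h2, 0, h3, 1, 0, 0],
              [-h3, h2, 0, 0, 1, 0],
              [0, -h3, 0, -h2, 0, 1]], dtype=float)
b = np.array([1, 0, 2, 1, 3, 0], dtype=float)
x = ldlc_encode(H, b)
assert np.allclose(H @ x, b)                  # x is the lattice point G b
assert np.allclose(x, np.linalg.solve(H, b))  # matches the dense O(n^2) solve
```

Each step touches only the non-zero entries of row i, which is where the O(n) complexity of the sparse lower-triangular encoder comes from.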

IV. RELAY NETWORK

Our strategy is an LDLC implementation of block Markov encoding. As in [15], this is accomplished by decomposing the lattice codebook into lower-rate sub-codebooks, as depicted in Fig. 2. Let b ∈ Z^n be the information vector whose ith element b_i is drawn from a finite constellation {0, . . . , L_i}, where L_i is the constellation size for the ith integer. We define the resolution component b_r by

    b_{ri} = { b_i,  i ∈ χ
               0,    i ∉ χ    (6)

where χ is a set of k indices chosen at random from {1, 2, . . . , n} without repetition.¹ Let b_v ∈ Z^n be the vestigial component vector, whose ith element is

    b_{vi} = { 0,    i ∈ χ
               b_i,  i ∉ χ    (7)

Then we define the resolution lattice codeword x_r and the vestigial lattice codeword x_v as

    x_r = G b_r    (8)
    x_v = G b_v    (9)

It is straightforward to verify that the original lattice codeword x = Gb is the sum of its resolution and vestigial components:

    x = Gb = G[b_r + b_v] = G b_r + G b_v = x_r + x_v    (10)

where the first equality follows from the definitions of b_r and b_v.

A. Power constraint for decomposition

The codewords x, x_r and x_v above have unconstrained power, since we have not yet enforced shaping on the lattice. Although the linear decomposition is straightforward in the unconstrained case, it is not trivial under a power constraint. We propose the following method to enforce the power constraint.

First, we consider hypercube shaping, in which the elements of the lattice codeword are uniformly distributed over a finite interval, so the power constraint of the lattice code is preserved. To obtain hypercube shaping, we first map the information integer vector b to b′ such that the ith element of b′ is

    b′_i = b_i − L_i s_i    (11)

For hypercube shaping, s_i is given by [11]

    s_i = ⌊ (1/L_i) ( b_i − Σ_{j=1}^{i−1} H_{i,j} x′_j ) ⌉    (12)

where ⌊·⌉ denotes rounding to the nearest integer.

¹Alternatively, we could choose χ = {1, · · · , k}. However, in that case the lattice codeword generated from b_v would contain zeros in its first k elements, due to the lower-triangular structure of H.


Then the ith element of the lattice codeword x′ is given by

    x′_i = b′_i − Σ_{j=1}^{i−1} H_{i,j} x′_j    (13)
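The shaping steps (11)-(13) can be computed jointly in a single forward pass, since s_i and x′_i depend only on earlier codeword elements. A minimal sketch; the matrix values h2, h3 and the constellation sizes L_i are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hypercube_shape(H, b, L):
    """Hypercube shaping per (11)-(13): for lower-triangular H with unit
    diagonal, compute s_i and the shaped codeword x'_i so |x'_i| <= L_i/2."""
    n = len(b)
    x = np.zeros(n)
    s = np.zeros(n, dtype=int)
    for i in range(n):
        acc = b[i] - H[i, :i] @ x[:i]      # b_i - sum_{j<i} H_ij x'_j
        s[i] = int(np.round(acc / L[i]))   # s_i in (12)
        x[i] = acc - L[i] * s[i]           # x'_i in (13), with b'_i = b_i - L_i s_i
    return x, s

# Toy 6x6 H with the paper's structure, h1 = 1 (h2, h3 assumed).
h2, h3 = 0.8, 0.5
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [h2, 0, h3, 1, 0, 0],
              [-h3, h2, 0, 0, 1, 0],
              [0, -h3, 0, -h2, 0, 1]], dtype=float)
L = np.full(6, 4)                 # constellation sizes L_i (assumed)
b = np.array([3, 1, 2, 0, 3, 2])  # integers from {0, ..., L_i - 1}
x, s = hypercube_shape(H, b, L)
assert np.all(np.abs(x) <= L / 2 + 1e-9)                    # codeword in the hypercube
assert np.array_equal(np.round(H @ x).astype(int) % L, b)   # b recoverable mod L_i
```

The second assertion previews the modulo recovery used at the destination: H x′ = b − Ls, and reducing modulo L_i returns the original integers.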

Now we decompose the original integer information vector b as in (6) and (7), and map the resolution component b_r to b′_r such that the new integer vector yields a power-constrained codeword:

    b′_{ri} = b_{ri} − L_i s_{ri} = { b_i − L_i s_{ri},  i ∈ χ
                                      −L_i s_{ri},       i ∉ χ    (14)

where b_{ri} and b′_{ri} are the ith elements of b_r and b′_r, respectively. For hypercube shaping, s_{ri} can be written as

    s_{ri} = { ⌊ (1/L_i) ( b_i − Σ_{j=1}^{i−1} H_{i,j} x′_{rj} ) ⌉,  i ∈ χ
               ⌊ −(1/L_i) Σ_{j=1}^{i−1} H_{i,j} x′_{rj} ⌉,           i ∉ χ    (15)

Then the ith element of the mapped lattice codeword x′_r is given by

    x′_{ri} = b′_{ri} − Σ_{j=1}^{i−1} H_{i,j} x′_{rj}    (16)
            = { b_i − L_i s_{ri} − Σ_{j=1}^{i−1} H_{i,j} x′_{rj},  i ∈ χ
                −L_i s_{ri} − Σ_{j=1}^{i−1} H_{i,j} x′_{rj},       i ∉ χ

To preserve the linearity of the lattice decomposition, we map the vestigial information integer vector b_v to b′_v such that the ith element of b′_v is given by

    b′_{vi} = b′_i − b′_{ri} = b_{vi} − L_i (s_i − s_{ri}) = { −L_i (s_i − s_{ri}),        i ∈ χ
                                                               b_i − L_i (s_i − s_{ri}),  i ∉ χ    (17)

where s_i and s_{ri} are given in (12) and (15), respectively. Then the ith element of the vestigial codeword x′_v can be written as

    x′_{vi} = b′_{vi} − Σ_{j=1}^{i−1} H_{i,j} x′_{vj}    (18)
            = { −L_i (s_i − s_{ri}) − Σ_{j=1}^{i−1} H_{i,j} x′_{vj},       i ∈ χ
                b_i − L_i (s_i − s_{ri}) − Σ_{j=1}^{i−1} H_{i,j} x′_{vj},  i ∉ χ

Fig. 2. Lattice subspace decomposition. The full (x), vestigial (x_v) and resolution (x_r) lattices are shown in the respective subfigures. We used G = H^(-1) with H = (1 0; 0.5 0.5) for the full lattice; the first and second columns of G are used for the resolution and vestigial lattices, respectively. Shaping regions are shown shaded.

From (11), (14) and (17), we observe that the original information integers b_i, b_{ri} and b_{vi} can be recovered from b′_i, b′_{ri} and b′_{vi} by a modulo-L_i operation. Note that the lattice codeword x and the resolution lattice codeword x_r obey the power constraint; for hypercube shaping, the magnitude of each element of x and x_r is at most L_i/2 (i.e., |x_i|, |x_{ri}| ≤ L_i/2 for all i). The vestigial lattice codeword x_v, however, does not obey the power constraint in general.² This is not a problem for the considered relay network, as explained in a later section.

Although hypercube shaping has low complexity, a further shaping gain of up to 1.53 dB [19] can be obtained by hypersphere shaping, in other words nested lattice shaping. Since exact mapping to the hypersphere domain is complex, we use the approximation method proposed in [11]: we find the vectors s and s_r such that

    s* = arg min_{s ∈ Z^n} ||x′||²    (19)
    s*_r = arg min_{s_r ∈ Z^n} ||x′_r||²    (20)

where x′ = G(b − Ls) and x′_r = G(b_r − Ls_r). We use the triangularity of the parity-check matrix H to find a suboptimal solution via the M-algorithm [18]: starting from the first row of H, we proceed sequentially down a tree search, keeping up to M candidate sequences [11], [18]. After finding s* and s*_r, the vestigial codeword is obtained as x′_v = G[b_v − L(s* − s*_r)].

B. Encoding

The source transmits its signal to both the relay and the destination, and the relay also transmits its own signal to the destination; the destination therefore receives the superposition of the source and relay signals. Our encoding scheme is block Markov, meaning that we encode the message over T + 1 blocks of n symbol times each. We first show the encoding for the first two blocks, after which we generalize to the remaining blocks.

²As an example, it can be seen in Fig. 2 that x_v needs a larger shaping area in order to obtain the full lattice codebook from the sum of the x_r and x_v codebooks. Hence, x_v does not meet the same power constraint as x and x_r.

TABLE I
LDLC BLOCK MARKOV ENCODING

            t = 1     t = 2       . . .    t = T + 1
    Source  x′(1)     x′(2)       . . .    −
    Relay   −         x̂′_r(1)     . . .    x̂′_r(T)

Let x′(i) be the lattice codeword associated with the message of the ith block, as given in (13). During the first block, the source transmits x_S(1) = √P1 x′(1). Suppose the decoded resolution lattice codeword at the relay is x̂′_r(1). Then, during the second block, the relay transmits the previously decoded lattice codeword x_R(2) = √P2 x̂′_r(1) to the destination. Simultaneously, the source transmits the new lattice codeword x_S(2) = √P1 x′(2) to the relay and destination.³ The encoding continues in a similar manner for subsequent blocks; during the ith block the source and the relay transmit

    x_S(i) = √P1 x′(i)    (21)
    x_R(i) = √P2 x̂′_r(i − 1)    (22)

In the last block the source has no fresh information to send, so the destination receives only the resolution information from the relay. The source transmits nTR information symbols over n(T + 1) channel uses, so the encoding rate approaches R for large T. The block Markov encoding scheme is summarized in Table I. Observe that the vestigial information is never transmitted alone; hence, a power constraint on the vestigial codeword is not needed.

C. Decoding

Decoding occurs in three stages: first, the relay decodes x′; next, the destination decodes x′_r; finally, the destination uses x′_r to decode x′_v.

We first focus on decoding at the relay. The received signal at the relay in the ith block is

    y_R(i) = h_SR √P1 x′(i) + z_R(i)    (23)

The relay performs iterative LDLC decoding to obtain the decoded resolution codeword

    x̂′_r(i) = LDLCdecoder1( y_R(i) / (√P1 h_SR) )    (24)

where LDLCdecoder1 is the iterative LDLC algorithm described in the next section. Let e_r(i) be the decoding error at the relay:

    e_r(i) = x′_r(i) − x̂′_r(i)    (25)

The received signal at the destination in block (i + 1) can be written as

    y_D(i + 1) = h_SD x_S(i + 1) + h_RD x_R(i + 1) + z_D(i + 1)    (26)

Substituting (21) and (22) into (26), we obtain

    y_D(i + 1) = h_SD √P1 x′(i + 1) + h_RD √P2 x̂′_r(i) + z_D(i + 1)    (27)

We rewrite the received signal y_D(i + 1) as

    y_D(i + 1) = h_RD √P2 x′_r(i) + h_SD √P1 x′(i + 1) − h_RD √P2 e_r(i) + z_D(i + 1)    (28)

where e_r(i) is given in (25). Treating h_SD √P1 x′(i + 1) − h_RD √P2 e_r(i) + z_D(i + 1) as Gaussian noise, the destination decodes

    x̃′_r(i) = LDLCdecoder2( y_D(i + 1) / (√P2 h_RD) )    (29)

where LDLCdecoder2, as we show in Section V, exploits the resolution information to help decode the desired codeword. The decoding error of the resolution information at the destination is

    e_{d1}(i) = x′_r(i) − x̃′_r(i)    (30)

D. Decoding vestigial information

The destination now knows x̃′_r(i) from y_D(i + 1) and x̃′_r(i − 1) from y_D(i). The received signal in the ith block is

    y_D(i) = h_SD √P1 x′(i) + h_RD √P2 [x′_r(i − 1) − e_r(i − 1)] + z_D(i)    (31)

Using the linearity property (10), we rewrite (31) as

    y_D(i) = h_SD √P1 [x′_v(i) + x′_r(i)] + h_RD √P2 [x′_r(i − 1) − e_r(i − 1)] + z_D(i)    (32)

Subtracting the decoded resolution information, we obtain

    y_D(i) = h_SD √P1 x′_v(i) + e_{d2}(i) + z_D(i)    (33)

where

    e_{d2}(i) = h_SD √P1 e_{d1}(i) + h_RD √P2 [e_{d1}(i − 1) − e_r(i − 1)]    (34)

We then use y_D(i) in (33) to decode the vestigial information:

    x̃′_v(i) = LDLCdecoder2( y_D(i) / (√P1 h_SD) )    (35)

Once both the resolution and vestigial lattice codewords are decoded, the destination forms the desired lattice codeword

    x̃′ = x̃′_r + x̃′_v    (36)

The integer information vector can then be recovered as b̃′ = ⌊H x̃′⌉, and the desired information integers are obtained by the modulo operation b̃_i = b̃′_i mod L_i. An error occurs when the destination fails to decode the information vector correctly; we say a symbol error occurs at position i when

    b̃_i ≠ b_i,  i = 1, . . . , n    (37)

³This encoding scheme differs from the block Markov scheme used in [15], in which the source transmits the first block's resolution information in the second block, so that a single error at either the relay or the destination results in error propagation.
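As a sanity check of the recovery rule b̃′ = ⌊H x̃′⌉, b̃_i = b̃′_i mod L_i, the following sketch encodes with hypercube shaping and recovers the integers from a perfect codeword estimate (the matrix values and constellation sizes are assumptions for illustration):

```python
import numpy as np

# Integer recovery at the destination per (36)-(37): given a correct
# codeword estimate x', b' = round(H x') and b = b' mod L_i recover the
# information integers. Matrix values h2, h3 and L_i are assumed.
h2, h3 = 0.8, 0.5
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [h2, 0, h3, 1, 0, 0],
              [-h3, h2, 0, 0, 1, 0],
              [0, -h3, 0, -h2, 0, 1]], dtype=float)
L = np.full(6, 4)
b = np.array([3, 1, 0, 2, 3, 1])

x = np.zeros(6)
for i in range(6):                  # hypercube-shaped encoding, (11)-(13)
    acc = b[i] - H[i, :i] @ x[:i]
    x[i] = acc - L[i] * np.round(acc / L[i])

b_tilde = np.round(H @ x).astype(int) % L   # b~' = round(H x~'), then mod L_i
assert np.array_equal(b_tilde, b)           # information integers recovered
```

The modulo step works because H x′ = b − Ls, so reducing each component modulo L_i removes the shaping offset L_i s_i.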

The probability of symbol error is obtained by averaging the number of errors over the block length n and repeating over T blocks.

V. LOW-COMPLEXITY LDLC DECODER ALGORITHM

We adapt the iterative LDLC decoder proposed in [1], [12], which has linear complexity, to our block Markov scheme. In analogy to LDPC decoding, the LDLC decoder is a message-passing scheme over a bipartite graph; the difference is that in LDPC decoding the messages are scalar values, whereas in LDLC decoding they are real functions over the real line. The decoder has two phases: message passing to the check nodes, which represent the parity-check equations (the rows of the parity-check matrix H), and message passing to the variable nodes, which represent the received codeword. Variable nodes send probability density functions (pdfs), and check nodes send periodic extensions of pdfs. The ith check node convolves all the pdfs received from the variable nodes except the jth, stretches the result by (−H_{i,j}), periodically extends it with period 1/|H_{i,j}|, and sends it to the jth variable node. The jth variable node multiplies all the periodically extended pdfs received from the check nodes except the ith together with the original received pdf, normalizes the result, and sends it to the ith check node. After the final iteration, the jth variable node multiplies all received pdfs with the original pdf to obtain the final pdf of the decoded codeword, and the jth element of the estimated codeword is taken as the peak of this final pdf.

We use the efficient parametric LDLC decoder of [12] for LDLCdecoder1. For LDLCdecoder2, however, we make some changes to exploit the information known at the decoder, as follows. When decoding the resolution and vestigial information, many of the integer elements are zero; we adapt the decoder to exploit this information and improve performance.
However, once shaping is performed to constrain the power, these integers are not necessarily zero. Fortunately, both the encoder and the decoder⁴ know the locations of the zero elements in the resolution and vestigial information. While the decoder does not know the exact values of these integers, it is evident from (14) and (17) that they are multiples of L_i. We exploit this information in the check-node equation at the known zero locations of the resolution or vestigial information:

    x′_i = L_i s_i − Σ_{j=1, j≠i}^{m_i} H_{i,j} x′_j    (38)

where m_i is the number of non-zero elements in the ith row of H. Hence, at the periodic-extension step, the decoder extends the pdfs only at the integers that are multiples of L_i, which results in better decoding performance.

⁴We fix the decomposition (the locations of the zero elements in the resolution and vestigial information vectors), which is known globally.
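The property exploited by (38), that after shaping the integers at the known zero locations are multiples of L_i per (14), can be verified numerically. A short sketch; the matrix values, constellation sizes and index set χ are illustrative assumptions.

```python
import numpy as np

# After shaping the resolution component per (14)-(16), the shaped
# integers b'_ri at the known zero locations (i not in chi) equal
# -L_i s_ri, i.e. multiples of L_i. Toy values are assumed.
h2, h3 = 0.8, 0.5
H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [h2, 0, h3, 1, 0, 0],
              [-h3, h2, 0, 0, 1, 0],
              [0, -h3, 0, -h2, 0, 1]], dtype=float)
L = np.full(6, 4)
b = np.array([3, 1, 0, 2, 3, 1])
chi = np.array([0, 2, 4])                    # resolution locations (assumed)

br = np.zeros(6, dtype=int); br[chi] = b[chi]
b_shaped = np.zeros(6, dtype=int)
xr = np.zeros(6)
for i in range(6):                           # shaping per (14)-(16)
    acc = br[i] - H[i, :i] @ xr[:i]
    s = int(np.round(acc / L[i]))            # s_ri in (15)
    b_shaped[i] = br[i] - L[i] * s           # b'_ri in (14)
    xr[i] = acc - L[i] * s                   # x'_ri in (16)

off = np.setdiff1d(np.arange(6), chi)
assert np.all(b_shaped[off] % L[off] == 0)   # multiples of L_i off chi
assert np.all(np.abs(xr) <= L / 2 + 1e-9)    # shaped power constraint
```

This is exactly the side information the modified check-node rule uses: at the known zero locations, the periodic extension need only place replicas at multiples of L_i.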

TABLE II
AVERAGE POWER VARIATION (dB)

                                  n = 100   n = 1000   n = 10000
    Hypercube                     7.2673    7.2673     7.2673
    Hypercube (resolution)        5.3656    6.1384     6.2325
    Nested lattice                6.6558    6.7210     6.9897
    Nested lattice (resolution)   4.8144    5.4033     5.7978

TABLE III
VARIATION OF ROW DEGREE

    degree   n = 100    n = 1000     n = 10000
    1        0 - 5      0 - 10       0 - 50
    2        6 - 10     11 - 20      51 - 150
    3        11 - 20    21 - 50      151 - 250
    4        21 - 35    51 - 100     251 - 350
    5        36 - 100   101 - 150    351 - 500
    6        -          151 - 200    501 - 1000
    7        -          200 - 1000   1000 - 10000

VI. NUMERICAL RESULTS

In this section we provide a numerical analysis of LDLCs for the relay channel. For the SER curves, we assume that 50% of the information integers are zero in the resolution and vestigial information [i.e., k = n/2 in (6)], without loss of generality. Note that when performing hypercube shaping, all elements of the full lattice codeword are uniformly distributed over (−L/2, L/2); hence the average power of x_i is E[x_i^2] = L^2/12. For the resolution and vestigial codewords, the average power is less than L^2/12, since these information vectors contain more zeros. For numerical illustration we use code lengths n = 100, 1000 and 10000. We perform both hypercube shaping (M = 1) and nested lattice shaping, fixing M = 21 for the tree search in the nested lattice case. The average power variation in dB for the different cases is given in Table II. Since we use a lower-triangular parity-check matrix, each row has a different degree; Table III shows the row degrees used in the simulation. Moreover, as the integers related to the higher rows of the lower-triangular parity-check matrix are less protected, as discussed earlier, we use different integer constellations, given in Table IV.

The SER performance of the LDLCs is plotted against the sum power at the source and the relay (i.e., P1 E[x_S^2] + P2 E[x_R^2]) in Fig. 3. In the simulations we use d1 = 0.9, d2 = 0.1, α1 = 1, α2 = 2, and AWGN noise variances N_r = N_d = 1 without loss of generality. For n = 10000 the rate is R = (9500 · 3 + 350 · 2 + 150 · 1)/10000 = 2.935; to achieve this rate, the power needed at the source and relay is P1 E[x_S^2] + P2 E[x_R^2] = 17.1380 dB for the considered network scenario, according to the bound in (3). The SER for n = 10000 at 2 × 10^-5 is only ∼2 dB away from the DF inner bound. For n = 100, we used P1 = [10.8, 14.2, 20.1, 27.1, 30.4, 37] and corresponding P2 = 10^-2 × [6.3, 10.9, 21.7, 39.5, 49.5, 73.4]

One can also notice that n = 1000 with the relay outperforms n = 10000 without the relay.

Fig. 4. Comparison of SER with and without the relay, for n = 100, 1000 and 10000 (symbol error rate versus the sum of transmit powers in dB).

Fig. 3. Symbol error rate for the relay channel: hypercube and nested lattice shaping for n = 100, 1000 and 10000, together with the DF bound (symbol error rate versus the sum of transmit powers in dB).

(only several points are given here); for n = 1000, P1 = [10.8, 14.4, 15.2, 16.7, 18.7, 19.7, 22] and corresponding P2 = 10^-2 × [6.3, 11.2, 12.5, 15, 18.9, 20.9, 26] are used. Finally, for n = 10000, P1 = [10.8, 13.9, 15.1, 16] and corresponding P2 = 10^-2 × [6.3, 10.4, 12.4, 13.9]. Both hypercube and nested lattice shaping (M = 21) are plotted, and one can clearly notice that nested lattice shaping has approximately a 0.5 dB gain over hypercube shaping. Fig. 4 shows the comparison of SER performance with and without the relay. It is clearly evident that the presence of the relay enhances performance; in fact, the improvement grows with n. For n = 10000, the relay yields a 0.9 dB performance improvement.

TABLE IV
CONSTELLATION SIZE VARIATION

    Constellation size    n = 100     n = 1000      n = 10000
    8                     0 - 65      0 - 800       0 - 9500
    4                     66 - 85     801 - 980     9501 - 9850
    2                     86 - 100    981 - 1000    9851 - 10000

VII. CONCLUSION

We have proposed a low-complexity encoder/decoder design for decode-and-forward relaying using low density lattice codes. We have studied the symbol error rate performance of our system, and it is clearly evident that employing a relay enhances performance. We have used low-complexity hypercube shaping and nested lattice shaping, and SER performance is given for both cases. Our encoding/decoding scheme, combining superposition, block Markov encoding and LDLCs, is a viable solution for implementing relay networks with lattice codes. As future work, these methods can be applied to other relaying schemes, such as compress-and-forward and compute-and-forward, and to multi-user networks.

REFERENCES

[1] N. Sommer, M. Feder and O. Shalvi, "Low density lattice codes," IEEE Trans. Inf. Theory, vol. 54, pp. 1561-1585, Apr. 2008.
[2] T. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inf. Theory, vol. 25, no. 5, pp. 572-584, Sept. 1979.
[3] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity, part I: System description," IEEE Trans. Commun., vol. 51, no. 11, pp. 1927-1938, Nov. 2003.
[4] G. Kramer, M. Gastpar, and P. Gupta, "Cooperative strategies and capacity theorems for relay networks," IEEE Trans. Inf. Theory, vol. 51, no. 9, pp. 3037-3063, Sept. 2005.
[5] R. de Buda, "The upper error bound of a new near-optimal code," IEEE Trans. Inf. Theory, vol. IT-21, pp. 441-445, Jul. 1975.
[6] R. Urbanke and B. Rimoldi, "Lattice codes can achieve capacity on the AWGN channel," IEEE Trans. Inf. Theory, pp. 273-278, Jan. 1998.
[7] U. Erez and R. Zamir, "Achieving 1/2 log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inf. Theory, vol. 50, pp. 2293-2314, Oct. 2004.
[8] A. R. Calderbank and N. J. A. Sloane, "New trellis codes based on lattices and cosets," IEEE Trans. Inf. Theory, vol. IT-33, pp. 177-195, Mar. 1987.
[9] J. Hou, P. H. Siegel, L. B. Milstein, and H. D. Pfister, "Capacity-approaching bandwidth-efficient coded modulation schemes based on low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 49, pp. 2141-2155, Sep. 2003.
[10] A. Bennatan and D. Burshtein, "Design and analysis of nonbinary LDPC codes for arbitrary discrete-memoryless channels," IEEE Trans. Inf. Theory, vol. 52, pp. 549-583, Feb. 2006.
[11] N. Sommer, M. Feder, and O. Shalvi, "Shaping methods for low-density lattice codes," in Proc. IEEE Inf. Theory Workshop, pp. 238-242, 2009.
[12] Y. Yona and M. Feder, "Efficient parametric decoder of low density lattice codes," in Proc. IEEE Int. Symp. Inf. Theory, Seoul, Korea, June 2009.
[13] B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6463-6486, Oct. 2011.
[14] W. Nam, S.-Y. Chung, and Y. H. Lee, "Nested lattice codes for Gaussian relay networks with interference," submitted to IEEE Trans. Inf. Theory, Feb. 2009. [Online]. Available: http://arxiv.org/PScache/arxiv/pdf/0902/0902.2436v1.pdf
[15] M. Nokleby and B. Aazhang, "Lattice coding over the relay channel," in Proc. IEEE Int. Conf. Commun., Kyoto, Japan, June 2011.
[16] A. Özgür and S. Diggavi, "Approximately achieving Gaussian relay network capacity with lattice codes." [Online]. Available: http://arxiv.org/pdf/1005.1284v2.pdf
[17] Y. Song and N. Devroye, "Lattice codes for the Gaussian relay channel: Decode-and-forward and compress-and-forward." [Online]. Available: http://arxiv.org/pdf/1111.0084v1.pdf
[18] T. Aulin, "Breadth-first maximum likelihood sequence detection: Basics," IEEE Trans. Commun., vol. 47, no. 2, pp. 208-216, Feb. 1999.
[19] G. D. Forney Jr. and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Trans. Inf. Theory, pp. 2384-2415, Oct. 1998.