Blind Compute-and-Forward

Chen Feng (1), Danilo Silva (2), Frank R. Kschischang (1)

(1) Department of Electrical and Computer Engineering, University of Toronto, Canada
(2) Department of Electrical Engineering, Federal University of Santa Catarina (UFSC), Brazil

IEEE International Symposium on Information Theory, Cambridge, MA, July 2, 2012
Compute-and-Forward: Integer Channel Gains

Wireless channel: y = h_1 x_1 + h_2 x_2 + z, with (h_1, h_2) = (2, 1), so y = 2x_1 + x_2 + z.

- Transmitters: messages w_1, w_2 ⇒ constellation points x_1, x_2
- Receiver: decodes the integer combination 2x_1 + x_2
- Receiver: integer combination of constellation points ⇒ linear combination of messages
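As a minimal worked example of the last step (an assumed toy setup, not taken from the talk): take the scalar lattice partition Λ/Λ' = Z/3Z, messages w_ℓ ∈ Z_3, and coset encoding x_ℓ ≡ w_ℓ (mod 3). Reducing the decoded integer combination modulo Λ' then yields a linear combination of the messages:

```latex
% Toy example (assumed setup): \Lambda = \mathbb{Z}, \Lambda' = 3\mathbb{Z},
% messages w_\ell \in \mathbb{Z}_3, and x_\ell \equiv w_\ell \pmod{3}.
\begin{align*}
  2x_1 + x_2 &\equiv 2w_1 + w_2 \pmod{3},\\
  \text{e.g., } w_1 = 2,\ w_2 = 1:\quad 2\cdot 2 + 1 = 5 &\equiv 2 \pmod{3}.
\end{align*}
```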
Compute-and-Forward: Real-Valued Channel Gains

When the channel gains are real numbers, apply a scaling operation g(y) = αy:

  αy = Σ_ℓ αh_ℓ x_ℓ + αz
     = Σ_ℓ a_ℓ x_ℓ + Σ_ℓ (αh_ℓ − a_ℓ) x_ℓ + αz
     = Σ_ℓ a_ℓ x_ℓ + n,

where the {a_ℓ} are integers, α ∈ R is the scalar, and n = Σ_ℓ (αh_ℓ − a_ℓ) x_ℓ + αz is the effective noise.

Thus, real-valued channel gains ⇒ integer channel gains.

Intuition: choose the scalar α and the coefficients {a_ℓ} to "minimize" the effective noise n.
Compute-and-Forward: Real-Valued Channel Gains

Maximizing the computation rate (Nazer-Gastpar): for any given scalar α and coefficients {a_ℓ}, the computation rate is

  R(α, a | h) = (1/2) log⁺( SNR / (α² + SNR ‖αh − a‖²) ).

Thus, an optimal (α, a) minimizes α² + SNR ‖αh − a‖².

Minimizing the error probability (FSK11): for any given scalar α and coefficients {a_ℓ}, the union bound under hypercube shaping is

  P_e(α, a | h) ≲ κ(Λ/Λ') exp( − d²(Λ/Λ') / (4 N_0 (α² + SNR ‖αh − a‖²)) ).

Hence, an optimal (α, a) again minimizes α² + SNR ‖αh − a‖².
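To make the optimization concrete, here is a small brute-force sketch (an illustration, not the search used in the talk): for each fixed coefficient vector a, the metric α² + SNR‖αh − a‖² is quadratic in α, so the minimizing α has the closed form α = SNR hᵀa / (1 + SNR‖h‖²); the sketch then scans small integer vectors a exhaustively.

```python
# Illustrative brute-force search for (alpha, a) minimizing
# f(alpha, a) = alpha^2 + SNR * ||alpha*h - a||^2   (real-valued sketch).
# Assumptions: real channel vector h, small integer coefficients; this is a
# toy search, not the algorithm from the talk.
import itertools
import numpy as np

def best_alpha(h, a, snr):
    # For fixed a, f is quadratic in alpha; this is its (MMSE-style) minimizer.
    return snr * np.dot(h, a) / (1.0 + snr * np.dot(h, h))

def optimize_coefficients(h, snr, max_coeff=3):
    best = (None, None, np.inf)
    coeff_range = range(-max_coeff, max_coeff + 1)
    for a in itertools.product(coeff_range, repeat=len(h)):
        a = np.array(a, dtype=float)
        if not a.any():                       # skip the all-zero coefficient vector
            continue
        alpha = best_alpha(h, a, snr)
        f = alpha**2 + snr * np.sum((alpha * h - a) ** 2)
        if f < best[2]:
            best = (alpha, a, f)
    return best   # (alpha, a, metric); rate = 0.5 * log2(max(snr / metric, 1))

# Example: alpha, a, f = optimize_coefficients(np.array([1.3, 0.6]), snr=100.0)
```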
Summary of Compute-and-Forward

[Block diagram: messages w_1, ..., w_L ∈ W are encoded into codewords x_1, ..., x_L ∈ C^n; the Gaussian MAC with gains (h_1, ..., h_L) outputs y = Σ_ℓ h_ℓ x_ℓ + z; the receiver scales y by α, lattice-decodes with D_Λ to obtain an estimate of λ = Σ_ℓ a_ℓ x_ℓ with a = (a_1, ..., a_L), and maps it to û, an estimate of u = Σ_ℓ a_ℓ w_ℓ.]

Encoding: all transmitters use the same lattice partition Λ/Λ'.

Decoding:
1. Find an optimal (α, a) based on h and the SNR.
2. Decode the scaled signal αy.
3. Map the integer combination to a linear combination.
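The three decoding steps can be sketched end to end for an assumed toy construction (Λ = Z^n, Λ' = qZ^n, messages in Z_q^n, and x_ℓ ≡ w_ℓ (mod q)); this is illustrative only and not the lattice code of the talk. It reuses optimize_coefficients from the sketch above.

```python
# Coherent compute-and-forward receiver (sketch) under an assumed toy
# construction: Lambda = Z^n, Lambda' = q*Z^n, messages in Z_q^n, and
# x_l congruent to w_l (mod q). Not the lattice code used in the talk.
import numpy as np

def cf_receive(y, h, snr, q=3, max_coeff=2):
    # Step 1: find (alpha, a) minimizing alpha^2 + SNR * ||alpha*h - a||^2
    alpha, a, _ = optimize_coefficients(h, snr, max_coeff)   # sketch above
    # Step 2: decode the scaled signal, i.e., the nearest point of Z^n
    lattice_point = np.round(alpha * y)
    # Step 3: map the integer combination to a linear combination of
    # messages by reducing modulo Lambda': u_hat estimates sum_l a_l*w_l (mod q)
    u_hat = np.mod(lattice_point, q).astype(int)
    return u_hat, np.mod(a.astype(int), q)
```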
Topic of This Talk

Question: What if the channel gains are not available at the receiver?

Two conventional approaches:
- Channel training: training symbols plus data symbols; high overhead in general.
- Joint channel estimation and decoding: based on a maximum-likelihood criterion; high complexity in general.

But what is special about compute-and-forward?
- Only α is essential for decoding (not the channel gains themselves).
- This creates an opportunity for heuristic methods.
Key Idea 1: From Optimal Scalars to Good Scalars

Optimal scalars: A scalar α is optimal if there exists some nonzero a such that R(α, a | h) ≥ R(β, b | h) for all β and all nonzero b. Thus, optimal scalars maximize the computation rate.

Good scalars: A scalar α is good if the decoded lattice point is correct, i.e., D_Λ(αy) ∈ Σ_ℓ a_ℓ x_ℓ + Λ' for some nonzero a = (a_1, ..., a_L). Thus, good scalars ensure successful decoding.
An Illustration for Asymptotically-Good Lattice Partitions

[Figure: the region of good scalars α in the complex plane, axes from −3 to 3.]

Setup: h_1 = −0.93 + 0.65i, h_2 = −0.04i, SNR = 20 dB.
An Illustration for a Lattice Partition Z[i]^400 / 3Z[i]^400

Setup: h_1 = 0.28 − 0.60i, h_2 = −0.26 + 0.56i, SNR = 35 dB.
Properties of Good Regions

Three properties:
- bounded
- symmetric
- a union of disks, if the lattice partition is asymptotically good

Implications:
- it suffices to consider a bounded region in the first quadrant
- probe a discrete set of points within it
Key Idea 2: Error Detection Codes

Recall that a scalar α is good if the decoded lattice point is correct, i.e., D_Λ(αy) ∈ Σ_ℓ a_ℓ x_ℓ + Λ' for some nonzero a = (a_1, ..., a_L).

Thus:
- embed a linear error-detection code C into the message space W
- each valid message in W is a codeword of C
- if α is good ⇒ decoding is correct ⇒ the receiver obtains a linear combination of messages ⇒ this combination is again a codeword of C ⇒ it passes the error detection

Therefore, passing the error detection is a necessary condition for α to be good.
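As a concrete (assumed) instance of such a detection code, here is a sketch of a k × k product code over a prime field Z_p in which every row and every column satisfies a single parity check; since the check is linear, any Z_p-linear combination of valid messages again passes it. The field and parameters are illustrative, and the talk's 20 × 20 product code may differ in detail.

```python
# Sketch of a linear error-detection check: a k x k product code over Z_p in
# which every row and every column sums to 0 (mod p). Because the check is
# linear, any Z_p-linear combination of valid messages also passes it, which
# is the property blind compute-and-forward relies on.
# (Illustrative construction; not necessarily the code used in the talk.)
import numpy as np

def passes_detection(msg, p):
    """msg: k x k integer array over Z_p; True iff all row/column checks hold."""
    msg = np.mod(msg, p)
    row_ok = np.all(np.mod(msg.sum(axis=1), p) == 0)
    col_ok = np.all(np.mod(msg.sum(axis=0), p) == 0)
    return bool(row_ok and col_ok)

def encode(data, p):
    """Append one parity row and one parity column to a (k-1) x (k-1) data block."""
    data = np.mod(data, p)
    col_par = np.mod(-data.sum(axis=0), p)        # makes each column sum to 0
    with_row = np.vstack([data, col_par])
    row_par = np.mod(-with_row.sum(axis=1), p)    # makes each row sum to 0
    return np.column_stack([with_row, row_par])
```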
A Heuristic Baseline Scheme

- Key idea 1 ⇒ probe a discrete set of points in the first quadrant.
- Key idea 2 ⇒ use error detection to guess good scalars.
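A minimal sketch of the baseline loop, assuming the grid described on the next slide (a uniform grid over [0, log10(SNR)]² in the first quadrant) and the detection check sketched earlier; the lattice_decode_to_message helper is a hypothetical stand-in for the receiver's lattice decoder.

```python
# Baseline blind scheme (sketch): probe candidate scalars alpha on a grid in
# the first quadrant; for each candidate, lattice-decode alpha*y and accept
# the first result that passes the error-detection check.
# lattice_decode_to_message is an assumed helper; grid parameters follow the
# heuristic setup on the performance slides.
import numpy as np

def blind_decode(y, snr, grid_size=16, p=3):
    limit = np.log10(snr)                          # bounded region [0, limit]^2
    steps = np.linspace(0.0, limit, grid_size)
    for re in steps:
        for im in steps:
            alpha = re + 1j * im                   # candidate scalar
            candidate = lattice_decode_to_message(alpha * y, p)   # assumed helper
            if passes_detection(candidate, p):     # from the product-code sketch
                return alpha, candidate            # declare alpha good
    return None, None                              # no candidate passed detection
```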
Performance of the Baseline Scheme

Heuristic setup: 16 × 16 grid in [0, log10(SNR)] × [0, log10(SNR)]
Error detection: 20 × 20 product code
Simulation scenario: two-transmitter Rayleigh-fading channel

[Bar chart: throughput of the coherent and blind schemes at 12, 14, 16, 18, and 20 dB.]
Performance of the Baseline Scheme (Cont'd)

Heuristic setup: 16 × 16 grid in [0, log10(SNR)] × [0, log10(SNR)]
Error detection: 20 × 20 product code
Simulation scenario: two-transmitter Rayleigh-fading channel

[Bar chart: average number of lattice decoding operations (with error detection) for the baseline scheme at 12–20 dB.]
Can we reduce the complexity?
Heuristic Method 1: Hierarchical Grid Search

Search order matters.
Heuristic Method 1: Hierarchical Grid Search

[Figure: example search orders over the grid — 3 searches vs. 9 searches.]
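One way to realize a coarse-to-fine search order is sketched below: candidates are visited level by level, starting from a coarse subgrid and progressively adding finer points. This particular ordering is an assumption for illustration, not necessarily the ordering used in the talk.

```python
# Sketch of a hierarchical ordering of the 16 x 16 candidate grid: level 0
# visits every 8th point, the next level every 4th, and so on, skipping points
# already visited at a coarser level. Probing in this order tends to hit a
# good scalar earlier than a row-by-row scan. (Illustrative ordering only.)
import numpy as np

def hierarchical_order(grid_size=16):
    visited = set()
    stride = grid_size // 2
    while stride >= 1:
        for i in range(0, grid_size, stride):
            for j in range(0, grid_size, stride):
                if (i, j) not in visited:
                    visited.add((i, j))
                    yield i, j
        stride //= 2

def blind_decode_hierarchical(y, snr, grid_size=16, p=3):
    limit = np.log10(snr)
    steps = np.linspace(0.0, limit, grid_size)
    for i, j in hierarchical_order(grid_size):
        alpha = steps[i] + 1j * steps[j]
        candidate = lattice_decode_to_message(alpha * y, p)   # assumed helper
        if passes_detection(candidate, p):
            return alpha, candidate
    return None, None
```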
Performance

Heuristic setup: 16 × 16 grid in [0, log10(SNR)] × [0, log10(SNR)]
Error detection: 20 × 20 product code
Simulation scenario: two-transmitter Rayleigh-fading channel

[Bar chart: average number of lattice decoding operations (with error detection) for the baseline and hierarchical-grid schemes at 12–20 dB.]
Can we do even better?
Heuristic Method 2: Early Rejection

[Figure: a tree of candidate decoding paths with branches labeled by code bits; 4 surviving paths so far.]

If none of the surviving paths can pass the partial error detection, then stop.
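The sketch below shows how early rejection could be layered on top of the grid search, under the assumption that the message is decoded row by row so that each completed row of the product code can be checked immediately; the row-wise decoding interface decode_next_row is a hypothetical helper introduced for illustration.

```python
# Early rejection (sketch): while decoding the k x k product-code message for a
# candidate alpha, check each completed row against its parity check and abandon
# the candidate as soon as a partial check fails, instead of paying for a full
# decoding plus a full error-detection pass.
# decode_next_row is a hypothetical row-by-row decoding interface.
import numpy as np

def try_candidate_with_early_rejection(alpha, y, p, k=20):
    rows = []
    for r in range(k):
        row = decode_next_row(alpha * y, r, p)        # hypothetical helper
        if np.mod(row.sum(), p) != 0:                 # partial (row) parity check
            return None                               # reject alpha early
        rows.append(row)
    msg = np.vstack(rows)
    return msg if passes_detection(msg, p) else None  # final full check
```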
Performance

Heuristic setup: 16 × 16 grid in [0, log10(SNR)] × [0, log10(SNR)]
Error detection: 20 × 20 product code
Simulation scenario: two-transmitter Rayleigh-fading channel

[Bar chart comparing the baseline, hierarchical-grid, early-rejection, and combined schemes at 12–20 dB; same complexity setup as the previous plots.]
Summary

1. Blind compute-and-forward: no CSI at the receiver.
2. A baseline blind scheme: from optimal scalars to good scalars; error-detection codes.
3. Two heuristic methods: hierarchically organized search; early rejection.
Thank You!