Codes on Graphs: Fundamentals

G. David Forney, Jr., MIT
CITW 2013, Xidian University, Xi'an, China

October 15, 2013


Acknowledgments: Heide Gluesing-Luerssen

Codes on graphs

Codes: used to approach Shannon's channel capacity. Linear/group codes are universally used in practice:
- Simplifies encoding and decoding
- Simplifies design and analysis
- Group property (c + C = C) ⇒ symmetry
- Binary linear codes are used for power-limited channels
- Euclidean-space group codes are used for band-limited channels

Graphical structure is increasingly important:
- Trellis structure: convolutional/trellis codes
- Bipartite graphs: low-density parity-check (LDPC) codes

Codes on graphs: the fundamental algebraic theory of such codes

Realizations of codes

Realization: a system of variables and constraints
- Behavior B = {all variable configurations that satisfy all constraints} (as in behavioral system theory)

Variables
- Linear case: a variable V takes values v in a vector space V
- Group case: a variable V takes values v in a finite abelian group V
- External variables (symbols) Ak: the variable alphabets of the code C being realized; fixed a priori
- Internal variables (states) Sj: auxiliary variables introduced by the designer of the realization; may be changed as we like

Constraints
- Defined by linear or group constraint codes Ci
- Each Ci involves a subset of the symbol and state variables

Constraints: Examples

Equality constraint (repetition code over V):
  v, v′, v″ ∈ V with v = v′ = v″

Sign-inversion constraint:
  v, v′ ∈ V with v = −v′

Zero-sum constraint (zero-sum code over V):
  v, v′, v″ ∈ V with v + v′ + v″ = 0

Isomorphism constraint:
  v ∈ V, v′ ∈ V′ ≅ V with v ↔ v′
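These constraint codes are easy to enumerate by brute force. A minimal sketch over V = GF(2) (the binary case; variable names are illustrative, not from the slides):

```python
from itertools import product

V = (0, 1)  # take V = GF(2) for concreteness

# Equality (repetition) constraint: v = v' = v''
equality = {t for t in product(V, repeat=3) if t[0] == t[1] == t[2]}

# Zero-sum constraint: v + v' + v'' = 0 over GF(2)
zero_sum = {t for t in product(V, repeat=3) if sum(t) % 2 == 0}

# Over GF(2), sign inversion v = -v' coincides with two-variable equality
sign_inv = {t for t in product(V, repeat=2) if t[0] == (-t[1]) % 2}

print(sorted(equality))  # the [3,1] repetition code
print(sorted(zero_sum))  # the [3,2] single-parity-check code
```

Over larger alphabets the equality and zero-sum constraints differ from each other exactly as the repetition and single-parity-check codes do.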

Realizations of codes (cont.)

Realization: a system of variables and constraints
- Define configuration spaces A = ∏k Ak, S = ∏j Sj
- Define A(i) and S(i): the Cartesian products of the alphabets of the symbol and state variables involved in Ci, so that Ci ⊆ A(i) × S(i)
- Behavior B = {(a, s) : (a(i), s(i)) ∈ Ci for all i} ⊆ A × S
- Code C = B|A ⊆ A
- Alternatively: C = external behavior, B = internal behavior
- If all Ci are linear (resp. group) codes, then B and C are linear (resp. group) codes
- Realizations are equivalent if they realize the same code C
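As a small illustration of these definitions, here is a brute-force sketch of a hypothetical realization (not from the slides) of the [4,3] single-parity-check code by two zero-sum constraints sharing one binary state:

```python
from itertools import product

# Two zero-sum constraint codes sharing one binary state variable s:
#   C1: a1 + a2 + s = 0,   C2: a3 + a4 + s = 0   (over GF(2))
B = {(a1, a2, a3, a4, s)
     for a1, a2, a3, a4, s in product((0, 1), repeat=5)
     if (a1 + a2 + s) % 2 == 0 and (a3 + a4 + s) % 2 == 0}

# The realized code is the projection of the behavior onto the symbols
C = {cfg[:4] for cfg in B}  # C = B|A

print(len(B), len(C))
```

Both constraints are linear, so B and C come out linear, as the last bullet above asserts.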

Tanner graph of a realization

[Figure: bipartite Tanner graph with symbol variables on top, constraint nodes in the middle, and state variables below]

Bipartite graph:
- Circles represent variables (filled = symbol, open = state)
- Boxes represent constraints
- A variable is connected to a constraint if that variable is involved in that constraint

Trellis realizations

Trellis realization = discrete-time state-space realization
- Symbols, states, and constraints are all indexed by Z or a subinterval of Z (a discrete, ordered time axis), or by Zm (a circular time axis: "tail-biting trellis")
- The variables involved in Ck are Sk, Ak, Sk+1; i.e., Ck ⊆ Sk × Ak × Sk+1; Ck defines the possible transitions (sk, ak, sk+1)
- Common framework for convolutional codes, trellis codes (including tail-biting), and classical discrete-time linear systems

Graphical model: a simple chain (or cycle) graph:

  ··· — Sk — Ck (Ak) — Sk+1 — Ck+1 (Ak+1) — Sk+2 — ···

Sum-product decoding

Sum-product decoding rule. Given an initial set of weights for each symbol and state variable:
- For each constraint code Ci:
  - For each codeword (a(i), s(i)) ∈ Ci:
    - For each variable involved in Ci: add the product of the other variable weights to a weight accumulator for that variable.

(Max-sum decoding: replace "sum" by "max" and "product" by "sum")

- Finite cycle-free graphs: finite, exact
- Graphs with cycles: iterative, approximate
- Computational complexity ≈ Σi |Ci|
- Communication complexity ≈ Σj |Sj|

When the weights are APPs:
- sum-product decoding = APP decoding (BCJR for trellises)
- max-sum decoding = block ML decoding (VA for trellises)
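The per-constraint rule above can be written down directly. A sketch (the function name and the particular weights are illustrative assumptions, not from the slides):

```python
from itertools import product

def sum_product_update(code, weights):
    """One sum-product step for a single constraint code.

    code:    set of codewords (tuples) of the constraint
    weights: list of dicts, weights[k][v] = current weight of value v
             at variable position k
    Returns the new (unnormalized) weight for each variable position.
    """
    n = len(weights)
    out = [{v: 0.0 for v in w} for w in weights]
    for cw in code:
        for k in range(n):
            p = 1.0
            for j in range(n):
                if j != k:
                    p *= weights[j][cw[j]]  # product of the other weights
            out[k][cw[k]] += p              # "sum" into the accumulator
    return out

# Zero-sum constraint over GF(2); positions 0 and 2 carry evidence
zero_sum = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}
w = [{0: 0.9, 1: 0.1}, {0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}]
new_w = sum_product_update(zero_sum, w)
print(new_w[1])
```

Replacing the `+=` by `max` and the `*=` by `+` (on log-weights) gives the max-sum rule described above.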

Normal realizations

Normal degree constraints
- Symbol variables must have degree 1
- State variables must have degree 2
- Degree: the number of constraints in which a variable is involved
- Note: trellis realizations are inherently normal

A normal realization is naturally represented by a normal graph:
- Constraint codes ⇔ vertices
- State variables ⇔ ordinary edges (degree 2)
- Symbol variables ⇔ half-edges (degree 1)

Any realization may be trivially "normalized" (see next slide)
- No essential change in graph topology

Normalizing a generic Tanner graph

[Figure: the Tanner graph of the earlier slide, with each variable node replaced by an equality constraint and each symbol given an external half-edge]

Normalization procedure
- Replace variables by equality constraints
- Add external half-edges ("dongles") for symbol variables
- Internal variables become replicas of symbol/state variables

Normal realization duality

Dualization procedure. Given a linear or group normal realization defined by symbols Ak, states Sj, and constraint codes Ci:
- Replace each symbol alphabet Ak by the dual alphabet Âk
  - Finite-dimensional vector space → dual vector space
  - Finite abelian group → dual group (character group)
  - In both cases, Âk ≅ Ak
- Replace each state alphabet Sj by the dual alphabet Ŝj
- Replace each constraint code Ci by the orthogonal code (Ci)⊥
- Invert the sign of one of each pair of state variables (sj, sj′)

Normal realization duality theorem [F01]: If the primal realization realizes a code C, then the dual realization realizes the orthogonal code C⊥.

Orthogonal constraints: Examples

Equality constraint over V:
  v, v′, v″ ∈ V with v = v′ = v″

Zero-sum constraint over V̂:
  v̂, v̂′, v̂″ ∈ V̂ with v̂ + v̂′ + v̂″ = 0

Note: the dual of a two-variable equality constraint, v = v′, is a sign-inversion constraint, v̂ = −v̂′.

Generalization:

Isomorphism constraint:
  v ∈ V, v′ ∈ V′ ≅ V with v ↔ v′

Negative adjoint isomorphism:
  v̂ ∈ V̂, v̂′ ∈ V̂′ ≅ V̂ with v̂ ↔ −v̂′

Normal realization duality: Examples

Projection/cross-section duality. Given C ⊆ A × B and C⊥ ⊆ Â × B̂:

  Cross-section:  C:A = {a : (a, 0) ∈ C} ⊆ A
  Projection:     (C⊥)|Â = {â : ∃(â, b̂) ∈ C⊥} ⊆ Â

(In the diagrams, B is constrained to {0} in the primal; correspondingly, B̂ is constrained to {0}⊥ = B̂, i.e., unconstrained, in the dual.)

Projection/cross-section duality: (C:A)⊥ = (C⊥)|Â

Graphical symbols for zero and free constraints:
- B is "grounded"; B̂ is "free"
- Both are now internal variables
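This duality can be checked by brute force over GF(2). A sketch (the particular code C below is an arbitrary small example chosen for illustration):

```python
from itertools import product

def dual(code, n):
    """Orthogonal code under the componentwise mod-2 inner product."""
    return {y for y in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in code)}

# C ⊆ A × B with A = B = GF(2)^2 (coordinates 0-1 form A, 2-3 form B)
C = {(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)}
Cd = dual(C, 4)

cross_A = {x[:2] for x in C if x[2:] == (0, 0)}  # C:A  (B grounded)
proj_A_dual = {y[:2] for y in Cd}                # (C⊥)|Â  (B̂ free)

assert dual(cross_A, 2) == proj_A_dual           # (C:A)⊥ = (C⊥)|Â
```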

Normal realization duality: Examples (cont.)

Sum/intersection duality. Given A, B ⊆ G and A⊥, B⊥ ⊆ Ĝ:

  Sum:           A + B ⊆ G
  Intersection:  A⊥ ∩ B⊥ ⊆ Ĝ

Sum/intersection duality: (A + B)⊥ = A⊥ ∩ B⊥
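Again a brute-force check over GF(2) is immediate. A sketch (the subspaces A and B are arbitrary small examples):

```python
from itertools import product

def dual(code, n):
    """Orthogonal code under the componentwise mod-2 inner product."""
    return {y for y in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in code)}

# Two subspaces of G = GF(2)^3
A = {(0, 0, 0), (1, 1, 0)}
B = {(0, 0, 0), (0, 1, 1)}

sum_AB = {tuple((a + b) % 2 for a, b in zip(x, y)) for x in A for y in B}

assert dual(sum_AB, 3) == dual(A, 3) & dual(B, 3)  # (A+B)⊥ = A⊥ ∩ B⊥
```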

Normal realization duality: Examples (cont.)

Normal realization duality:

[Diagram: a normal realization of C, with constraint codes Ci connected through state pairs s, s′ ∈ S tied by equality constraints; in the dual realization of C⊥, each Ci is replaced by (Ci)⊥ and each equality constraint by a sign inversion on ŝ, ŝ′ ∈ Ŝ]

Generalized normal realization duality:

[Diagram: the same construction with isomorphism constraints in the primal replaced by negative adjoint isomorphisms in the dual]

Dualizing a generic Tanner graph

[Figure: primal normalized Tanner graph with equality constraints and constraint codes c1, c2, ..., cI; dual graph with zero-sum constraints and orthogonal codes c1⊥, c2⊥, ..., cI⊥]

Given an arbitrary linear or group realization of C, we obtain a dual realization of C⊥.

Parity-check and generator realizations

[Figure: parity-check realization (equality nodes connected to zero-sum nodes) and its dual, a generator realization (zero-sum nodes connected to equality nodes)]

  C = {a ∈ F^n : aH^T = 0}        C⊥ = {â = x̂H : x̂ ∈ F^(n−k)}

- In a parity-check realization (e.g., LDPC codes), state variables are replicas of symbol variables ak ∈ F
- In a generator realization, state variables are replicas of generating variables x̂i ∈ F
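The two descriptions above can be verified against each other by brute force. A sketch (H is an illustrative [4,2] parity-check matrix, not one from the slides):

```python
from itertools import product

H = [(1, 1, 1, 0), (0, 1, 1, 1)]  # an (n-k) x n parity-check matrix; n=4, k=2

# Parity-check realization: C = {a in F^n : a H^T = 0}
C = {a for a in product((0, 1), repeat=4)
     if all(sum(ai * hi for ai, hi in zip(a, h)) % 2 == 0 for h in H)}

# Generator realization of the orthogonal code: C⊥ = {x̂H : x̂ in F^(n-k)}
Cperp = {tuple(sum(xi * hij for xi, hij in zip(x, col)) % 2
               for col in zip(*H))
         for x in product((0, 1), repeat=2)}

# Every codeword of C is orthogonal to every codeword of C⊥
assert all(sum(ci * di for ci, di in zip(c, d)) % 2 == 0
           for c in C for d in Cperp)
```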

Dual trellis realizations

Trellis realization:

  ··· — Sk — Ck (Ak) — Sk+1 — Ck+1 (Ak+1) — Sk+2 — ···

Dual trellis realization (with a sign inversion on each dual state edge):

  ··· — Ŝk — (Ck)⊥ (Âk) — Ŝk+1 — (Ck+1)⊥ (Âk+1) — Ŝk+2 — ···

Tail-biting trellis realizations

Tail-biting trellis: a state realization on a circular time axis Zm:

  S0 — C0 (A0) — S1 — C1 (A1) — S2 — C2 (A2) — S3 — C3 (A3) — S4 — C4 (A4) — S0

Dual tail-biting trellis (with a sign inversion on each dual state edge):

  Ŝ0 — C0⊥ (Â0) — Ŝ1 — C1⊥ (Â1) — Ŝ2 — C2⊥ (Â2) — Ŝ3 — C3⊥ (Â3) — Ŝ4 — C4⊥ (Â4) — Ŝ0

Trimness and properness

A constraint code Ci is:
- trim if, for every state space Sj involved in Ci, the projection of Ci onto Sj is all of Sj
- proper if, for every state space Sj involved in Ci, the variables other than Sj determine Sj; i.e., if Ci is linear, the cross-section of Ci on Sj is trivial

Theorem (trim/proper duality): Ci is trim if and only if (Ci)⊥ is proper.
Proof: projection/cross-section duality: ((Ci)|Sj)⊥ = ((Ci)⊥):Ŝj
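The theorem is easy to confirm numerically in the linear case. A sketch over GF(2), treating every coordinate as a state space (the helper names are illustrative):

```python
from itertools import product

def dual(code, n):
    """Orthogonal code under the componentwise mod-2 inner product."""
    return {y for y in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in code)}

def trim_at(code, j):
    """Projection of the code onto coordinate j is the full alphabet."""
    return {x[j] for x in code} == {0, 1}

def proper_at(code, j):
    """Cross-section of the code on coordinate j is trivial."""
    return {x[j] for x in code
            if all(x[i] == 0 for i in range(len(x)) if i != j)} == {0}

C = {(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}  # zero-sum constraint code
Cd = dual(C, 3)                                    # the repetition code

for j in range(3):
    assert trim_at(C, j) == proper_at(Cd, j)   # Ci trim  <=>  (Ci)⊥ proper
    assert trim_at(Cd, j) == proper_at(C, j)
```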

Trimness and properness (cont.)

Dual realizations of Ci and (Ci)⊥:
- Not trim: T = (Ci)|Sj ⊂ Sj
- Improper: T⊥ = ((Ci)⊥):Ŝj ⊃ {0}

[Diagram: Ci factored through C̃i via the inclusion map T ↪ Sj; dually, (Ci)⊥ factored through (C̃i)⊥ via the natural map Ŝj → Ŝj/T⊥]

Not trim or improper ⇒ locally reducible

Local reducibility
- Obviously, if Ci is not trim, then the state space Sj can be "trimmed" without changing the code C that is realized.
- Correspondingly, if Ci is not proper, then the state space Sj can be "merged" without changing the code C that is realized.

[Diagram: trimming replaces Sj by T via the inclusion map T ↪ Sj, factoring Ci through C̃i and Ci′; merging replaces Sj by the quotient Sj/T via the natural map Sj → Sj/T]

Conclusion: We may assume w.l.o.g. that all constraint codes are trim and proper.

Trimness and properness (cont.)

Example (dual conventional trellis realizations): C = {000, 110}; C⊥ = {000, 110, 001, 111}

[Figure: two-section trellis diagrams, showing a realization of C⊥ that is not proper and its merged version, and a realization that is not trim and its trimmed version]

Fragments

Fragment F: a connected part of a normal realization, obtained by cutting an edge set δ(F) (the boundary of F)
- δ(F) = {external state variables of F}
- External state variables have degree 1 ↔ half-edges
- Minimal fragment: a constraint (no internal state variables)
- Maximal fragment: a realization (no external state variables)

Example: trellis fragment F[j,k), with δ(F[j,k)) = {Sj, Sk}:

  Sj — Cj (Aj) — Sj+1 — Cj+1 (Aj+1) — Sj+2 — ··· — Sk−1 — Ck−1 (Ak−1) — Sk

Behaviors of fragments

Elements of a fragment F:
- Symbol configurations a^F ∈ A^F
- Internal state variable configurations s, s′ ∈ S^F,int
- External state variable configurations s^F,ext ∈ S^F,ext
- Constraint codes Ci

Internal behavior B^F = {all (a^F, s^F,ext, s, s′) : all Ci satisfied, s′ = s}
- Generalizes the internal behavior B of a realization

External behavior C^F = B^F restricted to A^F × S^F,ext
- Generalizes the external behavior C of a realization
- F may be regarded as a normal realization of C^F
- Externally, F is a generalized constraint code

Fragment degrees

Degree of a fragment F = the size |δ(F)| of its boundary (i.e., the number of external state variables)

Definitions
- Degree 0 (no external state variables): normal realization
- Degree 1: leaf fragment
- Degree 2: trellis fragment
- Degree 3: cubic fragment
- Degree ≥ 4: hypercubic fragment

Note: we may assume |δ(F)| ≤ 3 without increasing max |Ci| [F03].

Structure of length-2 group codes

Length-two group code: a subgroup C ⊆ A × B (group theory: a subdirect product)

Fundamental structure theorem. Given a subgroup C ⊆ A × B, let C|A, C|B be the projections of C onto A, B, and let C:A, C:B be the cross-sections of C on A, B. Then

  C|A / C:A ≅ C|B / C:B ≅ C / (C:A × C:B).

Proof: the correspondence theorem, applied to the projections πA : C → C|A/C:A and πB : C → C|B/C:B, whose common kernel is C:A × C:B.
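The three quotients in the theorem can be compared by counting. A sketch for a small code C ⊆ GF(2)² × GF(2)² (the generators below are an arbitrary illustrative choice):

```python
from itertools import product

# A small group code C ⊆ A × B with A = B = GF(2)^2, given by two generators
gens = [((1, 0), (1, 1)), ((0, 1), (0, 0))]

C = set()
for u, v in product((0, 1), repeat=2):
    a = tuple((u * gens[0][0][i] + v * gens[1][0][i]) % 2 for i in range(2))
    b = tuple((u * gens[0][1][i] + v * gens[1][1][i]) % 2 for i in range(2))
    C.add((a, b))

proj_A = {a for a, b in C}                  # C|A
cross_A = {a for a, b in C if b == (0, 0)}  # C:A
proj_B = {b for a, b in C}                  # C|B
cross_B = {b for a, b in C if a == (0, 0)}  # C:B

# |C|A / C:A| = |C|B / C:B| = |C / (C:A × C:B)|
assert (len(proj_A) // len(cross_A)
        == len(proj_B) // len(cross_B)
        == len(C) // (len(cross_A) * len(cross_B)))
```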

Length-2 group code theorem (cont.)

Generic realization of a length-two group code C ⊆ A × B:

[Diagram: A — left constraint — C|A/C:A ↔ C|B/C:B — right constraint — B]

Left constraint: {(a, a + C:A) ∈ A × C|A/C:A : a ∈ C|A}
- May be regarded as a combination of constraints based on the inclusion map A ↩ C|A and the natural map C|A → C|A/C:A, or the natural map A → A/C:A and the inclusion map A/C:A ↩ C|A/C:A
- Will be called an interface node

Middle constraint: the isomorphism constraint C|A/C:A ↔ C|B/C:B
Right constraint: {(b, b + C:B) ∈ B × C|B/C:B : b ∈ C|B}

Dual length-2 group codes

Dual realization of the orthogonal group code C⊥ ⊆ Â × B̂:

[Diagram: Â — interface node — (C:A)⊥/(C|A)⊥ ↔ (C:B)⊥/(C|B)⊥ — interface node — B̂]

Projection/cross-section duality: (C⊥)|Â = (C:A)⊥, (C⊥):Â = (C|A)⊥
Quotient group duality: (C⊥)|Â / (C⊥):Â acts as the dual to C|A/C:A
Middle constraint: a negative adjoint isomorphism constraint

Special case: Homomorphisms

Let φ : A → B be any homomorphism from A to B.
Graph of φ: C = {(a, φ(a))} ⊆ A × B
Note: C|A = A, C:A = ker φ, C|B = im φ, C:B = {0}

Generic realization of C:

[Diagram: A — A/(ker φ) ↔ im φ — B]

Corollary (fundamental theorem of homomorphisms): A/(ker φ) ≅ im φ.

Dual realization of C⊥ = {(−φ̂(b̂), b̂)} ⊆ Â × B̂, via the negative adjoint homomorphism −φ̂ : B̂ → Â:

[Diagram: Â — im φ̂ ↔ B̂/(ker φ̂) — B̂]
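The corollary is easy to check by counting for a finite abelian group. A sketch (the group Z₈ and the map a ↦ 2a are an illustrative choice, not from the slides):

```python
# φ : Z_8 → Z_8, φ(a) = 2a mod 8, with graph C = {(a, φ(a))}
A = range(8)

def phi(a):
    return (2 * a) % 8

C = {(a, phi(a)) for a in A}

ker_phi = {a for a in A if phi(a) == 0}  # C:A = ker φ
im_phi = {phi(a) for a in A}             # C|B = im φ

# Fundamental theorem of homomorphisms: |A / ker φ| = |im φ|
assert len(A) // len(ker_phi) == len(im_phi)
```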

Length-n group codes

Length-n group code: a subgroup C ⊆ ∏k Ak

For any k, write C as a length-2 group code:

  C ⊆ Ak × ∏_{k′≠k} Ak′ = Ak × Bk

By the fundamental structure theorem, C has a realization in which an interface node connects Ak to the effective variable alphabet Ãk, which in turn is a variable of an effective code C̃:
- Effective variable alphabet Ãk = C|Ak / C:Ak
- Note: |Ãk| ≤ |Ak|, with equality iff C|Ak = Ak and C:Ak = {0}
- C̃ is trim and proper at Ãk
- C is a coset code over the cosets of C:Ak in C|Ak

Trimness and properness of group codes

Review: a length-n group code C ⊆ ∏k Ak is:
- Trim at Ak if C|Ak = Ak (i.e., all ak are used)
- Proper at Ak if C:Ak = {0} (i.e., ak is determined by {ak′ : k′ ≠ k})

Trim/proper duality: C is trim at Ak ⇔ C⊥ is proper at Âk
Proof: projection/cross-section duality: (C⊥):Âk = (C|Ak)⊥

Remarks on effective variable alphabets:
- The effective variable alphabet Ãk = Ak iff C is trim and proper at Ak
- An internal Ak is reducible iff C is not trim and proper at Ak

Remarks on homomorphisms:
- C ⊆ A × B is a homomorphism ⇐⇒ C is trim at A and proper at B
- C ⊆ A × B is an isomorphism ⇐⇒ C is trim and proper at both A and B

Canonical decomposition

Canonical decomposition of a length-n group code C ⊆ ∏k Ak:

[Diagram: each symbol alphabet A1, A2, ..., An connects through an interface node to its effective alphabet Ã1, Ã2, ..., Ãn, all of which connect to the effective code C̃]

- Effective code C̃ ⊆ ∏k Ãk
- C̃ is trim and proper at all effective symbol alphabets Ãk
- Any realization is equivalent to a realization made up only of:
  - interface nodes
  - constraint codes that are trim and proper at all variables

State space theorem for leaf fragments

Leaf fragment F: one external state space Sj
- C^F ⊆ A^F × Sj, where A^F is the aggregate symbol alphabet
- If C^F is trim and proper at Sj, then F may be realized by an interface node connecting A^F to Ã^F, with Sj identified with Ã^F

Theorem (state space theorem for leaf fragments). If F is a linear or group leaf fragment whose external behavior C^F ⊆ A^F × Sj is trim and proper at Sj, then

  Sj ≅ Ã^F = (C^F)|A^F / (C^F):A^F

Connecting fragments

Given two fragments F1, F2 with external behaviors C1, C2:
- Connect them by imposing an isomorphism constraint Sj ≅ Sj′
- ⇒ combined behavior C12

[Diagram: C1, with symbols A(1) and remaining states S(1\j), joined through Sj ↔ Sj′ to C2, with symbols A(2) and remaining states S(2\j′)]

Lemma (connected fragments). In this situation, if C1 and C2 are trim (resp. proper), then C12 is trim (resp. proper).

Connecting cycle-free fragments

Elementary graph theory: a cycle-free graph may be constructed by starting with its vertices and connecting them with edges, one at a time.

Definitions: a fragment F is
- externally trim (resp. proper) if C^F is trim (resp. proper) at all its variables
- internally trim (resp. proper) if all its constraints are trim (resp. proper)

Theorem (trimness/properness of cycle-free fragments). If a cycle-free fragment F is internally trim (resp. proper), then it is externally trim (resp. proper).

Cycle-free leaf fragments

Combine:
- the trimness/properness theorem for cycle-free fragments
- the state space theorem for leaf fragments

Theorem (cycle-free leaf fragments). If a cycle-free leaf fragment F with external behavior C^F ⊆ A^F × Sj is internally trim and proper, then it is externally trim and proper, so

  Sj ≅ Ã^F = (C^F)|A^F / (C^F):A^F

Minimal cycle-free realizations

Elementary graph theory: cutting any edge of a cycle-free realization at Sj disconnects it into two cycle-free leaf fragments, Fj and Pj (rooted trees with common root Sj).

Lemma (trim + proper ⇒ minimal). If a cycle-free realization is internally trim and proper, then every state space Sj is isomorphic to C|A^Fj / C:A^Fj, and also to C|A^Pj / C:A^Pj. Moreover, Sj is minimal.

Proof: There is an equivalent realization of C in which the two leaf fragments are connected through an isomorphism of their effective alphabets Ã^Fj ↔ Ã^Pj, so that

  C = {(a^Fj, a^Pj) : a^Fj + (C^Fj):A^Fj ↔ a^Pj + (C^Pj):A^Pj} ⊆ A^Fj × A^Pj.

If a^Fj and a^Pj are not in corresponding cosets, then (a^Fj, a^Pj) ∉ C. Thus Sj is minimal.

Minimal cycle-free realizations (cont.)

But minimal ⇒ trim + proper, since a realization that is not internally trim and proper is reducible.

This improves the proof of the following fundamental theorem:

Theorem (minimal = trim + proper [FGL12]). If a finite connected normal linear or group realization R of a code C is cycle-free, then the following are equivalent:
(1) R is internally trim and proper;
(2) every state space Sj is isomorphic to C|A^Fj / C:A^Fj, and also to C|A^Pj / C:A^Pj;
(3) every state space Sj is minimal; i.e., R is minimal.

Minimal tree vs. trellis realizations

Basic question: can a cycle-free (tree) realization of C be "simpler" than a trellis realization?

The state space theorem ⇒ every state space Sj in a cycle-free realization is isomorphic to a state space in some minimal trellis realization.
- This suggests that no great reduction in complexity is possible.

A counterexample of Kashyap ⇒ there can be an arbitrarily large difference in constraint complexity (treewidth) between a minimal trellis and a minimal tree realization (for a poor code C).

Kashyap has also shown that for some good codes (Reed-Solomon, Reed-Muller) no reduction in constraint complexity is possible. In general, for good codes, little improvement is expected.

Cycle-free vs. cyclic representations

Cycle-free realizations (e.g., conventional trellis realizations):
- Given the graph G, the minimal realization of a linear code C is unique and easily computed
- Non-iterative, exact decoding algorithms (e.g., Viterbi algorithm, sum-product (BCJR) decoding)
- However, the complexity of any cycle-free realization of a capacity-approaching code is necessarily high

Realizations with cycles:
- No unique minimal realizations
- Iterative, approximate decoding algorithms (e.g., sum-product decoding)
- Feasible decoding complexity (e.g., LDPC codes, turbo codes)

Cyclic graphs and 2-cores

The 2-core G′ of a connected graph G is its maximal connected subgraph in which all vertices have degree ≥ 2.
- i.e., G′ is the maximal connected leafless subgraph
- G′ may be found by deleting leaf vertices until none remain
- an edge is in G′ if and only if it is part of a cycle
- G′ is empty if and only if G is cycle-free

Any finite connected graph may be partitioned into:
- a cyclic part, consisting of the 2-core;
- a cycle-free part, consisting of cycle-free leaf fragments
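The leaf-deletion procedure above translates directly into code. A sketch (the adjacency-dict representation and the function name are illustrative):

```python
def two_core(adj):
    """2-core of an undirected graph: repeatedly delete degree-<2 vertices.

    adj: dict mapping each vertex to the set of its neighbors.
    Returns the set of vertices surviving in the 2-core.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < 2:          # a leaf (or isolated vertex)
                for u in adj[v]:
                    adj[u].discard(v)    # detach it from its neighbor
                del adj[v]
                changed = True
    return set(adj)

# A triangle with a pendant path attached: the 2-core is the triangle
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(two_core(g))
```

On a cycle-free graph the procedure deletes everything, matching the bullet "G′ is empty if and only if G is cycle-free".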

Cyclic realizations (cont.)

Example: a trim/proper realization with two cycle-free leaf fragments

[Diagram: two leaf fragments, each an interface node connecting aggregate symbols A^F to effective symbols Ã^F, attached through external states S^F to the 2-core]

Decomposition into:
- the effective code C̃ (the 2-core)
- cycle-free leaf fragments (large interface nodes)
- effective aggregate symbol variables Ã^F (C̃ trim and proper)

Remarks
- In the cycle-free case, this reduces to the minimal cycle-free realization
- Sum-product decoding of cycle-free leaf fragments is non-iterative and exact; all the difficulty is in the 2-core

Internal and external behavior of fragments

Review: elements of a fragment F of a normal realization:
- Symbol configurations a^F ∈ A^F (degree 1)
- Internal state variable configurations s, s′ ∈ S^F,int (degree 2)
- External state variable configurations s^F,ext ∈ S^F,ext (degree 1)
- Constraint codes Ci

Internal behavior B^F = {(a, s^ext, s, s′) ∈ ∏i Ci : s′ = s}

External behavior C^F = B^F restricted to A^F × S^F,ext

[Diagrams: realization of B^F, with the internal-state replicas s, s′ tied by an equality constraint; C^F is obtained by projecting away the internal states]

Observability of fragments

In general, a fragment F is said to be observable if its state variables are determined by an external observation:
- F is externally observable if a determines s^ext; i.e., if the projection C^F → A^F is one-to-one
- F is internally observable if (a, s^ext) determines s^int; i.e., if the projection B^F → C^F is one-to-one
- F is totally observable if a determines (s^ext, s^int); i.e., if the projection B^F → A^F is one-to-one

Immediate results:
- totally observable = externally and internally observable
- a normal realization R is trivially externally observable
- a constraint code Ci is trivially internally observable
- for a leaf fragment: externally observable = proper

Observability tests

External observability:
- F is externally observable iff S^F,ext,u = (C^F):S^F,ext is trivial
- [Diagram: realization of the externally unobservable state space S^F,ext,u, obtained by grounding the symbol variables of C^F]

Internal observability:
- F is internally observable iff S^F,int,u = (B^F):S^F,int is trivial
- [Diagram: realization of the internally unobservable state space S^F,int,u, obtained by grounding the symbol and external state variables of B^F]

Controllability of fragments

In general, a fragment F will be defined to be controllable in such a way that:

  F is controllable ⇐⇒ the dual fragment F° is observable

Method: dualize the realization of the observability test, and define controllability to be consistent with the dual test.

External controllability

F is externally controllable if (B^F)|S^F,ext = S^F,ext
- [Diagram: realization of S^F,ext,c = (B^F)|S^F,ext = (C^F)|S^F,ext]
- F is externally controllable iff F° is externally observable
- For a leaf fragment: externally controllable = trim
- For a trellis fragment, this corresponds to [j, k)-controllability:
  - every state transition (sj, sk) ∈ Sj × Sk = S^F,ext can occur

External controllability test: |C^F|/|A^F| = |S^F,ext,c| ≤ |S^F,ext| (linear case: dim C^F − dim A^F = dim S^F,ext,c ≤ dim S^F,ext), with equality if and only if F is externally controllable.

Internal controllability

Define two configuration spaces:
- Configuration universe U = ∏i Ci ⊆ A^F × S^F,ext × S^F,int × S^F,int
- Validity space V = {(a, s^ext, s, s′) : s = s′}

Internal behavior B^F = U ∩ V

Define two corresponding check spaces:
- Configuration check space U⊥ = ∏i (Ci)⊥
- Validity check space V⊥ = {(0, 0, ŝ, ŝ′) : ŝ = −ŝ′}

Behavior check space (B^F)⊥ = U⊥ + V⊥

Definition: F is internally controllable if U⊥ and V⊥ are independent
- i.e., if U⊥ ∩ V⊥ is trivial, so (B^F)⊥ = U⊥ × V⊥
- But U⊥ ∩ V⊥ = {(0, 0, ŝ, −ŝ) ∈ ∏i (Ci)⊥}; thus it is trivial ⇐⇒ the dual fragment F° is internally observable

Internal controllability test

Dualize the internal observability test:
- Definition: the internally controllable subspace is
  S^F,int,c = {s − s′ : (a, s^ext, s, s′) ∈ U = ∏i Ci}
- [Diagram: realization of S^F,int,c, with the internal-state replicas s, s′ combined by a zero-sum constraint with a sign inversion]
- S^F,int,c is the image of the validity check homomorphism v : U → S^F,int defined by v(a, s^ext, s, s′) = s − s′. Note: ker v ≅ B^F
- F is internally controllable ⇐⇒ S^F,int,c = S^F,int

Internal controllability test: |U|/|B^F| = |S^F,int,c| ≤ |S^F,int| (linear case: dim U − dim B^F = dim S^F,int,c ≤ dim S^F,int), with equality ⇐⇒ F is internally controllable.

Internal unobservability and cycles

Theorem [FGL12]: If F is an internally unobservable trim and proper linear or group fragment with a nonzero internal state configuration s such that (0, 0, s) ∈ B^F, then the support of s must be a cycle or generalized cycle.

Cycle: a finite connected graph with all vertex degrees = 2.
Generalized cycle: a finite connected graph with all vertex degrees ≥ 2 (i.e., leafless). (A 2-core is a generalized cycle.)

[Figure: example of a generalized cycle]

Internal controllability and observability: Examples

Linear tail-biting trellis realizations are the simplest cyclic realizations.
- Since a realization is always externally observable/controllable, "observable/controllable" = internally observable/controllable.

Example (dual tail-biting trellis realizations): C = {000, 011, 101, 110}; C⊥ = {000, 111}

[Figure: a tail-biting trellis for C that is unobservable, and the dual tail-biting trellis for C⊥ that is uncontrollable]

This resembles observability/controllability of classical LTI systems on the bi-infinite time axis Z.

Controllability test for linear realizations

General internal controllability test for linear realizations:

  dim B ≥ dim U − dim S, with equality ⇐⇒ controllable,

where dim U = Σi dim Ci and dim S = Σj dim Sj.

Theorem: A linear realization is internally controllable iff

  dim B = Σi dim Ci − Σj dim Sj.

Examples: (TBT 1): 3 = 6 − 3 ⇒ controllable. (TBT 2): 1 ≠ 3 − 3 ⇒ uncontrollable.
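The dimension count can be verified by brute force for the first example (TBT 1). A sketch, assuming the standard tail-biting realization of the single-parity-check code with section constraints s_{k+1} = s_k + a_k (an illustrative reconstruction, since the slides give only the trellis picture):

```python
from itertools import product
from math import log2

def dim(space):
    """Dimension over GF(2) of a linear space given as an explicit set."""
    return int(log2(len(space)))

# Tail-biting trellis on Z_3 realizing C = {000, 011, 101, 110}:
# each section constraint is C_k = {(s_k, a_k, s_{k+1}) : s_{k+1} = s_k + a_k}
Ck = {(s, a, (s + a) % 2) for s in (0, 1) for a in (0, 1)}

B = {(a0, a1, a2, s0, s1, s2)
     for a0, a1, a2, s0, s1, s2 in product((0, 1), repeat=6)
     if (s0, a0, s1) in Ck and (s1, a1, s2) in Ck and (s2, a2, s0) in Ck}

# Internal controllability test: dim B = sum_i dim C_i - sum_j dim S_j
assert dim(B) == 3 * dim(Ck) - 3 * 1  # 3 = 6 - 3  =>  controllable
```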

Unobservable or uncontrollable ⇒ reducible

Theorem [FGL12]: An unobservable linear realization on a finite graph G with a nonzero trajectory (0, s) ∈ B may be locally reduced by trimming any single state space in the support of s. The dual uncontrollable realization may be correspondingly locally reduced by the dual merging ("pinching") operation.

Example (dual tail-biting trellis realizations): C = {000, 011, 101, 110}; C⊥ = {000, 111}

[Figure: the reduced tail-biting trellises, now observable and controllable respectively]

Conclusion: We may assume w.l.o.g. that all linear realizations are one-to-one (observable).

Open research question: extend to group realizations.

Uncontrollable tail-biting trellis realizations

Theorem [FGL12]: A trim tail-biting trellis realization is uncontrollable if and only if its behavior is disconnected.

Example (dual tail-biting trellis realizations): C = ⟨01110, 10010, 01101⟩; C⊥ = ⟨10111, 01100⟩

[Figure: an unobservable tail-biting trellis for C, and the dual uncontrollable tail-biting trellis for C⊥, whose behavior is disconnected]

Open research questions:
1. Extend to general graphs.
2. Elucidate connections between codes over Z and over Zm.

Could uncontrollability be good for decoding?

Theorem: A linear parity-check realization (e.g., of an LDPC code) is controllable if and only if its parity checks are independent.

There is some theoretical and empirical evidence that adding redundant parity checks helps in decoding LDPC codes. Can uncontrollability be exploited more systematically?

John Baras (Allerton, 2012): "In control system design, a little bit of carefully designed uncontrollability (nonminimality) can be useful for robustness."

Conjecture: in LDPC code design, a little bit of carefully designed uncontrollability (redundancy) can be useful to help decoder performance.

Trellis fragments

Trellis fragment F: precisely two external state variables Sj, Sk

  Sj — C[j,k) (A[j,k)) — Sk

- External behavior C[j,k) ⊆ A[j,k) × Sj × Sk
- Assume trim and proper

Transition spaces of trellis fragments

Transition space T[j,k) = (C[j,k))|Sj×Sk ⊆ Sj × Sk
- A trellis realization is [j, k)-controllable if T[j,k) = Sj × Sk
- Classical notion of controllability

Unobservable transition space U[j,k) = (C[j,k)):Sj×Sk ⊆ Sj × Sk
- A trellis realization is [j, k)-observable if U[j,k) = {(0, 0)}
  - ⇐⇒ the external symbol trajectory a[j,k) determines s[j,k]
- Classical notion of observability

Theorem: The unobservable transition space U[j,k) of C[j,k) is the orthogonal space to the transition space (T°)[j,k) of (C[j,k))⊥.
Corollary: C[j,k) is [j, k)-observable ⇐⇒ (C[j,k))⊥ is [j, k)-controllable
- Classical [j, k)-observability/controllability duality

Trellis fragment structure

A trim transition space T[j,k) ⊆ Sj × Sk is the direct sum of:
1. A start space F[j,k), consisting of all elements of T[j,k) of the form (0, sk).
2. A stop space L[j,k), consisting of all elements of T[j,k) of the form (sj, 0).
3. An unreachable space T[j,k)/(F[j,k) × L[j,k)).

- [j, k)-controllable ⇐⇒ the unreachable space is trivial
- T[j,k) is isomorphic to C[j,k) modulo the parallel transition space A0[j,k) = {a[j,k) : (a[j,k), 0, 0) ∈ C[j,k)}

Concatenating trellis fragments

Concatenating trellis fragments with external behaviors C[j,k), C[k,ℓ) ⇒ a trellis fragment with external behavior C[j,ℓ) ⊆ A[j,ℓ) × Sj × Sℓ:

  Sj — C[j,k) (A[j,k)) — Sk — C[k,ℓ) (A[k,ℓ)) — Sℓ  =  Sj — C[j,ℓ) (A[j,ℓ)) — Sℓ

- The concatenation is trim and proper if its components are
- The "reachable spaces" V[j,ℓ) ⊆ Sj and W[j,ℓ) ⊆ Sℓ are isomorphic to W[j,k) + V[k,ℓ) ⊆ Sk
  - i.e., the concatenation is not less controllable than each component
- The "unobservable projections" X[j,ℓ) ⊆ Sj and Y[j,ℓ) ⊆ Sℓ are isomorphic to Y[j,k) ∩ X[k,ℓ) ⊆ Sk
  - i.e., the concatenation is not less observable than each component

Tail-biting trellis fragments

Tail-biting trellis R: defined on a circular time axis Zm

Complementary fragments
- Cut the time axis at Sj and Sk
- This yields two complementary fragments, R[j,k) and R[k,j)
- For both, the external state spaces are Sj and Sk

[Diagram: circular trellis cut at Sj and Sk into the complementary fragments R[j,k) and R[k,j)]

Tail-biting trellis fragments (cont.)

Fragment trimness and controllability
- R is [j, k)-trim if every path in B[j,k) is part of a path in B
  - R is branch-trim at Ci iff R is [i, i + 1)-trim
  - R is state-trim at Sj iff R is [j + m, j)-trim
- Theorem [GLF13]: If R is [k, j)-controllable, then R is [j, k)-trim. If R is controllable and [j, k)-trim, then R is [k, j)-controllable.

Tail-biting trellis fragments (cont.)

Tail-biting trellis reducibility
- A [k, j)-reduction of R is a replacement of the interior of the fragment R[k,j) by some other fragment whose state spaces are no larger, without changing the code C that is realized or the external state spaces Sj and Sk.
- R is [k, j)-irreducible if it has no [k, j)-reduction other than itself, up to isomorphism.
- Theorem [GLF13]: trim, proper, observable, controllable, [j, k)-observable, and [j, k)-controllable ⇒ [k, j)-irreducible.
- Theorem [GLF13]: Under a mild additional condition, if R is trim, proper, observable, and controllable, but not [j, k)-observable, then R is [k, j)-reducible.

Future work

Further structural analysis using fragments
- Good understanding of cycle-free and trellis realizations
- Extend to fragments of degree 3 (cubic realizations)

Better connections to engineering analysis of codes and decoding
- Code performance
  - What graph structures yield good performance?
  - Can a little carefully designed uncontrollability help decoding?
- Decoding performance analysis
  - Connections to pseudocodewords, graph covers (Vontobel)