CS 188: Artificial Intelligence
Bayes’ Nets: Independence
Dan Klein, Pieter Abbeel
University of California, Berkeley

Bayes’ Nets
A Bayes’ net is an efficient encoding of a probabilistic model of a domain
Questions we can ask:
Inference: given a fixed BN, what is P(X | e)?
Representation: given a BN graph, what kinds of distributions can it encode?
Modeling: what BN is most appropriate for a given domain?
Bayes’ Net Semantics
A directed, acyclic graph, one node per random variable
A conditional probability table (CPT) for each node: a collection of distributions over X, one for each combination of parents’ values
Bayes’ nets implicitly encode joint distributions as a product of local conditional distributions
To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together:
P(x1, x2, ..., xn) = Π_i P(xi | parents(Xi))
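To make the product-of-conditionals semantics concrete, here is a minimal Python sketch (not from the slides; the two-variable network, names, and numbers are invented for illustration):

```python
# Minimal sketch: a Bayes' net as a dict of CPTs, where each CPT maps
# (value, parent values...) tuples to probabilities.  The probability of a
# full assignment is the product of the relevant CPT entries.

def joint_probability(assignment, parents, cpts):
    """P(x1,...,xn) = product over i of P(xi | parents(Xi))."""
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[q] for q in parents[var])
        p *= cpts[var][(value,) + parent_vals]
    return p

# Hypothetical two-node example, X -> Y (numbers are made up for illustration):
parents = {"X": (), "Y": ("X",)}
cpts = {
    "X": {("+x",): 0.3, ("-x",): 0.7},                 # P(X)
    "Y": {("+y", "+x"): 0.9, ("-y", "+x"): 0.1,        # P(Y | X)
          ("+y", "-x"): 0.2, ("-y", "-x"): 0.8},
}
print(joint_probability({"X": "+x", "Y": "-y"}, parents, cpts))  # 0.3 * 0.1 = 0.03
```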
Probability Recap
Conditional probability: P(x | y) = P(x, y) / P(y)
Product rule: P(x, y) = P(x | y) P(y)
Chain rule: P(x1, x2, ..., xn) = P(x1) P(x2 | x1) P(x3 | x1, x2) ... = Π_i P(xi | x1, ..., xi-1)
X, Y independent if and only if: ∀x, y: P(x, y) = P(x) P(y)
X and Y are conditionally independent given Z if and only if: ∀x, y, z: P(x, y | z) = P(x | z) P(y | z)
Example: Alarm Network
(Network structure: B → A, E → A, A → J, A → M)

P(B)
+b 0.001
-b 0.999

P(E)
+e 0.002
-e 0.998

P(A|B,E)
+b +e +a 0.95
+b +e -a 0.05
+b -e +a 0.94
+b -e -a 0.06
-b +e +a 0.29
-b +e -a 0.71
-b -e +a 0.001
-b -e -a 0.999

P(J|A)
+a +j 0.9
+a -j 0.1
-a +j 0.05
-a -j 0.95

P(M|A)
+a +m 0.7
+a -m 0.3
-a +m 0.01
-a -m 0.99

Size of a Bayes’ Net
How big is a joint distribution over N Boolean variables? 2^N
How big is an N-node net if nodes have up to k parents? O(N * 2^(k+1))
Both give you the power to calculate P(X1, X2, ..., XN)
BNs: Huge space savings!
Also easier to elicit local CPTs
Also faster to answer queries (coming)
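As a worked instance (added here, computed directly from the CPTs listed above rather than taken from a slide):

```latex
% Joint probability of one full assignment, read off the CPTs above:
\[
\begin{aligned}
P(+b, -e, +a, +j, +m)
  &= P(+b)\,P(-e)\,P(+a \mid +b, -e)\,P(+j \mid +a)\,P(+m \mid +a) \\
  &= 0.001 \times 0.998 \times 0.94 \times 0.9 \times 0.7 \approx 5.9 \times 10^{-4}
\end{aligned}
\]
% Size comparison: the full joint over these 5 Boolean variables has 2^5 = 32
% entries, while the five CPTs above have only 2 + 2 + 8 + 4 + 4 = 20 entries.
```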
Bayes’ Nets
Representation
Conditional Independences
Probabilistic Inference
Learning Bayes’ Nets from Data
Example: Alarm Network
(Same network and CPTs as shown above, including the full P(A|B,E) table.)
DEMO
Conditional Independence
X and Y are independent if: ∀x, y: P(x, y) = P(x) P(y); written X ⊥ Y
X and Y are conditionally independent given Z if: ∀x, y, z: P(x, y | z) = P(x | z) P(y | z); written X ⊥ Y | Z
(Conditional) independence is a property of a distribution

Bayes Nets: Assumptions
Assumptions we are required to make to define the Bayes net when given the graph: P(xi | x1, ..., xi-1) = P(xi | parents(Xi))
Beyond the above “chain rule → Bayes net” conditional independence assumptions, there are often additional conditional independences
They can be read off the graph
Important for modeling: understand assumptions made when choosing a Bayes net graph

Example
(Graph: X → Y → Z → W)
Conditional independence assumptions directly from simplifications in chain rule:
Additional implied conditional independence assumptions? (See the worked example below.)

Independence in a BN
Important question about a BN: Are two nodes independent given certain evidence?
If yes, can prove using algebra (tedious in general)
If no, can prove with a counter example
Example: X → Y → Z
Question: are X and Z necessarily independent?
Answer: no. Example: low pressure causes rain, which causes traffic. X can influence Z, Z can influence X (via Y)
Addendum: they could be independent: how?
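A worked version of the X → Y → Z → W example above (the algebra is reconstructed; it was not preserved in the extracted slides):

```latex
% Chain rule (true for every distribution), applied to X -> Y -> Z -> W:
\[ P(x, y, z, w) = P(x)\,P(y \mid x)\,P(z \mid x, y)\,P(w \mid x, y, z) \]
% Bayes' net factorization (what this graph asserts):
\[ P(x, y, z, w) = P(x)\,P(y \mid x)\,P(z \mid y)\,P(w \mid z) \]
% So the assumptions made when adopting this graph are exactly:
%   P(z | x, y) = P(z | y),    i.e.  Z is independent of X given Y
%   P(w | x, y, z) = P(w | z), i.e.  W is independent of {X, Y} given Z
% Additional implied independences (e.g. W independent of X given Y) can then
% be read off the graph with d-separation.
```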
D-separation: Outline
Study independence properties for triples
Analyze complex cases in terms of member triples
D-separation: a condition / algorithm for answering such queries

Causal Chains
This configuration is a “causal chain”: X → Y → Z
X: Low pressure
Y: Rain
Z: Traffic
Guaranteed X independent of Z? No!
One example set of CPTs for which X is not independent of Z is sufficient to show this independence is not guaranteed.
Example: Low pressure causes rain causes traffic; high pressure causes no rain causes no traffic
In numbers: P(+y | +x) = 1, P(-y | -x) = 1, P(+z | +y) = 1, P(-z | -y) = 1

Causal Chains
This configuration is a “causal chain”: X → Y → Z
X: Low pressure
Y: Rain
Z: Traffic
Guaranteed X independent of Z given Y? Yes!
Evidence along the chain “blocks” the influence
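The “blocks” claim can be checked algebraically. This short derivation is added for completeness and is not itself part of the extracted slide text:

```latex
% Why X is guaranteed independent of Z given Y in the chain X -> Y -> Z:
\[
P(z \mid x, y) = \frac{P(x, y, z)}{P(x, y)}
               = \frac{P(x)\,P(y \mid x)\,P(z \mid y)}{P(x)\,P(y \mid x)}
               = P(z \mid y)
\]
% Once y is known, the distribution of Z does not depend on x.
```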
Common Cause
This configuration is a “common cause”: X ← Y → Z
Y: Project due
X: Forums busy
Z: Lab full
Guaranteed X independent of Z? No!
One example set of CPTs for which X is not independent of Z is sufficient to show this independence is not guaranteed.
Example: Project due causes both forums busy and lab full
In numbers: P(+x | +y) = 1, P(-x | -y) = 1, P(+z | +y) = 1, P(-z | -y) = 1

Common Cause
This configuration is a “common cause”: X ← Y → Z
Y: Project due
X: Forums busy
Z: Lab full
Guaranteed X and Z independent given Y? Yes!
Observing the cause blocks influence between effects.

Common Effect
Last configuration: two causes of one effect (v-structures): X → Z ← Y
X: Raining
Y: Ballgame
Z: Traffic
Are X and Y independent?
Yes: the ballgame and the rain cause traffic, but they are not correlated
Still need to prove they must be (try it! See the sketch below.)
Are X and Y independent given Z?
No: seeing traffic puts the rain and the ballgame in competition as explanation.
This is backwards from the other cases
Observing an effect activates influence between possible causes.
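One way to carry out the “try it!” proof above (added here as a sketch; it is not part of the extracted slide text):

```latex
% V-structure X -> Z <- Y with no evidence: the joint is P(x,y,z) = P(x) P(y) P(z | x, y).
\[
P(x, y) = \sum_z P(x)\,P(y)\,P(z \mid x, y)
        = P(x)\,P(y) \sum_z P(z \mid x, y)
        = P(x)\,P(y)
\]
% So X and Y are independent.  Once Z (or a descendant of Z) is observed, this
% cancellation is no longer available, which is why the causes can become coupled.
```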
The General Case
General question: in a given BN, are two variables independent (given evidence)?
Solution: analyze the graph
Any complex example can be broken into repetitions of the three canonical cases

Reachability
Recipe: shade evidence nodes, look for paths in the resulting graph
Attempt 1: if two nodes are connected by an undirected path not blocked by a shaded node, they are conditionally independent
Almost works, but not quite
Where does it break? Answer: the v-structure at T doesn’t count as a link in a path unless “active”
(Figure: example graph over nodes R, T, B, D, L, with a v-structure at T.)

Active / Inactive Paths
Question: Are X and Y conditionally independent given evidence variables {Z}?
Yes, if X and Y are “d-separated” by Z
Consider all (undirected) paths from X to Y
No active paths = independence!
A path is active if each triple is active:
Causal chain A → B → C where B is unobserved (either direction)
Common cause A ← B → C where B is unobserved
Common effect (aka v-structure) A → B ← C where B or one of its descendants is observed
All it takes to block a path is a single inactive segment

D-Separation
Query: Xi ⊥ Xj | {Xk1, ..., Xkn} ?
Check all (undirected!) paths between Xi and Xj
If one or more active, then independence not guaranteed
Otherwise (i.e. if all paths are inactive), then independence is guaranteed
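Here is a runnable Python sketch of the procedure described above (my own illustration, not course code). It enumerates simple undirected paths and checks every consecutive triple, and uses the alarm network from earlier as a demo:

```python
# A graph maps each node to the set of its parents.

def descendants(graph, node):
    """All nodes reachable from `node` by following child edges."""
    children = {n: set() for n in graph}
    for n, ps in graph.items():
        for p in ps:
            children[p].add(n)
    found, stack = set(), [node]
    while stack:
        for c in children[stack.pop()]:
            if c not in found:
                found.add(c)
                stack.append(c)
    return found

def triple_active(graph, a, b, c, evidence):
    """Is the triple a - b - c on an undirected path active given the evidence?"""
    if a in graph[b] and c in graph[b]:          # a -> b <- c : common effect
        return b in evidence or bool(descendants(graph, b) & evidence)
    return b not in evidence                     # chain or common cause

def path_active(graph, path, evidence):
    """A path is active iff every consecutive triple along it is active."""
    return all(triple_active(graph, a, b, c, evidence)
               for a, b, c in zip(path, path[1:], path[2:]))

def d_separated(graph, x, y, evidence=frozenset()):
    """True iff X and Y are guaranteed independent given the evidence set."""
    neighbors = {n: set() for n in graph}
    for n, ps in graph.items():
        for p in ps:
            neighbors[n].add(p)
            neighbors[p].add(n)
    evidence = set(evidence)

    def paths(path):                             # all simple undirected paths to y
        if path[-1] == y:
            yield path
            return
        for nxt in neighbors[path[-1]]:
            if nxt not in path:
                yield from paths(path + [nxt])

    return not any(path_active(graph, p, evidence) for p in paths([x]))

# The alarm network from the slides (node -> parents):
alarm = {"B": set(), "E": set(), "A": {"B", "E"}, "J": {"A"}, "M": {"A"}}
print(d_separated(alarm, "B", "E"))          # True: B and E guaranteed independent
print(d_separated(alarm, "B", "E", {"A"}))   # False: observing A activates the v-structure
print(d_separated(alarm, "J", "M", {"A"}))   # True: observing the common cause A blocks J and M
```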
Example
(Figure: a small graph over nodes R, B, T, T'; several independence queries, answered Yes or No by d-separation.)

Example
(Figure: a larger graph over nodes L, R, B, D, T; again, queries are answered by checking for active paths.)

Example
Variables: R: Raining, T: Traffic, D: Roof drips, S: I’m sad
Questions: which (conditional) independences among these variables does the graph guarantee?
Structure Implications
Given a Bayes net structure, can run the d-separation algorithm to build a complete list of conditional independences that are necessarily true, of the form Xi ⊥ Xj | {Xk1, ..., Xkn}
This list determines the set of probability distributions that can be represented
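A small illustration of how such a list could be generated (assuming the d_separated helper from the earlier sketch is in scope; the enumeration over evidence sets is exponential and only sensible for tiny networks):

```python
from itertools import combinations

def all_guaranteed_independences(graph):
    """Yield (X, Y, evidence) triples such that X is guaranteed independent of Y
    given the evidence set, for every pair of nodes and every candidate
    evidence set.  Uses the d_separated() helper from the earlier sketch."""
    nodes = sorted(graph)
    for x, y in combinations(nodes, 2):
        others = [n for n in nodes if n not in (x, y)]
        for k in range(len(others) + 1):
            for ev in combinations(others, k):
                if d_separated(graph, x, y, set(ev)):
                    yield x, y, set(ev)

# For the alarm network defined earlier:
# for x, y, ev in all_guaranteed_independences(alarm):
#     print(x, "independent of", y, "given", ev)
```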
Computing All Independences
(Figure: the possible three-node graph structures over X, Y, and Z.)

Topology Limits Distributions
Given some graph topology G, only certain joint distributions can be encoded
The graph structure guarantees certain (conditional) independences
(There might be more independence)
Adding arcs increases the set of distributions, but has several costs
Full conditioning can encode any distribution
(Figure: example three-node graphs over X, Y, and Z.)

Bayes Nets Representation Summary
Bayes nets compactly encode joint distributions
Guaranteed independencies of distributions can be deduced from BN graph structure
D-separation gives precise conditional independence guarantees from graph alone
A Bayes’ net’s joint distribution may have further (conditional) independence that is not detectable until you inspect its specific distribution

Bayes’ Nets
Representation
Conditional Independences
Probabilistic Inference
Enumeration (exact, exponential complexity)
Variable elimination (exact, worst-case exponential complexity, often better)
Probabilistic inference is NP-complete
Sampling (approximate)
Learning Bayes’ Nets from Data
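As a pointer to the “Enumeration (exact, exponential complexity)” bullet above, here is a self-contained inference-by-enumeration sketch (my own illustration, not course code), using the alarm network CPTs from the slides to compute P(B | +j, +m):

```python
import itertools

# Alarm network CPTs from the slides, stored as P(value | parent values).
parents = {"B": (), "E": (), "A": ("B", "E"), "J": ("A",), "M": ("A",)}
cpts = {
    "B": {("+b",): 0.001, ("-b",): 0.999},
    "E": {("+e",): 0.002, ("-e",): 0.998},
    "A": {("+a", "+b", "+e"): 0.95, ("-a", "+b", "+e"): 0.05,
          ("+a", "+b", "-e"): 0.94, ("-a", "+b", "-e"): 0.06,
          ("+a", "-b", "+e"): 0.29, ("-a", "-b", "+e"): 0.71,
          ("+a", "-b", "-e"): 0.001, ("-a", "-b", "-e"): 0.999},
    "J": {("+j", "+a"): 0.9, ("-j", "+a"): 0.1,
          ("+j", "-a"): 0.05, ("-j", "-a"): 0.95},
    "M": {("+m", "+a"): 0.7, ("-m", "+a"): 0.3,
          ("+m", "-a"): 0.01, ("-m", "-a"): 0.99},
}
domains = {"B": ["+b", "-b"], "E": ["+e", "-e"], "A": ["+a", "-a"],
           "J": ["+j", "-j"], "M": ["+m", "-m"]}

def joint(assignment):
    """Product of the relevant CPT entries for one full assignment."""
    p = 1.0
    for var, val in assignment.items():
        p *= cpts[var][(val,) + tuple(assignment[q] for q in parents[var])]
    return p

def query(var, evidence):
    """P(var | evidence) by enumeration: sum the joint over all hidden variables."""
    hidden = [v for v in domains if v != var and v not in evidence]
    dist = {}
    for val in domains[var]:
        total = 0.0
        for combo in itertools.product(*(domains[h] for h in hidden)):
            a = dict(evidence, **{var: val}, **dict(zip(hidden, combo)))
            total += joint(a)
        dist[val] = total
    z = sum(dist.values())
    return {val: p / z for val, p in dist.items()}

print(query("B", {"J": "+j", "M": "+m"}))   # roughly {'+b': 0.28, '-b': 0.72}
```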