CS 188: Artificial Intelligence
Bayes’ Nets: Sampling
Instructor: Nick Hay, University of California, Berkeley (slides by Dan Klein and Pieter Abbeel)

Bayes' Net Representation
§ A directed, acyclic graph, one node per random variable
§ A conditional probability table (CPT) for each node
  § A collection of distributions over X, one for each combination of parents' values
§ Bayes' nets implicitly encode joint distributions
  § As a product of local conditional distributions
  § To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x1, x2, …, xn) = ∏_i P(xi | parents(Xi))
Variable Elimination
§ Interleave joining and marginalizing
§ d^k entries computed for a factor over k variables with domain sizes d
§ Ordering of elimination of hidden variables can affect size of factors generated
§ Worst case: running time exponential in the size of the Bayes' net

Approximate Inference: Sampling
Sampling
§ Sampling is a lot like repeated simulation
  § Predicting the weather, basketball games, …
§ Basic idea
  § Draw N samples from a sampling distribution S
  § Compute an approximate posterior probability
  § Show this converges to the true probability P
§ Why sample?
  § Learning: get samples from a distribution you don't know
  § Inference: getting a sample is faster than computing the right answer (e.g. with variable elimination)
Sampling
§ Sampling from a given distribution
  § Step 1: Get a sample u from the uniform distribution over [0, 1)
    § E.g. random() in Python
  § Step 2: Convert this sample u into an outcome for the given distribution by associating each outcome with a sub-interval of [0, 1), with sub-interval size equal to the probability of the outcome
§ Example

    C       P(C)
    red     0.6
    green   0.1
    blue    0.3

§ If random() returns u = 0.83, then our sample is C = blue
§ E.g., after sampling 8 times:
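A minimal Python sketch of this sub-interval method; the helper name sample_from_distribution and the 8-sample demo are illustrative, not from the slides:

```python
import random

def sample_from_distribution(distribution):
    """Sample an outcome by mapping a uniform draw in [0, 1) onto
    sub-intervals whose widths equal the outcome probabilities."""
    u = random.random()          # Step 1: uniform sample in [0, 1)
    cumulative = 0.0
    for outcome, probability in distribution:
        cumulative += probability
        if u < cumulative:       # Step 2: u falls in this outcome's sub-interval
            return outcome
    return distribution[-1][0]   # guard against floating-point round-off

# The example distribution over C from the slide:
p_c = [("red", 0.6), ("green", 0.1), ("blue", 0.3)]
print([sample_from_distribution(p_c) for _ in range(8)])  # e.g. after sampling 8 times
```

With this layout, u = 0.83 lands past the red (0.0-0.6) and green (0.6-0.7) sub-intervals, so the sample is blue, matching the slide.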
Sampling in Bayes’ Nets
§ Prior Sampling
§ Rejection Sampling
§ Likelihood Weighting
§ Gibbs Sampling

Prior Sampling
Prior Sampling
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
§ Return (x1, x2, …, xn)

Example Bayes' net (Cloudy → Sprinkler, Cloudy → Rain; Sprinkler, Rain → WetGrass), with CPTs:

    P(C):        +c 0.5   -c 0.5
    P(S | C):    +c: +s 0.1, -s 0.9      -c: +s 0.5, -s 0.5
    P(R | C):    +c: +r 0.8, -r 0.2      -c: +r 0.2, -r 0.8
    P(W | S,R):  +s,+r: +w 0.99, -w 0.01   +s,-r: +w 0.90, -w 0.10
                 -s,+r: +w 0.90, -w 0.10   -s,-r: +w 0.01, -w 0.99

Samples: +c, -s, +r, +w
         -c, +s, -r, +w
         …
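A minimal Python sketch of prior sampling on this network, using the CPTs above; the helper names sample_bernoulli and prior_sample are illustrative:

```python
import random

def sample_bernoulli(p_true):
    """Return True with probability p_true."""
    return random.random() < p_true

def prior_sample():
    """Draw one sample (c, s, r, w) in topological order,
    sampling each variable from P(X | Parents(X))."""
    c = sample_bernoulli(0.5)                 # P(+c) = 0.5
    s = sample_bernoulli(0.1 if c else 0.5)   # P(+s | c)
    r = sample_bernoulli(0.8 if c else 0.2)   # P(+r | c)
    if s and r:   p_w = 0.99
    elif s or r:  p_w = 0.90
    else:         p_w = 0.01
    w = sample_bernoulli(p_w)                 # P(+w | s, r)
    return c, s, r, w

samples = [prior_sample() for _ in range(1000)]
```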
Prior Sampling
§ This process generates samples with probability
  S_PS(x1, …, xn) = ∏_i P(xi | Parents(Xi)) = P(x1, …, xn)
  …i.e. the BN's joint probability
§ Let the number of samples of an event (x1, …, xn) be N_PS(x1, …, xn)
§ Then, as N → ∞, the estimate N_PS(x1, …, xn) / N → S_PS(x1, …, xn) = P(x1, …, xn)
§ I.e., the sampling procedure is consistent

Example
§ We'll get a bunch of samples from the BN:
  +c, -s, +r, +w
  +c, +s, +r, +w
  -c, +s, +r, -w
  +c, -s, +r, +w
  -c, -s, -r, +w
[Bayes' net: C → S, C → R; S, R → W]
§ If we want to know P(W)
  § We have counts <+w: 4, -w: 1>
  § Normalize to get P(W) ≈ <+w: 0.8, -w: 0.2>
  § This will get closer to the true distribution with more samples
  § Can estimate anything else, too
  § What about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)?
  § Fast: can use fewer samples if less time (what's the drawback?)
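Continuing the prior_sample sketch above, a small example of tallying and normalizing the W counts; the (c, s, r, w) tuple ordering is assumed from that sketch:

```python
from collections import Counter

# Estimate P(W) from the prior samples drawn above (sample[3] is W):
# count each outcome, then normalize so the estimates sum to 1.
w_counts = Counter(sample[3] for sample in samples)
total = sum(w_counts.values())
p_w_estimate = {value: count / total for value, count in w_counts.items()}
print(p_w_estimate)   # approaches the true P(W) as the number of samples grows
```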
Rejection Sampling
Rejection Sampling
§ Let's say we want P(C)
  § No point keeping all samples around
  § Just tally counts of C as we go
§ Let's say we want P(C | +s)
  § Same thing: tally C outcomes, but ignore (reject) samples which don't have S = +s
  § This is called rejection sampling
  § It is also consistent for conditional probabilities (i.e., correct in the limit)
Rejection Sampling
[Bayes' net: C → S, C → R; S, R → W]
Samples: +c, -s, +r, +w
         +c, +s, +r, +w
         -c, +s, +r, -w
         +c, -s, +r, +w
         -c, -s, -r, +w
§ IN: evidence instantiation
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
  § If xi not consistent with evidence
    § Reject: return, and no sample is generated in this cycle
§ Return (x1, x2, …, xn)

Likelihood Weighting
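A sketch of rejection sampling on the same network, reusing the sample_bernoulli helper from the prior-sampling sketch above; the early-rejection structure follows the pseudocode, and max_tries is an added safeguard, not part of the slides:

```python
def rejection_sample(evidence, max_tries=10_000):
    """Rejection sampling: sample variables in topological order and
    reject as soon as a sampled value contradicts the evidence.
    `evidence` maps a variable name in {'C','S','R','W'} to True/False."""
    for _ in range(max_tries):
        values = {}
        values['C'] = sample_bernoulli(0.5)
        if 'C' in evidence and values['C'] != evidence['C']:
            continue                                  # reject, start a new sample
        values['S'] = sample_bernoulli(0.1 if values['C'] else 0.5)
        if 'S' in evidence and values['S'] != evidence['S']:
            continue
        values['R'] = sample_bernoulli(0.8 if values['C'] else 0.2)
        if 'R' in evidence and values['R'] != evidence['R']:
            continue
        s, r = values['S'], values['R']
        p_w = 0.99 if (s and r) else 0.90 if (s or r) else 0.01
        values['W'] = sample_bernoulli(p_w)
        if 'W' in evidence and values['W'] != evidence['W']:
            continue
        return values
    return None  # evidence too unlikely within max_tries

# Tally C over accepted samples to estimate P(C | +s):
accepted = [rejection_sample({'S': True}) for _ in range(2000)]
p_c_estimate = sum(v['C'] for v in accepted if v) / sum(1 for v in accepted if v)
```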
Likelihood Weighting
§ Problem with rejection sampling:
  § If evidence is unlikely, rejects lots of samples
  § Evidence not exploited as you sample
  § Consider P(Shape | blue)
    [Bayes' net: Shape → Color]
    Samples: pyramid, green; pyramid, red; sphere, blue; cube, red; sphere, green

Likelihood Weighting
§ Idea: fix evidence variables and sample the rest
  § Problem: sample distribution not consistent!
  § Solution: weight by probability of evidence given parents
    Samples: pyramid, blue; pyramid, blue; sphere, blue; cube, blue; sphere, blue
[Same Bayes' net and CPTs as in the Prior Sampling example above: Cloudy → Sprinkler, Rain; Sprinkler, Rain → WetGrass]

Samples: +c, +s, +r, +w
         …
Likelihood Weighting
§ IN: evidence instantiation
§ w = 1.0
§ for i = 1, 2, …, n
  § if Xi is an evidence variable
    § Xi = observation xi for Xi
    § Set w = w * P(xi | Parents(Xi))
  § else
    § Sample xi from P(Xi | Parents(Xi))
§ return (x1, x2, …, xn), w

Likelihood Weighting
[Bayes' net: Cloudy (C) → S, R; S, R → W]
§ Sampling distribution if z sampled and e fixed evidence:
  S_WS(z, e) = ∏_i P(zi | Parents(Zi))
§ Now, samples have weights:
  w(z, e) = ∏_i P(ei | Parents(Ei))
§ Together, weighted sampling distribution is consistent:
  S_WS(z, e) · w(z, e) = ∏_i P(zi | Parents(Zi)) · ∏_i P(ei | Parents(Ei)) = P(z, e)
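A sketch of likelihood weighting on the same network, again reusing sample_bernoulli from the prior-sampling sketch; the handle helper is illustrative:

```python
def likelihood_weighted_sample(evidence):
    """Likelihood weighting: evidence variables are fixed to their observed
    values and contribute a weight factor; all other variables are sampled.
    `evidence` maps a variable name in {'C','S','R','W'} to True/False."""
    w = 1.0
    values = {}

    def handle(name, p_true):
        nonlocal w
        if name in evidence:
            values[name] = evidence[name]
            w *= p_true if evidence[name] else (1.0 - p_true)   # weight by P(e | parents)
        else:
            values[name] = sample_bernoulli(p_true)             # sample from P(X | parents)

    handle('C', 0.5)
    handle('S', 0.1 if values['C'] else 0.5)
    handle('R', 0.8 if values['C'] else 0.2)
    s, r = values['S'], values['R']
    handle('W', 0.99 if (s and r) else 0.90 if (s or r) else 0.01)
    return values, w

# Estimate P(+c | +s, +w) from weighted samples:
weighted = [likelihood_weighted_sample({'S': True, 'W': True}) for _ in range(2000)]
numerator = sum(w for v, w in weighted if v['C'])
denominator = sum(w for v, w in weighted)
print(numerator / denominator)
```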
Likelihood Weighting
§ Likelihood weighting is good
  § We have taken evidence into account as we generate the sample
  § E.g. here, W's value will get picked based on the evidence values of S, R
  § More of our samples will reflect the state of the world suggested by the evidence
§ Likelihood weighting doesn't solve all our problems
  § Evidence influences the choice of downstream variables, but not upstream ones (C isn't more likely to get a value matching the evidence)
§ We would like to consider evidence when we sample every variable → Gibbs sampling

Gibbs Sampling
Gibbs Sampling
§ Procedure: keep track of a full instantiation x1, x2, …, xn. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep evidence fixed. Keep repeating this for a long time.
§ Property: in the limit of repeating this infinitely many times the resulting sample comes from the correct distribution
Gibbs Sampling Example: P(S | +r)
§ Step 1: Fix evidence
  § R = +r
§ Step 2: Initialize other variables
  § Randomly
§ Step 3: Repeat
  § Choose a non-evidence variable X
  § Resample X from P(X | all other variables)
[Figure: sequence of network states (C, S, R = +r, W) as one variable is resampled at a time]
§ Rationale: both upstream and downstream variables condition on evidence.
§ In contrast: likelihood weighting only conditions on upstream evidence, and hence weights obtained in likelihood weighting can sometimes be very small. Sum of weights over all samples is indicative of how many "effective" samples were obtained, so want high weight.
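A minimal Python sketch of this Gibbs procedure for the query P(S | +r), assuming the CPTs given earlier; resampling from P(X | all other variables) is done here via a ratio of full-joint probabilities, which is equivalent to (but less efficient than) the shortcut discussed next:

```python
import random

def joint_probability(c, s, r, w):
    """Full joint from the CPTs above: P(c) * P(s|c) * P(r|c) * P(w|s,r)."""
    p = 0.5                                               # P(+c) = P(-c) = 0.5
    p_s = 0.1 if c else 0.5
    p *= p_s if s else (1.0 - p_s)
    p_r = 0.8 if c else 0.2
    p *= p_r if r else (1.0 - p_r)
    p_w = 0.99 if (s and r) else 0.90 if (s or r) else 0.01
    p *= p_w if w else (1.0 - p_w)
    return p

def gibbs_estimate(num_steps=20000, burn_in=1000):
    """Gibbs sampling for P(S | +r): fix R = +r, initialize the other
    variables randomly, then repeatedly resample one non-evidence variable
    conditioned on the current values of all the others."""
    state = {'c': random.random() < 0.5, 's': random.random() < 0.5,
             'r': True,                                   # Step 1: fix evidence
             'w': random.random() < 0.5}                  # Step 2: initialize randomly
    s_count = 0
    for step in range(num_steps):                         # Step 3: repeat
        var = random.choice(['c', 's', 'w'])              # a non-evidence variable
        p_true = joint_probability(**{**state, var: True})
        p_false = joint_probability(**{**state, var: False})
        state[var] = random.random() < p_true / (p_true + p_false)
        if step >= burn_in and state['s']:
            s_count += 1
    return s_count / (num_steps - burn_in)                # approximate P(+s | +r)
```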
Efficient Resampling of One Variable
§ Sample from P(S | +c, +r, -w)
  P(S | +c, +r, -w)
    = P(S, +c, +r, -w) / Σ_s P(s, +c, +r, -w)
    = P(+c) P(S | +c) P(+r | +c) P(-w | S, +r) / Σ_s P(+c) P(s | +c) P(+r | +c) P(-w | s, +r)
    = P(S | +c) P(-w | S, +r) / Σ_s P(s | +c) P(-w | s, +r)
§ Many things cancel out – only CPTs with S remain!
§ More generally: only CPTs that have the resampled variable need to be considered, and joined together

Bayes' Net Sampling Summary
§ Prior Sampling: P(Q)
§ Rejection Sampling: P(Q | e)
§ Likelihood Weighting: P(Q | e)
§ Gibbs Sampling: P(Q | e)
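A small sketch of this shortcut: resampling S using only the two CPTs that mention S (the helper name resample_s is illustrative):

```python
import random

def resample_s(c, r, w):
    """Exact resampling of S given the other variables: only the CPTs
    that mention S are needed, i.e. P(S | C) and P(W | S, R)."""
    def p_w_given(s):
        p_w = 0.99 if (s and r) else 0.90 if (s or r) else 0.01
        return p_w if w else (1.0 - p_w)
    p_s = 0.1 if c else 0.5
    weight_true = p_s * p_w_given(True)              # P(+s | c) * P(w | +s, r)
    weight_false = (1.0 - p_s) * p_w_given(False)    # P(-s | c) * P(w | -s, r)
    return random.random() < weight_true / (weight_true + weight_false)

# E.g. resample S from P(S | +c, +r, -w):
new_s = resample_s(c=True, r=True, w=False)
```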
Further Reading on Gibbs Sampling*
§ Gibbs sampling produces a sample from the query distribution P(Q | e) in the limit of re-sampling infinitely often
§ Gibbs sampling is a special case of more general methods called Markov chain Monte Carlo (MCMC) methods
§ Metropolis-Hastings is one of the more famous MCMC methods (in fact, Gibbs sampling is a special case of Metropolis-Hastings)
§ You may read about Monte Carlo methods – they're just sampling

How About Particle Filtering?
[Figure: dynamic Bayes' net X1 → X2, X2 → E2; the Elapse and Weight steps (= likelihood weighting) are followed by a Resample step]
Particle Filtering
§ Particle filtering operates on an ensemble of samples
  § Performs likelihood weighting for each individual sample to elapse time and incorporate evidence
  § Resamples from the weighted ensemble of samples to focus computation for the next time step where most of the probability mass is estimated to be

Particles:                       (3,3) (2,3) (3,3) (3,2) (3,3) (3,2) (1,2) (3,3) (3,3) (2,3)
Particles (after elapsing time): (3,2) (2,3) (3,2) (3,1) (3,3) (3,2) (1,3) (2,3) (3,2) (2,2)
Particles (after weighting):     (3,2) w=.9  (2,3) w=.2  (3,2) w=.9  (3,1) w=.4  (3,3) w=.4  (3,2) w=.9  (1,3) w=.1  (2,3) w=.2  (3,2) w=.9  (2,2) w=.4
(New) Particles (after resampling): (3,2) (2,2) (3,2) (2,3) (3,3) (3,2) (1,3) (2,3) (3,2) (3,2)
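A generic Python sketch of one particle-filtering step mirroring the three stages above; transition_sample and observation_likelihood are placeholder model functions, not part of the slides:

```python
import random

def particle_filter_step(particles, observation, transition_sample, observation_likelihood):
    """One particle-filtering update: elapse time, weight by evidence, resample.
    `transition_sample(x)` draws a successor state and `observation_likelihood(e, x)`
    scores the evidence; both stand in for a concrete model."""
    # Elapse time: move each particle through the transition model
    elapsed = [transition_sample(x) for x in particles]
    # Weight: score each particle by the likelihood of the observation
    weights = [observation_likelihood(observation, x) for x in elapsed]
    # Resample: draw a new ensemble in proportion to the weights
    return random.choices(elapsed, weights=weights, k=len(particles))
```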