CS 188: Artificial Intelligence
Bayes’ Nets: Sampling

Instructors: Pieter Abbeel --- University of California, Berkeley
[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Announcements
§ Homework 6 is out
  § Due today / Tuesday 3/17 at 11:59pm
§ Project 4: Bayes’ Nets is out
  § Due Friday 3/20 at 5pm
Bayes’ Net Representation
§ A directed, acyclic graph, one node per random variable
§ A conditional probability table (CPT) for each node
  § A collection of distributions over X, one for each combination of parents’ values
§ Bayes’ nets implicitly encode joint distributions
  § As a product of local conditional distributions
  § To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together:
      P(x1, x2, …, xn) = ∏_i P(xi | Parents(Xi))
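For instance, using the Cloudy / Sprinkler / Rain / WetGrass CPTs that appear later in these slides:

    P(+c, -s, +r, +w) = P(+c) · P(-s | +c) · P(+r | +c) · P(+w | -s, +r) = 0.5 · 0.9 · 0.8 · 0.90 = 0.324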
Variable Elimination
§ Interleave joining and marginalizing
§ d^k entries computed for a factor over k variables with domain sizes d
§ Ordering of elimination of hidden variables can affect size of factors generated
Worst Case Complexity?
§ CSP: [figure: a 3-SAT formula encoded as a Bayes’ net, with a query node z representing the conjunction of the clauses]
§ If we can answer whether P(z) is equal to zero or not, we have answered whether the 3-SAT problem has a solution.
§ Hence inference in Bayes’ nets is NP-hard. No known efficient probabilistic inference in general.
§ Worst case: running time exponential in the size of the Bayes’ net
Approximate Inference: Sampling
Sampling
§ Sampling is a lot like repeated simulation
  § Predicting the weather, basketball games, …
§ Why sample?
  § Learning: get samples from a distribution you don’t know
  § Inference: getting a sample is faster than computing the right answer (e.g. with variable elimination)
§ Basic idea
  § Draw N samples from a sampling distribution S
  § Compute an approximate posterior probability
  § Show this converges to the true probability P
Sampling in Bayes’ Nets
§ Prior Sampling
§ Rejection Sampling
§ Likelihood Weighting
§ Gibbs Sampling
Sampling from Given Distribution
§ Step 1: Get sample u from uniform distribution over [0, 1)
  § E.g. random() in python
§ Step 2: Convert this sample u into an outcome for the given distribution by having each outcome associated with a sub-interval of [0, 1) with sub-interval size equal to probability of the outcome
§ Example
  C      P(C)
  red    0.6
  green  0.1
  blue   0.3
§ If random() returns u = 0.83, then our sample is C = blue
§ E.g., after sampling 8 times:
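A minimal Python sketch of this two-step procedure (the sample_discrete helper name is ours; the outcomes and probabilities are the ones in the table above):

    import random

    def sample_discrete(dist):
        """Draw one outcome from {outcome: probability} using a single uniform sample."""
        u = random.random()                 # Step 1: u ~ Uniform[0, 1)
        cumulative = 0.0
        for outcome, p in dist.items():     # Step 2: find the sub-interval containing u
            cumulative += p
            if u < cumulative:
                return outcome
        return outcome                      # guard against floating-point round-off

    P_C = {'red': 0.6, 'green': 0.1, 'blue': 0.3}
    print([sample_discrete(P_C) for _ in range(8)])   # e.g., 8 samples

With the outcomes ordered red, green, blue, a draw of u = 0.83 falls in blue’s sub-interval [0.7, 1.0), matching the example above.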
Prior Sampling
Prior Sampling

[Bayes’ net: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass]

  P(C):        +c 0.5    -c 0.5
  P(S | C):    +c: +s 0.1, -s 0.9    -c: +s 0.5, -s 0.5
  P(R | C):    +c: +r 0.8, -r 0.2    -c: +r 0.2, -r 0.8
  P(W | S,R):  +s,+r: +w 0.99, -w 0.01    +s,-r: +w 0.90, -w 0.10
               -s,+r: +w 0.90, -w 0.10    -s,-r: +w 0.01, -w 0.99

Samples:  +c, -s, +r, +w    -c, +s, -r, +w    …

Prior Sampling
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
§ Return (x1, x2, …, xn)
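A minimal Python sketch of prior sampling for this network (the sample_bool and prior_sample helper names are ours; the numbers are the CPT values above, and the last line previews the count-and-normalize estimate used in the example that follows):

    import random

    def sample_bool(p_true):
        """Return '+' with probability p_true, else '-'."""
        return '+' if random.random() < p_true else '-'

    def prior_sample():
        """Sample (C, S, R, W) in topological order, each from P(Xi | Parents(Xi))."""
        c = sample_bool(0.5)                                  # P(+c) = 0.5
        s = sample_bool(0.1 if c == '+' else 0.5)             # P(+s | C)
        r = sample_bool(0.8 if c == '+' else 0.2)             # P(+r | C)
        p_w = {('+', '+'): 0.99, ('+', '-'): 0.90,
               ('-', '+'): 0.90, ('-', '-'): 0.01}[(s, r)]    # P(+w | S, R)
        return c, s, r, sample_bool(p_w)

    samples = [prior_sample() for _ in range(10000)]
    # Count-and-normalize estimate of P(+w):
    print(sum(1 for (_, _, _, w) in samples if w == '+') / len(samples))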
Prior Sampling
§ This process generates samples with probability
    S_PS(x1, …, xn) = ∏_i P(xi | Parents(Xi)) = P(x1, …, xn)
  …i.e. the BN’s joint probability
§ Let the number of samples of an event be N_PS(x1, …, xn)
§ Then
    lim_{N→∞} N_PS(x1, …, xn) / N = S_PS(x1, …, xn) = P(x1, …, xn)
§ I.e., the sampling procedure is consistent

Example
§ We’ll get a bunch of samples from the BN:
    +c, -s, +r, +w
    +c, +s, +r, +w
    -c, +s, +r, -w
    +c, -s, +r, +w
    -c, -s, -r, +w
§ If we want to know P(W)
  § We have counts <+w: 4, -w: 1>
  § Normalize to get P(W) = <+w: 0.8, -w: 0.2>
  § This will get closer to the true distribution with more samples
  § Can estimate anything else, too
  § What about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)?
  § Fast: can use fewer samples if less time (what’s the drawback?)

Rejection Sampling
Rejection Sampling
§ Let’s say we want P(C)
  § No point keeping all samples around
  § Just tally counts of C as we go
§ Let’s say we want P(C | +s)
  § Same thing: tally C outcomes, but ignore (reject) samples which don’t have S = +s
  § This is called rejection sampling
  § It is also consistent for conditional probabilities (i.e., correct in the limit)
Rejection Sampling
§ IN: evidence instantiation
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
  § If xi not consistent with evidence
    § Reject: Return, and no sample is generated in this cycle
§ Return (x1, x2, …, xn)
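A minimal sketch of rejection sampling for a query such as P(C | +s), reusing the prior_sample helper sketched above. Unlike the pseudocode, which rejects as soon as an inconsistent value is sampled, this version draws the full sample first and then checks the evidence:

    from collections import Counter

    def rejection_sample_C(evidence_s='+', n=10000):
        """Tally C outcomes, rejecting samples whose S value conflicts with the evidence."""
        counts = Counter()
        for _ in range(n):
            c, s, r, w = prior_sample()
            if s != evidence_s:
                continue                 # reject: not consistent with the evidence
            counts[c] += 1
        total = sum(counts.values())
        if total == 0:
            return {}                    # every sample was rejected (evidence too unlikely)
        return {value: count / total for value, count in counts.items()}

    print(rejection_sample_C('+'))       # estimate of P(C | +s)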
Likelihood Weighting
Likelihood Weighting
§ Problem with rejection sampling:
  § If evidence is unlikely, rejects lots of samples
  § Evidence not exploited as you sample
  § Consider P(Shape | blue)
    Samples:  pyramid, green    pyramid, red    sphere, blue    cube, red    sphere, green
§ Idea: fix evidence variables and sample the rest
  § Problem: sample distribution not consistent!
  § Solution: weight by probability of evidence given parents
    Samples:  pyramid, blue    pyramid, blue    sphere, blue    cube, blue    sphere, blue
Likelihood Weighting

[Same Cloudy / Sprinkler / Rain / WetGrass network and CPTs as above]

§ IN: evidence instantiation
§ w = 1.0
§ for i = 1, 2, …, n
  § if Xi is an evidence variable
    § Xi = observation xi for Xi
    § Set w = w * P(xi | Parents(Xi))
  § else
    § Sample xi from P(Xi | Parents(Xi))
§ return (x1, x2, …, xn), w

Samples:  +c, +s, +r, +w  …

Likelihood Weighting
§ Sampling distribution if z sampled and e fixed evidence
    S_WS(z, e) = ∏_i P(zi | Parents(Zi))
§ Now, samples have weights
    w(z, e) = ∏_i P(ei | Parents(Ei))
§ Together, weighted sampling distribution is consistent
    S_WS(z, e) · w(z, e) = ∏_i P(zi | Parents(Zi)) · ∏_i P(ei | Parents(Ei)) = P(z, e)

Likelihood Weighting
§ Likelihood weighting is good
  § We have taken evidence into account as we generate the sample
  § E.g. here, W’s value will get picked based on the evidence values of S, R
  § More of our samples will reflect the state of the world suggested by the evidence
§ Likelihood weighting doesn’t solve all our problems
  § Evidence influences the choice of downstream variables, but not upstream ones (C isn’t more likely to get a value matching the evidence)
§ We would like to consider evidence when we sample every variable → Gibbs sampling
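A minimal sketch of likelihood weighting on the same network, assuming for illustration that the evidence is S = +s and W = +w (consistent with the sample shown above), and reusing the sample_bool helper from the prior-sampling sketch:

    def weighted_sample():
        """One likelihood-weighted sample with evidence S = +s, W = +w held fixed."""
        weight = 1.0
        c = sample_bool(0.5)                          # C is not evidence: sample it
        s = '+'                                       # S is evidence: fix it ...
        weight *= 0.1 if c == '+' else 0.5            # ... and multiply in P(+s | C)
        r = sample_bool(0.8 if c == '+' else 0.2)     # R is not evidence: sample it
        w = '+'                                       # W is evidence: fix it ...
        weight *= {('+', '+'): 0.99, ('+', '-'): 0.90,
                   ('-', '+'): 0.90, ('-', '-'): 0.01}[(s, r)]   # ... times P(+w | S, R)
        return (c, s, r, w), weight

    # Weighted estimate of P(C | +s, +w): sum weights by C value, then normalize.
    totals = {'+': 0.0, '-': 0.0}
    for _ in range(10000):
        (c, _, _, _), weight = weighted_sample()
        totals[c] += weight
    z = totals['+'] + totals['-']
    print({value: t / z for value, t in totals.items()})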
Gibbs Sampling
Gibbs Sampling
§ Procedure: keep track of a full instantiation x1, x2, …, xn. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep evidence fixed. Keep repeating this for a long time.
§ Property: in the limit of repeating this infinitely many times the resulting sample is coming from the correct distribution
§ Rationale: both upstream and downstream variables condition on evidence.
§ In contrast: likelihood weighting only conditions on upstream evidence, and hence weights obtained in likelihood weighting can sometimes be very small. The sum of weights over all samples is indicative of how many “effective” samples were obtained, so we want high weight.

Gibbs Sampling Example: P(S | +r)
§ Step 1: Fix evidence
  § R = +r
§ Step 2: Initialize other variables
  § Randomly
§ Step 3: Repeat
  § Choose a non-evidence variable X
  § Resample X from P(X | all other variables), e.g. sample from P(S | +c, +r, -w)

[figure: sequence of network states (C, S, R = +r, W) as non-evidence variables are repeatedly resampled]

Efficient Resampling of One Variable
§ Sample from P(S | +c, +r, -w)
    P(S | +c, +r, -w) = P(S, +c, +r, -w) / P(+c, +r, -w)
                      = P(+c) P(S | +c) P(+r | +c) P(-w | S, +r) / Σ_s P(+c) P(s | +c) P(+r | +c) P(-w | s, +r)
                      = P(S | +c) P(-w | S, +r) / Σ_s P(s | +c) P(-w | s, +r)
§ Many things cancel out – only CPTs with S remain!
§ More generally: only CPTs that have the resampled variable need to be considered, and joined together
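A minimal Python sketch of Gibbs sampling for P(S | +r) on this network. As the derivation above suggests, each resampling step only consults the CPTs that mention the resampled variable (the CPT values are the ones from the tables above; the helper names are ours):

    import random

    P_C = {'+': 0.5, '-': 0.5}
    P_S_given_C = {'+': {'+': 0.1, '-': 0.9}, '-': {'+': 0.5, '-': 0.5}}   # P_S_given_C[c][s]
    P_R_given_C = {'+': {'+': 0.8, '-': 0.2}, '-': {'+': 0.2, '-': 0.8}}   # P_R_given_C[c][r]
    P_W_given_SR = {('+', '+'): {'+': 0.99, '-': 0.01}, ('+', '-'): {'+': 0.90, '-': 0.10},
                    ('-', '+'): {'+': 0.90, '-': 0.10}, ('-', '-'): {'+': 0.01, '-': 0.99}}

    def sample_from(weights):
        """Normalize a {value: unnormalized probability} dict and draw a value."""
        z = sum(weights.values())
        u, cumulative = random.random(), 0.0
        for value, weight in weights.items():
            cumulative += weight / z
            if u < cumulative:
                return value
        return value                                       # guard against round-off

    def gibbs_estimate_S_given_plus_r(iters=100000):
        r = '+'                                            # Step 1: fix evidence R = +r
        c, s, w = [random.choice('+-') for _ in range(3)]  # Step 2: initialize others randomly
        counts = {'+': 0, '-': 0}
        for _ in range(iters):                             # Step 3: repeat
            var = random.choice('CSW')                     # choose a non-evidence variable
            if var == 'C':    # only CPTs mentioning C: P(C), P(S | C), P(R | C)
                c = sample_from({v: P_C[v] * P_S_given_C[v][s] * P_R_given_C[v][r] for v in '+-'})
            elif var == 'S':  # only CPTs mentioning S: P(S | C), P(W | S, R)
                s = sample_from({v: P_S_given_C[c][v] * P_W_given_SR[(v, r)][w] for v in '+-'})
            else:             # only CPT mentioning W: P(W | S, R)
                w = sample_from(P_W_given_SR[(s, r)])
            counts[s] += 1                                 # record the current value of S
        total = counts['+'] + counts['-']
        return {value: n / total for value, n in counts.items()}

    print(gibbs_estimate_S_given_plus_r())                 # long-run estimate of P(S | +r)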
Bayes’ Net Sampling Summary
§ Prior Sampling: P (no evidence)
§ Rejection Sampling: P(Q | e)
§ Likelihood Weighting: P(Q | e)
§ Gibbs Sampling: P(Q | e)
Further Reading on Gibbs Sampling*
§ Gibbs sampling produces a sample from the query distribution P(Q | e) in the limit of re-sampling infinitely often
§ Gibbs sampling is a special case of more general methods called Markov chain Monte Carlo (MCMC) methods
§ Metropolis-Hastings is one of the more famous MCMC methods (in fact, Gibbs sampling is a special case of Metropolis-Hastings)
§ You may read about Monte Carlo methods – they’re just sampling
Markov Chain Monte Carlo*
§ Idea: instead of sampling from scratch, create samples that are each like the last one.
§ Procedure: resample one variable at a time, conditioned on all the rest, but keep evidence fixed. E.g., for P(b | c):  -b, +a, +c  →  -b, -a, +c  →  +b, +a, +c
§ Properties: Now samples are not independent (in fact they’re nearly identical), but sample averages are still consistent estimators!
§ What’s the point: both upstream and downstream variables condition on evidence.