CS 188: Artificial Intelligence
Search
Dan Klein, Pieter Abbeel
University of California, Berkeley

Today
- Agents that Plan Ahead
- Search Problems
- Uninformed Search Methods
  - Depth-First Search
  - Breadth-First Search
  - Uniform-Cost Search
Agents that Plan

Reflex Agents
- Reflex agents:
  - Choose action based on current percept (and maybe memory)
  - May have memory or a model of the world's current state
  - Do not consider the future consequences of their actions
  - Consider how the world IS
- Can a reflex agent be rational?
[demo: reflex optimal / loop]
Planning Agents
- Planning agents:
  - Ask "what if"
  - Decisions based on (hypothesized) consequences of actions
  - Must have a model of how the world evolves in response to actions
  - Must formulate a goal (test)
  - Consider how the world WOULD BE
- Optimal vs. complete planning
- Planning vs. replanning
[demo: plan fast / slow]
Search Problems

Search Problems Are Models
- A search problem consists of:
  - A state space
  - A successor function (with actions, costs), e.g. "N", 1.0 / "E", 1.0
  - A start state and a goal test
- A solution is a sequence of actions (a plan) which transforms the start state to a goal state
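The three components above can be sketched as a minimal Python interface. This is an illustrative sketch, not the course's actual project code; the class and method names (`SearchProblem`, `GridPathing`, `successors`) are assumptions chosen for this example.

```python
class SearchProblem:
    """A search problem: start state, goal test, and successor function."""

    def start_state(self):
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def successors(self, state):
        """Yield (action, next_state, cost) triples."""
        raise NotImplementedError


class GridPathing(SearchProblem):
    """Toy pathing problem on a small grid: move N/S/E/W with unit cost."""

    def __init__(self, start, goal, width, height):
        self.start, self.goal = start, goal
        self.width, self.height = width, height

    def start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def successors(self, state):
        x, y = state
        moves = {"N": (x, y + 1), "S": (x, y - 1),
                 "E": (x + 1, y), "W": (x - 1, y)}
        for action, (nx, ny) in moves.items():
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield action, (nx, ny), 1.0
```

From the corner (0, 0) of a 2x2 grid, only "N" and "E" are legal, matching the "N", 1.0 / "E", 1.0 successors pictured on the slide.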
Example: Traveling in Romania
- State space: Cities
- Successor function: Roads: go to adjacent city with cost = distance
- Start state: Arad
- Goal test: Is state == Bucharest?
- Solution?

What's in a State Space?
- The world state includes every last detail of the environment
- A search state keeps only the details needed for planning (abstraction)
- Problem: Pathing
  - States: (x, y) location
  - Actions: NSEW
  - Successor: update location only
  - Goal test: is (x, y) = END
- Problem: Eat-All-Dots
  - States: {(x, y), dot booleans}
  - Actions: NSEW
  - Successor: update location and possibly a dot boolean
  - Goal test: dots all false

State Space Sizes?
- World state:
  - Agent positions: 120
  - Food count: 30
  - Ghost positions: 12
  - Agent facing: NSEW
- How many world states? 120 x 2^30 x 12^2 x 4
- States for pathing? 120
- States for eat-all-dots? 120 x 2^30

Quiz: Safe Passage
- Problem: eat all dots while keeping the ghosts perma-scared
- What does the state space have to specify? (agent position, dot booleans, power pellet booleans, remaining scared time)
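The counts above follow from multiplying the independent components of the state. A quick sanity check of the arithmetic (variable names are illustrative, not from the lecture):

```python
# Components of the world state from the slide above.
agent_positions = 120   # possible (x, y) locations
food_pellets = 30       # each pellet present or eaten: 2^30 combinations
ghost_positions = 12    # positions per ghost; there are 2 ghosts
facings = 4             # N, S, E, W

# Full world state: every component varies independently.
world_states = agent_positions * 2**food_pellets * ghost_positions**2 * facings

# Pathing only needs the agent's location.
pathing_states = agent_positions

# Eat-all-dots needs location plus one boolean per pellet.
eat_all_dots_states = agent_positions * 2**food_pellets
```

The point of the comparison: choosing the right abstraction shrinks the space from roughly 10^14 world states to 120 states for pathing.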
State Graphs and Search Trees
State Space Graphs
- State space graph: A mathematical representation of a search problem
  - Nodes are (abstracted) world configurations
  - Arcs represent successors (action results)
  - The goal test is a set of goal nodes (maybe only one)
- In a search graph, each state occurs only once!
- We can rarely build this full graph in memory (it's too big), but it's a useful idea
State Space Graphs
[Figure: tiny search graph for a tiny search problem, with states S, a, b, c, d, e, f, G, h, p, q, r]

Search Trees
- A search tree:
  - The root is the start state ("this is now")
  - The branches below are possible futures
[Figure: search tree for the same tiny problem]
State Graphs vs. Search Trees
- State Graph:
  - Nodes are (abstracted) world configurations
  - Arcs represent successors (action results)
  - The goal test is a set of goal nodes (maybe only one)
- Search Tree:
  - A "what if" tree of plans and their outcomes
  - The start state is the root node
  - Children correspond to successors
  - Nodes show states, but correspond to PLANS that achieve those states
  - For most problems, we can never actually build the whole tree
- Each NODE in the search tree is an entire PATH in the problem graph.
- We construct both on demand, and we construct as little as possible.
[Figure: the tiny state graph (states S, a, b, c, d, e, f, G, h, p, q, r) side by side with its search tree, which unrolls paths such as S; S-d; S-e; S-p; S-d-b; S-d-c; S-d-e; ...]

Quiz: State Graphs vs. Search Trees
- Consider this 4-state graph: How big is its search tree (from S)?
[Figure: 4-state graph with states S, a, b, G]
- Important: Lots of repeated structure in the search tree!
Search Example: Romania
Tree Search

Searching with a Search Tree
- Search:
  - Expand out potential plans (tree nodes)
  - Maintain a fringe of partial plans under consideration
  - Try to expand as few tree nodes as possible

General Tree Search
- Important ideas: Fringe, Expansion, Exploration strategy
- Main question: which fringe nodes to explore?

Example: Tree Search
[Figure: tree search expanding the tiny graph with states S, a, b, c, d, e, f, G, h, p, q, r]
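The loop above can be sketched in Python. This is a hedged sketch of generic tree search, not the course's implementation; the tiny example problem and all names are illustrative. Note that each fringe entry is a (state, plan) pair, so a node really is a whole path.

```python
class TinyProblem:
    """Minimal example problem: states 'S' -> 'A' -> 'G' (illustrative)."""
    edges = {"S": [("go-A", "A", 1.0)],
             "A": [("go-G", "G", 1.0)],
             "G": []}

    def start_state(self):
        return "S"

    def is_goal(self, state):
        return state == "G"

    def successors(self, state):
        """Return (action, next_state, cost) triples."""
        return self.edges[state]


def tree_search(problem, fringe):
    """Generic tree search: the fringe decides which partial plan to expand next.

    Here `fringe` is a plain list used as a stack; swapping in a different
    fringe discipline changes the exploration strategy (see later slides).
    """
    fringe.append((problem.start_state(), []))
    while fringe:
        state, plan = fringe.pop()  # exploration strategy lives here
        if problem.is_goal(state):
            return plan
        for action, next_state, cost in problem.successors(state):
            fringe.append((next_state, plan + [action]))
    return None  # no solution
```

On the tiny chain problem this returns the two-action plan from S to G.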
Depth-First Search
Depth-First Search
- Strategy: expand a deepest node first
- Implementation: Fringe is a LIFO stack
[Figure: DFS on the tiny graph, diving down the leftmost unexplored branch of the search tree before backtracking]
Search Algorithm Properties
- Complete: Guaranteed to find a solution if one exists?
- Optimal: Guaranteed to find the least-cost path?
- Time complexity? Space complexity?
- Cartoon of search tree:
  - b is the branching factor
  - m is the maximum depth
  - solutions at various depths
- Number of nodes in entire tree? 1 + b + b^2 + ... + b^m = O(b^m)
[Figure: search tree cartoon with m tiers: 1 node, b nodes, b^2 nodes, ..., b^m nodes]

Depth-First Search (DFS) Properties
- What nodes does DFS expand?
  - Some left prefix of the tree. Could process the whole tree!
  - If m is finite, takes time O(b^m)
- How much space does the fringe take?
  - Only has siblings on path to root, so O(bm)
- Is it complete?
  - m could be infinite, so only if we prevent cycles (more later)
- Is it optimal?
  - No, it finds the "leftmost" solution, regardless of depth or cost
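The non-optimality is easy to see in code. Below is a sketch of DFS tree search on a hand-built graph (the graph encoding and names are illustrative); like plain tree search, it does no cycle checking, so it is only safe on acyclic graphs.

```python
def depth_first_search(edges, start, goal):
    """DFS tree search: the fringe is a LIFO stack, so the deepest node pops first.

    `edges` maps a state to its (action, next_state) pairs in left-to-right order.
    No cycle checking: on a cyclic graph this can run forever.
    """
    stack = [(start, [])]
    while stack:
        state, plan = stack.pop()
        if state == goal:
            return plan
        # Push in reverse so the leftmost successor is expanded first.
        for action, nxt in reversed(edges.get(state, [])):
            stack.append((nxt, plan + [action]))
    return None


# The "right" branch reaches G in one step, but DFS commits to the left branch.
edges = {
    "S": [("left", "A"), ("right", "G")],
    "A": [("down", "B")],
    "B": [("down", "G")],
}
```

Here DFS returns the three-step "leftmost" plan even though a one-step solution exists, which is exactly the non-optimality claimed above.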
Breadth-First Search
Breadth-First Search
- Strategy: expand a shallowest node first
- Implementation: Fringe is a FIFO queue
[Figure: BFS on the tiny graph, processing the search tree tier by tier]
Breadth-First Search (BFS) Properties
- What nodes does BFS expand?
  - Processes all nodes above the shallowest solution
  - Let the depth of the shallowest solution be s
  - Search takes time O(b^s)
- How much space does the fringe take?
  - Has roughly the last tier, so O(b^s)
- Is it complete?
  - s must be finite if a solution exists, so yes!
[Figure: search tree cartoon with s tiers above the shallowest solution]
- Is it optimal? Only if costs are all 1 (more on costs later)
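Swapping the stack for a FIFO queue turns the same loop into BFS. A sketch on the same illustrative graph used for DFS (names are assumptions, not lecture code):

```python
from collections import deque


def breadth_first_search(edges, start, goal):
    """BFS tree search: the fringe is a FIFO queue, so the shallowest node pops first."""
    fringe = deque([(start, [])])
    while fringe:
        state, plan = fringe.popleft()
        if state == goal:
            return plan
        for action, nxt in edges.get(state, []):
            fringe.append((nxt, plan + [action]))
    return None


# Same graph as in the DFS example: BFS finds the one-step solution DFS missed.
edges = {
    "S": [("left", "A"), ("right", "G")],
    "A": [("down", "B")],
    "B": [("down", "G")],
}
```

BFS returns the shallowest solution, which is also least-cost here because every action costs 1.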
Quiz: DFS vs BFS
- When will BFS outperform DFS?
- When will DFS outperform BFS?
[demo: dfs/bfs]

Iterative Deepening
- Idea: get DFS's space advantage with BFS's time / shallow-solution advantages
  - Run a DFS with depth limit 1. If no solution...
  - Run a DFS with depth limit 2. If no solution...
  - Run a DFS with depth limit 3. ...
- Isn't that wastefully redundant?
  - Generally most work happens in the lowest level searched, so not so bad!
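The iterative deepening idea above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits. All names and the example graph are illustrative; the `max_depth` cutoff is an assumption added so the sketch terminates on graphs with no solution.

```python
def depth_limited_dfs(edges, start, goal, limit):
    """DFS that refuses to expand nodes deeper than `limit` actions."""
    stack = [(start, [])]
    while stack:
        state, plan = stack.pop()
        if state == goal:
            return plan
        if len(plan) < limit:
            for action, nxt in reversed(edges.get(state, [])):
                stack.append((nxt, plan + [action]))
    return None


def iterative_deepening(edges, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 1, 2, 3, ... until a solution appears.

    The fringe is still a stack (DFS space), but the shallowest solution is
    found first (BFS-like behavior), at the cost of re-searching upper tiers.
    """
    for limit in range(1, max_depth + 1):
        plan = depth_limited_dfs(edges, start, goal, limit)
        if plan is not None:
            return plan
    return None


edges = {
    "S": [("left", "A"), ("right", "G")],
    "A": [("down", "B")],
    "B": [("down", "G")],
}
```

On the example graph it returns the one-step plan, like BFS, while never storing more than a stack's worth of fringe.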
Cost-Sensitive Search
[Figure: state graph from START to GOAL with arc costs ranging from 1 to 15]
- BFS finds the shortest path in terms of number of actions. It does not find the least-cost path.
- We will now cover a similar algorithm which does find the least-cost path.

Uniform Cost Search
Uniform Cost Search
- Strategy: expand a cheapest node first
- Implementation: Fringe is a priority queue (priority: cumulative cost)
[Figure: UCS on a weighted graph; nodes come off the fringe in order of cumulative cost, sweeping out cost contours around S]

Uniform Cost Search (UCS) Properties
- What nodes does UCS expand?
  - Processes all nodes with cost less than the cheapest solution!
  - If that solution costs C* and arcs cost at least ε, then the "effective depth" is roughly C*/ε
  - Takes time O(b^(C*/ε)) (exponential in effective depth)
- How much space does the fringe take?
  - Has roughly the last tier, so O(b^(C*/ε))
- Is it complete?
  - Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
- Is it optimal?
  - Yes! (Proof next lecture via A*)
[Figure: search tree cartoon with C*/ε "tiers" of cost contours]
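A sketch of UCS tree search using Python's `heapq` as the priority queue (the graph and names are illustrative; the counter is an assumption added so the heap never has to compare plans when costs tie):

```python
import heapq
from itertools import count


def uniform_cost_search(edges, start, goal):
    """UCS tree search: the fringe is a priority queue keyed by cumulative cost.

    `edges` maps a state to (action, next_state, step_cost) triples.
    Returns (total_cost, plan) for the cheapest path found.
    """
    tiebreak = count()  # breaks cost ties without comparing plans
    fringe = [(0.0, next(tiebreak), start, [])]
    while fringe:
        cost, _, state, plan = heapq.heappop(fringe)
        if state == goal:  # goal test on pop, so the cost is final
            return cost, plan
        for action, nxt, step_cost in edges.get(state, []):
            heapq.heappush(fringe,
                           (cost + step_cost, next(tiebreak), nxt, plan + [action]))
    return None


# Fewest actions is not cheapest: the direct arc costs 10, the detour costs 3.
edges = {
    "S": [("direct", "G", 10.0), ("to-A", "A", 1.0)],
    "A": [("to-G", "G", 2.0)],
}
```

UCS returns the two-step detour of cost 3 rather than the one-step arc of cost 10, which is exactly where it differs from BFS. Testing the goal on pop (not on push) matters: the cheaper path to G must have a chance to overtake the expensive one still sitting in the fringe.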
Uniform Cost Issues
- Remember: UCS explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, ...)
- The good: UCS is complete and optimal!
- The bad:
  - Explores options in every "direction"
  - No information about goal location
- We'll fix that soon! [demo: search demo empty]
[Figure: cost contours expanding in all directions around Start, ignoring Goal]

The One Queue
- All these search algorithms are the same except for fringe strategies
- Conceptually, all fringes are priority queues (i.e. collections of nodes with attached priorities)
- Practically, for DFS and BFS, you can avoid the log(n) overhead from an actual priority queue by using stacks and queues
- Can even code one implementation that takes a variable queuing object
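The "one implementation, variable queuing object" idea can be sketched as a single search loop parameterized by a fringe class. This is an illustrative sketch, not the course's project code; the fringe interface (`push`/`pop`/`empty`) and the example graph are assumptions.

```python
import heapq
from collections import deque


def generic_search(edges, start, goal, make_fringe):
    """One search loop; the fringe object alone decides DFS vs BFS vs UCS."""
    fringe = make_fringe()
    fringe.push((start, [], 0.0), 0.0)
    while not fringe.empty():
        state, plan, cost = fringe.pop()
        if state == goal:
            return plan
        for action, nxt, step in edges.get(state, []):
            fringe.push((nxt, plan + [action], cost + step), cost + step)
    return None


class Stack:  # LIFO fringe -> DFS; the priority argument is ignored
    def __init__(self): self.data = []
    def push(self, item, priority): self.data.append(item)
    def pop(self): return self.data.pop()
    def empty(self): return not self.data


class Queue:  # FIFO fringe -> BFS; the priority argument is ignored
    def __init__(self): self.data = deque()
    def push(self, item, priority): self.data.append(item)
    def pop(self): return self.data.popleft()
    def empty(self): return not self.data


class PriorityQueue:  # cost-ordered fringe -> UCS
    def __init__(self): self.heap, self.n = [], 0
    def push(self, item, priority):
        heapq.heappush(self.heap, (priority, self.n, item)); self.n += 1
    def pop(self): return heapq.heappop(self.heap)[2]
    def empty(self): return not self.heap


edges = {
    "S": [("direct", "G", 10.0), ("to-A", "A", 1.0)],
    "A": [("to-G", "G", 2.0)],
}
```

As the slide says, the stack and queue exist only to dodge the priority queue's log(n) overhead; conceptually all three are priority queues with different priorities.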
Search and Models
- Search operates over models of the world
  - The agent doesn't actually try all the plans out in the real world!
  - Planning is all "in simulation"
  - Your search is only as good as your models...
Search Gone Wrong?