Note: Everything from this tutorial should be prior knowledge.
Note: Starting from blind.pdf, slide 24.
Basic search algorithms
- search state space using tree or graph
- root = initial state, expanded by successor function
- nodes may be repeated (multiple paths to same state)
```mermaid
flowchart TD
  A[Initial state] --> B[Successor 1]
  A --> C[Successor 2]
  B --> D[Goal?]
  C --> E[Goal?]
```
Tree search
- expand nodes until goal found or frontier empty
- pseudocode (simplified):
  initialize search tree with initial state
  loop:
    if no nodes to expand → failure
    choose a node to expand (per strategy)
    if node = goal → return solution
    else expand node and add successors
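A minimal Python sketch of this loop, assuming a hypothetical `problem` object with `initial_state`, `is_goal(state)`, and `successors(state)` (these names are illustrative, not fixed by the slides):

```python
from collections import deque

def tree_search(problem):
    """Generic tree-search loop: no explored set, so states can be revisited."""
    frontier = deque([problem.initial_state])        # nodes waiting to be expanded
    while frontier:                                  # empty frontier -> failure
        state = frontier.popleft()                   # FIFO here; the strategy decides this choice
        if problem.is_goal(state):
            return state                             # goal found
        frontier.extend(problem.successors(state))   # expand: add all successors
    return None                                      # failure
```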
Graph search
- maintains explored set to avoid revisiting states
- ensures completeness in finite state spaces
- pseudocode includes:
  - initialize the frontier
  - loop: choose a node, test for the goal, expand it, add it to the explored set
  - add a successor to the frontier only if it is not already in the explored set or the frontier
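The same loop with an explored set added, again under the assumed `problem` interface above; states must be hashable for the membership tests:

```python
from collections import deque

def graph_search(problem):
    """Graph-search loop: never expands, or re-queues, the same state twice."""
    frontier = deque([problem.initial_state])
    in_frontier = {problem.initial_state}     # mirror of the frontier for O(1) membership tests
    explored = set()                          # states already expanded
    while frontier:
        state = frontier.popleft()
        in_frontier.discard(state)
        if problem.is_goal(state):
            return state
        explored.add(state)
        for succ in problem.successors(state):
            if succ not in explored and succ not in in_frontier:  # only add genuinely new states
                frontier.append(succ)
                in_frontier.add(succ)
    return None
```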
Nodes and implementation
- each node stores:
- state
- parent (to reconstruct path)
- action taken
- path cost
- states are representations of world configurations; nodes are bookkeeping data structures in the search tree
- expand function: generate successor nodes from state
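A sketch of this node bookkeeping, assuming successors come back as (action, state, step_cost) triples (an illustrative convention, not fixed by the slides):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the state this node represents
    parent: Optional["Node"] = None   # link used to reconstruct the path
    action: Any = None                # action that led from the parent to this node
    path_cost: float = 0.0            # cost of the path from the root to this node

def expand(problem, node):
    """Generate the successor nodes of a node's state."""
    return [Node(state=s, parent=node, action=a, path_cost=node.path_cost + c)
            for a, s, c in problem.successors(node.state)]

def solution(node):
    """Follow parent links back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```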
Search strategies
problem-solving performance is measured in four ways:
- Completeness: does it always find a solution if one exists?
- Optimality: does it always find the least-cost solution?
- Time complexity: number of nodes generated/expanded
- Space complexity: number of nodes stored in memory during search
Frontier and data structures
- frontier stored as a queue, with operations:
  - empty?(q)
  - pop(q)
  - insert(q, element)
- queue types:
  - fifo queue (breadth-first)
  - lifo queue (depth-first)
  - priority queue (uniform-cost, informed search)
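The three queue types map directly onto standard Python containers; a small sketch:

```python
from collections import deque
import heapq

# fifo queue (breadth-first): insert at the back, pop from the front
fifo = deque()
fifo.append("a"); fifo.append("b")
assert fifo.popleft() == "a"

# lifo queue / stack (depth-first): insert and pop at the same end
lifo = []
lifo.append("a"); lifo.append("b")
assert lifo.pop() == "b"

# priority queue (uniform-cost, informed search): pop the lowest-cost entry
pq = []
heapq.heappush(pq, (5, "b"))
heapq.heappush(pq, (2, "a"))
assert heapq.heappop(pq) == (2, "a")
```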
Search strategies summary
| Measure | Definition |
| --- | --- |
| Completeness | guarantees finding a solution if one exists |
| Optimality | finds the least-cost solution |
| Time complexity | number of nodes expanded |
| Space complexity | maximum number of nodes stored |
time/space depend on:
- b = branching factor
- d = depth of least-cost solution
- m = maximum depth of the search tree (may be ∞)
Uninformed search
- also called blind search
- uses only problem definition info
- types:
- breadth-first
- uniform-cost
- depth-first
- depth-limited
- iterative deepening
- bidirectional
Breadth-first search (bfs)
- expand shallowest node
- implementation: frontier = fifo
- properties:
- complete: yes (if finite b)
- time: O(b^(d+1))
- space: O(b^(d+1)) → big drawback
- optimal: yes (if all step costs are equal, e.g. cost = 1)
```mermaid
flowchart TD
  A[Root] --> B1[Level 1 nodes]
  A --> B2[Level 1 nodes]
  B1 --> C1[Level 2 nodes]
  B2 --> C2[Level 2 nodes]
```
- evaluation: memory usage grows exponentially, limiting scale
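A BFS sketch under the same assumed `problem` interface, goal-testing successors as they are generated:

```python
from collections import deque

def breadth_first_search(problem):
    """BFS: FIFO frontier, so the shallowest unexpanded node is expanded first."""
    if problem.is_goal(problem.initial_state):
        return problem.initial_state
    frontier = deque([problem.initial_state])
    reached = {problem.initial_state}          # states already generated (frontier + explored)
    while frontier:
        state = frontier.popleft()             # shallowest node
        for succ in problem.successors(state):
            if succ not in reached:
                if problem.is_goal(succ):      # test at generation time
                    return succ
                reached.add(succ)
                frontier.append(succ)
    return None                                # frontier exhausted -> failure
```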
Uniform-cost search (ucs)
- expand node with lowest path cost
- frontier = priority queue by path cost
- same as bfs if all step costs equal
- complete and optimal, provided every step cost is at least some ε > 0
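A UCS sketch with a heapq-based priority queue, assuming successors are (action, state, step_cost) triples as in the node sketch above:

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    """UCS: expand the frontier node with lowest path cost g(n).
    Assumes step costs > 0; returns (goal_state, cost) or None."""
    tie = count()                                      # tie-breaker so states are never compared
    frontier = [(0.0, next(tie), problem.initial_state)]
    best_cost = {problem.initial_state: 0.0}           # cheapest cost found so far per state
    while frontier:
        cost, _, state = heapq.heappop(frontier)       # lowest-g node
        if cost > best_cost.get(state, float("inf")):
            continue                                   # stale queue entry; a cheaper path exists
        if problem.is_goal(state):                     # goal test on expansion keeps it optimal
            return state, cost
        for _action, succ, step_cost in problem.successors(state):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(succ, float("inf")):
                best_cost[succ] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), succ))
    return None
```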
Depth-first search (dfs)
- expand deepest node
- frontier = lifo (stack)
- properties:
- complete: no (may loop forever in infinite spaces)
- time: O(b^m) → bad if m >> d
- space: O(bm) → good
- optimal: no
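A recursive DFS sketch (tree-search version, matching the properties above: O(bm) space but no completeness guarantee), under the same assumed `problem` interface:

```python
def depth_first_search(problem, state=None):
    """DFS: always follows the deepest unexpanded node first.
    No explored set, so it can descend or loop forever in infinite or cyclic spaces."""
    state = problem.initial_state if state is None else state
    if problem.is_goal(state):
        return state
    for succ in problem.successors(state):
        result = depth_first_search(problem, succ)
        if result is not None:
            return result
    return None
```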
Depth-limited search
- dfs with limit l
- avoids infinite descent
- may fail (cutoff) if the solution lies deeper than l; see the combined sketch under iterative deepening below
Iterative deepening search (ids)
- runs dfs with increasing depth limits
- properties:
- complete: yes
- optimal: yes (if all step costs are equal)
- time: O(b^d)
- space: O(bd)
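A combined sketch of depth-limited search and the iterative-deepening wrapper around it, under the same assumed interface; the `CUTOFF` sentinel distinguishes "limit reached" from "no solution at all":

```python
CUTOFF = object()   # sentinel: the depth limit was hit somewhere below this node

def depth_limited_search(problem, limit, state=None, depth=0):
    """DFS that refuses to go deeper than `limit`."""
    state = problem.initial_state if state is None else state
    if problem.is_goal(state):
        return state
    if depth == limit:
        return CUTOFF
    cutoff_occurred = False
    for succ in problem.successors(state):
        result = depth_limited_search(problem, limit, succ, depth + 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_occurred else None   # None = no solution within reach of the limit

def iterative_deepening_search(problem, max_limit=100):
    """Run depth-limited search with limits 0, 1, 2, ... until the cutoff stops mattering."""
    for limit in range(max_limit + 1):
        result = depth_limited_search(problem, limit)
        if result is not CUTOFF:
            return result            # a goal state, or None if no solution exists at all
    return None
```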
comparison of the number of nodes generated by depth-limited search (N_DLS) vs iterative deepening (N_IDS):

| Search | Nodes generated (b=10, d=5) |
| --- | --- |
| N_DLS | 111,111 |
| N_IDS | 123,456 |
Avoiding repeated states
- failure to detect repeats → exponential blowup
- graph search avoids this
Summary
- problem formulation = abstraction
- uninformed strategies differ in time/space trade-offs
- ids uses linear space, good compromise
- repeated states must be avoided for efficiency
Problem-solving agent pseudocode
function simple-problem-solving-agent(percept) returns an action
  persistent: state, sequence (initially empty)
  state ← update-state(state, percept)
  if sequence is empty:
    goal ← formulate-goal(state)
    problem ← formulate-problem(state, goal)
    sequence ← search(problem)
  action ← first(sequence)
  sequence ← rest(sequence)
  return action
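A Python sketch of this agent loop; `update_state`, `formulate_goal`, `formulate_problem`, and `search` are hypothetical callables supplied by the caller, not defined in the slides:

```python
class SimpleProblemSolvingAgent:
    """Keeps a plan (action sequence) and only replans when the plan runs out."""

    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.update_state = update_state            # these four are assumed helper functions,
        self.formulate_goal = formulate_goal        # provided by the caller
        self.formulate_problem = formulate_problem
        self.search = search
        self.state = None
        self.sequence = []                          # remaining planned actions

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.sequence:                       # plan exhausted: formulate goal/problem, search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.sequence = self.search(problem) or []
        if not self.sequence:
            return None                             # search failed: no action to return
        return self.sequence.pop(0)                 # action <- first(sequence); sequence <- rest
```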