Wednesday, September 10, 2008

In search of...

I'm taking an advanced class in heuristic search algorithms this semester, and I thought I'd show you all a little about some basic state space search algorithms. It's often possible to solve relatively hard problems using a simple breadth first or depth first search (especially if you're not worried about how long it will take.) We'll look at the algorithms in terms of a classic example: vacuum world, or how to plan the most efficient path for an automated vacuum.

First, some assumptions: we know the locations of the vacuum and all dirt from the start; actions can be a movement (up, down, left, or right) or sucking up dirt; and any action costs 1 step. We will represent a state as the vacuum location, dirt locations, and the sequence of actions that must be taken to get there. Our algorithms will be given a starting state, an expand function which takes a state and returns its children (those states reachable from it by taking one action), and a goal test function to see if we've gotten all the dirt. Without getting too muddled in implementation details, we'll look briefly at what the expand and goal test functions might look like.
#expansion function: returns the states reachable from s by one action
def expand(s):
    moves = [move_up, move_down, move_right, move_left, suck]
    children = []
    for move in moves:
        #only apply moves that are legal in this state
        if applicable(move, s):
            children += [move(s)]
    return children

#goal test function: we're done when no dirt remains
def goal_test(s):
    return len(s[DIRT]) == 0
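
The helpers those two functions lean on were glossed over; here's one way they might be fleshed out. The state layout, grid size, and names below are arbitrary choices, just for illustration.
#one possible state layout: (vacuum location, dirt locations, path so far)
VACUUM, DIRT, PATH = 0, 1, 2

#a tiny 2x2 grid, purely for illustration
WIDTH, HEIGHT = 2, 2

def make_move(name, dx, dy):
    #builds a movement action that shifts the vacuum and records the step
    def move(s):
        (x, y) = s[VACUUM]
        return ((x + dx, y + dy), s[DIRT], s[PATH] + (name,))
    return move

move_up    = make_move('up',    0, -1)
move_down  = make_move('down',  0,  1)
move_left  = make_move('left', -1,  0)
move_right = make_move('right', 1,  0)

def suck(s):
    #remove any dirt sitting under the vacuum
    remaining = tuple(d for d in s[DIRT] if d != s[VACUUM])
    return (s[VACUUM], remaining, s[PATH] + ('suck',))

def applicable(move, s):
    #sucking is always legal; a movement is legal if it stays on the grid
    if move == suck:
        return True
    (x, y) = move(s)[VACUUM]
    return 0 <= x < WIDTH and 0 <= y < HEIGHT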

Before doing any work on the algorithms, we should consider what a solution to the problem means. What we want is a legal sequence of actions that, when executed, leaves the world free of dirt. An algorithm that is guaranteed to find such a solution whenever one exists is called complete. We also want the solution itself to be efficient: ideally it should take the fewest steps possible. We call algorithms that are guaranteed to return the cheapest possible solution admissible.
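
To pin that down, a plan (a sequence of action names, like the one each state carries in its PATH slot) can be checked by simply replaying it from the start state; the ACTIONS table and is_solution below are, again, only a sketch built on the illustrative helpers above.
#replay a plan from the start state and check that every step was legal
#and that no dirt is left at the end
ACTIONS = {'up': move_up, 'down': move_down, 'left': move_left,
           'right': move_right, 'suck': suck}

def is_solution(plan, start):
    s = start
    for name in plan:
        move = ACTIONS[name]
        if not applicable(move, s):
            return False
        s = move(s)
    return goal_test(s)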

The first algorithm we'll consider is depth first search, which follows a plan (branch) to its end before exploring any others. For each iteration of the main loop, we pull a node from the start of the open list and add its children to the start of the list.
#a simple depth first search implementation
def dfs(start, expand, goal_test):
    open_list = []
    curr = start
    while not goal_test(curr):
        #children go on the front of the open list, so we always dive deeper
        open_list = expand(curr) + open_list
        if open_list:
            curr = open_list.pop(0)
        else:
            print "no solution possible!"
            return
    print "goal found:", curr
    return

Let's consider what results a depth first search offers. Does it always give us a solution? Its propensity for always diving deeper means that in some cases it could keep diving forever. This is certainly true in vacuum world where we can move in circles or simply never choose to suck up dirt. We can avoid these sorts of infinite loops by maintaining a list of states we've seen (or closed list) along with the cheapest cost for getting there. If we see the same state again at an equal or greater cost, we ignore it (don't expand it or add it to the open list.) When DFS finds solutions, though, are they as cheap as possible? Again, prospects look bleak. It's quite possible that we'll take a very silly route, because our simple implementation would, for instance, go up as far as possible before trying anything else.
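
Using the state layout sketched earlier, here's what that bookkeeping might look like bolted onto the depth first search above. The closed list is a dictionary from a (vacuum, dirt) configuration to the cheapest cost at which we've reached it, with cost taken as the length of the path so far (since every action costs 1); the dfs_closed name is just mine.
#depth first search with a closed list to prune repeated configurations
def dfs_closed(start, expand, goal_test):
    open_list = [start]
    closed = {}  #maps (vacuum location, dirt locations) to cheapest cost seen
    while open_list:
        curr = open_list.pop(0)
        if goal_test(curr):
            print "goal found:", curr
            return
        key, cost = (curr[VACUUM], curr[DIRT]), len(curr[PATH])
        if key in closed and closed[key] <= cost:
            continue  #already reached this configuration at least as cheaply
        closed[key] = cost
        open_list = expand(curr) + open_list
    print "no solution possible!"
    return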

Next, we'll look at breadth first search, which explores all states at a given depth (with the same number of actions taken so far) before looking further down. For each loop iteration, we pull a node from the start of the open list, but insert its children at the end of the list.
#a simple breadth first search implementation
def bfs(start, expand, goal_test):
    open_list = []
    curr = start
    while not goal_test(curr):
        #children go on the back of the open list, so shallower nodes go first
        open_list += expand(curr)
        if open_list:
            curr = open_list.pop(0)
        else:
            print "no solution possible!"
            return
    print "goal found:", curr
    return

While the implementations seem almost identical, the properties of the two algorithms are very different. Breadth first search is guaranteed to give a solution, if one exists, because it doesn't get caught up focusing on one particular (possibly bad) branch. It also finds the cheapest solution for worlds in which all actions have the same cost, because it never looks at a deeper node before it has expanded every node above it. We don't need a closed list, but we could maintain one to avoid duplicating work (like how left, right produces the same state as right, left.)
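
To see the difference on a concrete instance, here's a run on the little 2x2 world sketched earlier (the starting layout is arbitrary).
#vacuum at (0, 0), dirt at (1, 0) and (1, 1), no actions taken yet
start = ((0, 0), ((1, 0), (1, 1)), ())

bfs(start, expand, goal_test)         #finds the 4-step plan right, suck, down, suck
dfs_closed(start, expand, goal_test)  #also finds a plan, possibly a longer one
#note: the plain dfs above would bounce between two cells forever on this world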

I'd be remiss if I didn't point out the memory and processing complexity of these two algorithms. In the worst case, both can take on the order of b^d time (b is the branching factor, decided here by the number of actions; d is the deepest level that will ever be explored, which is unbounded for DFS without a closed list and the depth of the best solution for BFS.) When it comes to memory, we finally find a redeeming quality in DFS: it takes b*d space (it only keeps the current branch, plus the unexpanded siblings along it), while BFS takes b^d space (it ends up storing an entire depth level at once.)
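
To get a rough feel for that gap, suppose there are 5 actions (the four moves plus suck) and the best solution is 10 steps deep; the numbers are only meant to show the scale.
b, d = 5, 10  #5 actions, best solution 10 steps deep
print "b^d (time for both, space for BFS):", b ** d  #9765625
print "b*d (space for DFS):", b * d                  #50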

Next, we'll discuss algorithms that use cost so far and/or a heuristic function to determine which nodes to expand.
