*A solved labyrinth*
One way to measure complexity is to look at the number of nodes that are expanded during search. We could postulate that in a perfect labyrinth, all nodes must be visited. This turns out to have a simple, but unsatisfying, solution:
*A labyrinth that forces you to visit all nodes, but is still "easy"*
This is not what we would like to mean by a good labyrinth. Intuitively, solving a hard labyrinth is not a straight march to the goal; rather, one is forced to retrace one's steps a lot. There should be some penalty for backtracking. A measure that captures this is the number of edges passed: if we encounter a dead end (a leaf node), we have to count its edges again when backtracking out of it.
Suppose that our search agent makes all decisions by coin flip. Then, the expected remaining steps, E, for a node, v, obeys the equation:
\[E(v) = 1 + mean(\{E(v_i)\}_i)\]
where \[\{v_i\}\] are the neighbours of v. That is, unless v is the end node, in which case \[E(v_{end}) = 0\].
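Before solving this exactly, the expectation can be sanity-checked by simply simulating the coin-flipping agent. A minimal sketch (the adjacency-dict representation and function name are my own, not from the original):

```python
import random

def random_walk_steps(adj, start, end):
    """Walk to a uniformly random neighbour until the end node is
    reached; return the number of edges passed along the way.
    The labyrinth is assumed given as {node: [neighbours]}."""
    v, steps = start, 0
    while v != end:
        v = random.choice(adj[v])  # the coin flip
        steps += 1
    return steps

# Averaging many runs approximates E(start):
# sum(random_walk_steps(adj, start, end) for _ in range(10_000)) / 10_000
```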
Solving this is just a matter of solving a linear equation system. It can be formulated nicely in a constraint paradigm:
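Something along these lines captures it. This is a minimal sketch, assuming the same adjacency-dict representation and picking PuLP as one possible LP frontend (both are my assumptions, not necessarily what the original used):

```python
# Sketch of the constraint formulation: one free variable E(v) per node,
# one equality constraint per node, no objective to optimize.
import pulp

def expected_steps(adj, end):
    prob = pulp.LpProblem("random_walk_labyrinth", pulp.LpMinimize)
    # Free (unbounded) variable per node: expected remaining steps E(v).
    E = {v: pulp.LpVariable(f"E_{v}") for v in adj}
    for v, nbrs in adj.items():
        if v == end:
            prob += E[v] == 0  # E(v_end) = 0
        else:
            # E(v) = 1 + mean of the neighbours' expected remaining steps
            prob += E[v] == 1 + pulp.lpSum(E[u] for u in nbrs) / len(nbrs)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {v: E[v].value() for v in adj}
```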
Note that the use of a Linear Programming solver is overkill, but it makes for an elegant formulation: the entire problem is solved in the presolver. So what is the expected number of edges passed from start to end in a random labyrinth? The result for a 10x10 instance:
The problem here is obvious: the random walk is just too stupid to take seriously. Each node is visited on average 20.58 times before we reach the end. Searching through the entire labyrinth with depth-first search expands 2 times the number of nodes (proof later), so that should be an upper bound.
*10x10 instance complexity computed for random walk. Nodes are visited on average over 20 times before the end is reached - bad!*
Let's assume then that our search agent does a depth-first search, remembering which subtrees have already been visited completely.
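In code, such an agent could look like the following sketch (again my own minimal version, under the same assumed adjacency-dict representation), counting every edge passed, including the ones re-passed while backtracking:

```python
import random

def dfs_edge_count(adj, start, end):
    """Depth-first agent with random tie-breaking. Returns the number
    of edges passed, counting backtracking out of dead ends again."""
    visited = {start}

    def explore(v):
        if v == end:
            return 0, True
        steps = 0
        nbrs = [u for u in adj[v] if u not in visited]
        random.shuffle(nbrs)  # decisions by coin flip
        for u in nbrs:
            if u in visited:  # may get visited deeper in the recursion
                continue
            visited.add(u)
            sub, found = explore(u)
            steps += 1 + sub  # edge down, plus the work done below
            if found:
                return steps, True
            steps += 1        # edge back up when the subtree failed
        return steps, False

    return explore(start)[0]
```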
Given a labyrinth, then, how do we compute the expected number of edges passed by the depth-first agent to solve it? This turns out to depend on whether or not the labyrinth is a tree. The labyrinths generated in [1] are trees, which follows from the fact that all the links we added were between unconnected components. It can be visualized by plotting the paths to all nodes:
*Just the paths*
At a branching node v, the agent chooses between the subtree containing the goal, T_original, and a dead-end subtree, T_new. By coin flip it picks the wrong one half of the time, first traversing all of T_new before continuing:

\[E(v) = \frac{1}{2}E(T_{original}) + \frac{1}{2}\left(E(T_{new}) + E(T_{original})\right) = N_{original} - 1 + \frac{1}{2}E(T_{new})\]

where the last equality uses, by induction, \[E(T_{original}) = N_{original} - 1\].
Here E(T_new) is the cost of traversing T_new completely and backing out again, so its expected contribution, \[\frac{1}{2}E(T_{new})\], is the same as the number of nodes in the subtree: during a full traversal, all nodes are expanded and popped exactly once each. So the sum adds up to \[E(v) = N_{original} - 1 + N_{new} = N - 1\], the number of nodes in the full tree minus 1.
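As a quick numerical check of this claim, averaging the hypothetical dfs_edge_count sketch from above over many runs on a small tree should land on N - 1:

```python
# A 5-node tree labyrinth (N = 5): edges 0-1, 1-2, 1-3, 3-4.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
runs = [dfs_edge_count(adj, 0, 4) for _ in range(100_000)]
print(sum(runs) / len(runs))  # ≈ 4.0, i.e. N - 1
```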
So, it turns out that all tree-shaped labyrinths of the same size are equally hard for a depth-first agent. This is bad news for those of us who would like to make hard mazes. But we can always try tricking another agent.
This also holds as an upper bound for labyrinths that are not trees: whenever the node we are expanding from has a neighbour that has already been visited, we can simply pretend that edge doesn't exist, which makes the search behave like a tree search.
References