- random-labyrinths
- labyrinth-complexity
- breadth-first-search
- monte-carlo-simulations
- adversarial-search-for-bfs
- improving-random-search
In the previous post we saw how, with the help of a slightly more advanced random search, we could find optimally hard labyrinths (hardness as defined in labyrinth-complexity). That is, optimally hard given that the labyrinth is solved by breadth-first search. What if we use a more advanced search algorithm?
A* (A-star)
A* is a heuristic search algorithm. That means it uses some rule of thumb (heuristic) to decide which node to expand next. The node to be expanded next is the node with the lowest heuristic value. For A*, this heuristic f(n) is split into two parts:
f(n) = g(n) + h(n)
Here, g(n) is the cost of going from the start node to n; in our case we can simply use the depth in the search tree. The goal heuristic h(n) estimates the cost of going from n to the end node; for this we use the Manhattan distance between n and the end node. A* has some optimal efficiency properties, which require two assumptions to hold. First, the heuristic must satisfy a triangle inequality. Manhattan distance is just the L1 norm, which satisfies the triangle inequality. Second, the heuristic must never overestimate the cost from n to the end. This also holds in our case, since the Manhattan distance between A and B is never longer than any possible route in the labyrinth between A and B. With this extra information given to the search, we cannot get the same hardness anymore:
![]()
Optimally hard labyrinth with A* as the search algorithm.
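To make the search concrete, here is a minimal sketch of A* with the Manhattan heuristic. It assumes the labyrinth is given as a set of open edges, each a frozenset of two adjacent cells; that representation, and the function names, are choices made for this sketch rather than anything fixed by the earlier posts. It returns the number of expanded nodes, which is a reasonable stand-in for the hardness measure here.

```python
import heapq

def manhattan(a, b):
    # L1 distance between grid cells; no route can be shorter than this.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(passages, start, end):
    # `passages` is assumed to be a set of frozenset({cell, cell}) pairs,
    # one per open edge between adjacent cells; walls are missing edges.
    # Frontier ordered by f(n) = g(n) + h(n), ties broken by g.
    frontier = [(manhattan(start, end), 0, start)]
    g_cost = {start: 0}
    expanded = 0
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if g > g_cost[node]:
            continue  # stale queue entry, a shorter path was found later
        expanded += 1
        if node == end:
            return expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if frozenset((node, nxt)) in passages and g + 1 < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt, end), g + 1, nxt))
    return expanded  # end is unreachable
```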
Wall-counting A*
Now, we can give the search algorithm even more information to make it an even better adversary. Here, we let the algorithm store an internal representation of all missing edges (walls) that it has seen so far in the labyrinth. Call this graph the obstruction graph. Its search heuristic now becomes the length of the shortest path in this internal representation. This allows it to effectively dodge dead ends, and makes life difficult for the disrupting search.
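As a sketch of how the new heuristic could be computed, under the same grid representation as above: assume that on each expansion the searcher records every wall incident to the expanded node in `seen_walls`, and that h(n) is then a breadth-first shortest-path length in the grid minus those walls. Unseen edges are assumed open, so the heuristic never overestimates and stays admissible.

```python
from collections import deque

def obstruction_heuristic(n, end, size, seen_walls):
    # h(n) = shortest path from n to end in the obstruction graph: the
    # full size x size grid minus every wall seen so far (walls stored
    # as frozensets of adjacent cells, as in the sketch above).
    frontier = deque([(n, 0)])
    visited = {n}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == end:
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in visited
                    and frozenset(((x, y), nxt)) not in seen_walls):
                visited.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")  # end is cut off in the known graph
```

Note that each heuristic call is itself a full BFS over the grid, so every expansion costs O(N) on its own; that is one plausible reason the longer runs below are slow.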
![]()
The disruptor needs to lead the searcher on a long detour to force it to expand all nodes. This reduces the hardness, as fewer nodes are dead ends.
We see where the above is going:
![]()
A*-buster.
![]()
For N=900 nodes, search is very slow. I need to optimize the wall-counting A* to be able to run longer searches. But we are starting to see some really difficult labyrinths.
![]()
I think 172 is optimal here. End should be at least (9+9) + (4+4) = 26 deep.
PS: an obvious fix to Wall-counting A*
Plotting the expanding search tree (see below) made me realize: wall-counting A* expands a lot of nodes that cannot reach the end without going through the rest of the existing search tree. Such a node should not be expanded, by a minimality argument. So, we should also add the edges in the search tree to the internal obstruction graph. This gives us:
![]()
Improved wall-counting A*. Does not expand nodes that are isolated by the search tree itself.
![]()
Expanding search tree for improved wall-counting A*.
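In code, the fix is a small bookkeeping change. A sketch, reusing the representations from the earlier snippets; `observe` and `incident_walls` are names introduced here for illustration:

```python
def observe(node, parent, incident_walls, seen_walls):
    # Called when `node` is expanded via the tree edge (parent, node).
    # `incident_walls` holds the real walls discovered at `node`.
    seen_walls.update(incident_walls)
    # The fix: record the search-tree edge itself as an obstruction.
    # A node whose every path to the end runs through the existing
    # search tree then gets an infinite heuristic value, so the
    # searcher can skip it instead of expanding it.
    if parent is not None:
        seen_walls.add(frozenset((parent, node)))
```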