- random-labyrinths
- labyrinth-complexity
- breadth-first-search
- monte-carlo-simulations
- adversarial-search-for-bfs
In this post, we will see some tricks for guiding the random search. We will still use the same pattern:
- Start with a random labyrinth
- For a number of iterations, repeat:
- Mutate the labyrinth in some way
- Measure the result with the hardness score
- Decide if we want to accept the change or discard it
Here, we will change only steps 3 (mutation) and 5 (selection).
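The loop above can be sketched in a few lines of Python. This is a minimal sketch, not the post's actual code: `mutate`, `hardness_score` and `accept` are placeholders for the problem-specific pieces discussed below.

```python
import random

def random_search(initial_labyrinth, iterations, mutate, hardness_score, accept):
    """Generic random-search loop matching the steps above.

    mutate, hardness_score and accept are placeholders for the
    problem-specific functions described in this post."""
    best = initial_labyrinth
    best_score = hardness_score(best)
    current = best
    for _ in range(iterations):
        candidate = mutate(current)        # step: mutate the labyrinth
        score = hardness_score(candidate)  # step: measure with the hardness score
        if accept(score, best_score):      # step: accept or discard the change
            current = candidate
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```

With a greedy `accept` (only keep changes at least as good as the best so far), this is plain hill climbing; the rest of the post is about choosing better `mutate` and `accept` functions.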
The mutation step in the original random search was very simple: delete a small number of edges and then randomly reconnect the labyrinth. We start from an observation: links deleted far away from the shortest path will probably not change the score much.
*Figure: Changing links far away from the shortest path is unlikely to change the score.*
Links deleted near the current shortest path are more likely to take us through interesting parts of the search space. This gives us the following mutation rule:
- Select a random node on the shortest path. Call this the center node.
- Iterate through the edges in a small box (2-3 tiles away) around the center node. For each edge:
- If the edge is in the labyrinth, remove it with some probability.
- If the edge is not in the labyrinth, add it with some probability.
- Reconnect the labyrinth randomly if it has become disconnected.
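The mutation rule above might look like this. This is an illustrative sketch, not the post's code: the edge representation (`((x1, y1), (x2, y2))` pairs), the names and the default probabilities are assumptions.

```python
import random

def box_mutate(edges, all_edges, shortest_path, box=2, p=0.3):
    # `edges`: set of edges currently in the labyrinth,
    # `all_edges`: every possible grid edge, as ((x1, y1), (x2, y2)) pairs,
    # `shortest_path`: list of (x, y) nodes on the current shortest path.
    cx, cy = random.choice(shortest_path)  # pick a center node on the path
    new_edges = set(edges)
    for edge in all_edges:
        (x1, y1), (x2, y2) = edge
        # only touch edges inside a small box around the center node
        if max(abs(x1 - cx), abs(y1 - cy), abs(x2 - cx), abs(y2 - cy)) <= box:
            if random.random() < p:
                if edge in new_edges:
                    new_edges.discard(edge)  # remove an existing edge
                else:
                    new_edges.add(edge)      # add a missing edge
    return new_edges  # the caller must reconnect the labyrinth if needed
```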
The selection step in the random search is also very simple: if the score of the mutated labyrinth is as good as or better than the current known best solution, we accept the change. This has two drawbacks:
- The search easily gets stuck in a local maximum, with no mutation path leading to the global maximum.
- Often the problem is that the shortest path is too long, since nodes off the shortest path are potentially worth double points. But making the shortest path shorter often leaves some nodes unexpanded, so points are lost. We therefore want to encourage the search to shorten the shortest path, even if it costs some points in the short term.
With this in mind, we extend the selection step to accept a change in three cases:
- If the score is as good as or better than the current best known score, we accept.
- If the score is no worse than the best known score minus a threshold (2 was used), we accept temporarily with some probability (0.1 was used).
- If the shortest path is shorter than in the best solution and the score is no worse than the best known score minus a larger threshold (10 was used), we also accept temporarily with some probability (0.1 was used).
"Temporarily" is key here. It means that if we haven't had a new accepted solution within a number of iterations (100 was used), then we restart from the best current solution. This prevents the search from getting stuck if it accepts a very bad solution and cannot find an acceptable solution in the new neighbourhood. With this, we find optimal solutions to N=900 nodes, for the test cases:
*Figure: Test 1. Optimal 1740 found in 700 iterations.*

*Figure: Test 2. Optimal 1738 found in 53500 iterations.*

*Figure: Test 3. Optimal 1739 found in 28900 iterations.*
We can also prove optimality for test cases 2 and 3 (test 1 was handled in adversarial-search-for-bfs). Call the side length of the labyrinth n and consider the top-right node. For n=30, it has depth at least 58 in BFS's search tree, so the shortest "shortest path" that still expands the top-right node must be at least 58 edges long.
In test 2, the start is at (0, 0) and the end at (10, 10). The number of up-down edges in a path between them must be even, and so must the number of left-right edges. So, the number of edges must be divisible by 4. The smallest such number larger than 58 is 60. The shortest possible shortest path that still leads to a full expansion is therefore 60 edges long, that is to say it contains 61 nodes. The maximal number of unexpanded nodes is then 30^2 - 61 = 839, and the maximal score is 61 - 1 + 839*2 = 1738.
For test 3, the shortest path can have any odd length. The smallest odd number larger than 58 is 59, which gives an optimal score of 60 - 1 + 840*2 = 1739.
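These bounds can be checked mechanically. `max_score` is a helper introduced here, using the scoring from the calculations above: one point per edge on the shortest path and two points per unexpanded node.

```python
def max_score(n, path_edges):
    # One point per edge on the shortest path, two per unexpanded
    # node, as in the calculations above.
    path_nodes = path_edges + 1
    unexpanded = n * n - path_nodes
    return path_edges + 2 * unexpanded

# Test 1: shortest admissible path length is 58 edges
assert max_score(30, 58) == 1740
# Test 2: shortest path length divisible by 4 and larger than 58 -> 60 edges
assert max_score(30, 60) == 1738
# Test 3: shortest odd path length larger than 58 -> 59 edges
assert max_score(30, 59) == 1739
```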