Pathfinding best practices and surprising uses

Weather is crap, so I've studied a lot of resources related to pathfinding, which is the art of finding a path between two or several points. I've already spent a lot of time studying the Hybrid A* pathfinding algorithm used by self-driving cars, but one can never study enough pathfinding! This is a summary of what I want to remember and what you will most likely also want to know:
  • A good pathfinding engine can be used for more purposes than just moving units around the world. In the real-time strategy game Empire Earth, pathfinding was used for tasks such as terrain analysis, choke-point detection, AI military route planning, weather creation/movement, AI wall building, and animal migration routes, on top of ordinary unit movement.
  • Pathfinding can also be used as a flood-filling algorithm. The closed list stores every node that has been searched, so when the search finishes it tells you exactly which nodes the flood reached. Tile costs mean nothing here, so you don't need to look for the cheapest node to process next; just pop the top node of the open list (see the flood-fill sketch after this list).
  • Pathfinding can be time-consuming, so it may freeze the game. A solution to this problem is to split the paths up over time. This works well for both static and dynamic maps, and was used successfully in Empire Earth, where pathfinding had to support thousands of units. First generate a quick path to get the unit moving, because the unit should move the moment the player tells it to move. The quick path uses a pathfinder that stops after maybe 10 iterations and picks the searched node closest to the goal node as the path's end node (see the quick-path sketch after this list). Then generate a full path, which runs from the end of the quick path to the goal node we wanted to reach from the beginning. But this full path may look ugly and wrong because it begins at the end of the quick path, so you also need a splice path, which uses the same idea as the quick path: the unit's current position is the starting position and a point on the full path is the end position. Exactly where that point should be is something to experiment with, but around eight nodes into the full path works.
  • In Empire Earth, if an AI-controlled unit failed to find a path more than 10 times in 30 seconds (perhaps because the unit was walled in), they simply killed that unit.
  • Finding the cheapest node in the open list is often time-consuming, so the list needs to be optimized. One option is a sorted structure, which makes removal of the cheapest node fast but insertion slow. A second option is an unsorted list, where insertion is fast but removal is slow. A third alternative is to keep two lists: a small sorted "cheap" list holding roughly the 15 cheapest nodes, and a large unsorted list with everything else. Each time you want to add a node to the open list, check whether it's cheaper than anything in the cheap list; if it is, insert it there and keep that list sorted (the cheap list is allowed to grow beyond 15 nodes). Otherwise, add it to the unsorted list. When the cheap list runs empty, refill it with the 15 cheapest nodes from the unsorted list (see the cheap-list sketch after this list).
  • Iterative deepening is the art of restricting the A* algorithm by limiting the number of iterations allowed or limiting the maximum path length. If A* fails to find a path, the restriction can be increased. This is useful when the map is dynamic. Let's say two agents want to go through a door. The first agent will find a path, but the second will fail. Without iterative deepening, the second search would have taken a long time before failing. With iterative deepening, the second search fails fast, and it may succeed in the next update loop because the first agent may have passed through the door by then (see the iterative-deepening sketch after this list).
  • In some environments it might be a good idea to return the failed path if pathfinding fails. If an agent follows a failed path instead of just standing still, it can look as if the agent is exploring the environment.
  • If everything else fails, let the player add waypoints, and the pathfinding algorithm can find paths between these waypoints. 
  • You don't always have to use A*. What if a straight line to the goal is possible? A simple line-of-sight test is much cheaper than a full search (see the line-of-sight sketch after this list).
  • If the player can't see the AI-controlled enemy units, you can ignore pathfinding and just teleport the units, so pathfinding is not always needed.
  • It's possible to pre-compute every single path in a search space and store it in a look-up table. For a 100x100 grid this needs about 200 MB of space (10,000 nodes means 10,000 x 10,000 start/goal pairs, so the quoted figure works out to roughly 2 bytes per pair). An alternative is to calculate a few paths, store them, and check whether the path you create while the game is running can reuse those paths. A unit will most likely pass through a door, so calculate paths from the door.
  • It's important to optimize the search space: the fewer nodes you have to search through, the better. Maybe you don't have to divide the area into small squares; what if you divide the area into waypoints? Or you can combine both techniques: first find the shortest path through the waypoints, and then find the shortest path between the waypoints by dividing the area into squares. In the game Company of Heroes, the high-level search space representation was a hex grid, and the low-level representation was a square grid. Another way to shrink the search space is to use quadtrees, which means merging cells that don't have an obstacle in them into one large cell (see the quadtree sketch after this list); "Pathfinding in an Entity Cluttered 3D Virtual Environment" shows how it works.
  • A* demands that you use a heuristic which is admissible, meaning that the h-cost is never larger than the true cost. This will result in an optimal path. If you break this rule, you get a faster algorithm but not necessarily an optimal path. But why would you need an optimal path, will anyone really notice? You can add this to your game by simply multiplying the h-cost by a constant such as 1.5.
  • The best heuristic to use on a grid is the octile heuristic: max(deltax, deltay) + 0.41 * min(deltax, deltay), assuming that diagonal movement costs 1.41. The problem with the Euclidean distance is that it underestimates the cost, while the Manhattan distance overestimates it (see the heuristic sketch after this list).
  • When should you use a grid, waypoints, or a navigation mesh to represent the search space?
    • Grids are most useful when the terrain is 2D, when implementation time is limited, when the world is dynamic, and when sufficient memory is available. Don't use them when you have a large open world or when you need accuracy (like when a house sits at an angle that doesn't fit the grid).
    • Waypoints are useful when implementation time is limited, when fast path planning is needed, and when you don't need high accuracy.
    • Navigation meshes should be used when you have time to implement them. But there's no best technique to create an optimal navigation mesh. 
  • No one solution is useful all the time. Navigating open terrain requires a mix of different techniques. Use local collision avoidance techniques for nearby areas and pre-processed data for longer distances. Using multiple techniques that complement each other yields better results than any one technique alone.
  • It's kinda boring if all AI-controlled units follow the exact same path. To solve this, each edge between two nodes can have a width, and you need to make sure the widened edge doesn't collide with an obstacle. Each unit then follows the edge, but offset from it by its own distance (see the path-offset sketch after this list).
  • While A* finds the shortest path from a point to a single point, Dijkstra's algorithm finds the shortest path from a point to all points.
  • Pathfinding algorithms normally terminate when the goal node is the cheapest node in the open list, not when the goal node is first reached. But the path that exists when the goal node is first reached is in many cases also the best path, so you can stop the algorithm as soon as it sees the goal node instead of waiting until the goal node is the node with the lowest cost (see the termination sketch after this list).
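
Here's the flood-fill sketch mentioned above. It's a minimal version under my own assumptions: walkable is a 2D list of booleans (True = walkable) and nodes are (x, y) tuples. The closed set that comes back is the filled area.

```python
# Minimal flood fill built on a pathfinder's open/closed list structure.
# Assumes `walkable` is a 2D list of booleans (True = walkable).
def flood_fill(walkable, start):
    height, width = len(walkable), len(walkable[0])
    open_list = [start]          # plain stack: we always take the top node
    closed = set()               # every node the flood has reached

    while open_list:
        x, y = open_list.pop()   # no "cheapest node" search, tile costs mean nothing
        if (x, y) in closed:
            continue
        closed.add((x, y))

        # Expand the four neighbours, exactly like a pathfinder would.
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and walkable[ny][nx] and (nx, ny) not in closed:
                open_list.append((nx, ny))

    return closed  # the closed list *is* the flooded area
```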
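
The quick-path sketch: an A* that stops after a fixed number of iterations and, if the goal wasn't reached, ends the path at the searched node closest to the goal. The grid format, the octile heuristic and the 4-way movement are my own choices, not anything from Empire Earth. The full path and the splice path are just further calls to the same function with different start/end points and a bigger iteration budget.

```python
import heapq

def limited_a_star(walkable, start, goal, max_iterations=10):
    """A* that gives up after max_iterations and returns a partial path
    ending at the searched node closest to the goal (the "quick path")."""
    def heuristic(a, b):
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) + 0.41 * min(dx, dy)   # octile distance

    height, width = len(walkable), len(walkable[0])
    open_heap = [(heuristic(start, goal), 0.0, start)]
    g_cost = {start: 0.0}
    came_from = {start: None}
    closest = start                                # fallback end node

    for _ in range(max_iterations):
        if not open_heap:
            break
        _, g, current = heapq.heappop(open_heap)
        if current == goal:
            closest = goal
            break
        if heuristic(current, goal) < heuristic(closest, goal):
            closest = current

        x, y = current
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < width and 0 <= ny < height and walkable[ny][nx]):
                continue
            new_g = g + 1.0
            if new_g < g_cost.get((nx, ny), float("inf")):
                g_cost[(nx, ny)] = new_g
                came_from[(nx, ny)] = current
                f = new_g + heuristic((nx, ny), goal)
                heapq.heappush(open_heap, (f, new_g, (nx, ny)))

    # Walk back from the end node (the goal if reached, otherwise the closest node).
    path, node = [], closest
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```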
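
The cheap-list sketch, as I understand the technique: a small sorted list holding roughly the 15 cheapest nodes next to a big unsorted list. The refill size of 15 comes from the text above; the class name and the tuple-based nodes are my own scaffolding.

```python
import bisect

class CheapListOpenList:
    """Open list split into a small sorted "cheap" list and a big unsorted one.
    Nodes are assumed to be orderable, e.g. (x, y) tuples."""
    REFILL_SIZE = 15

    def __init__(self):
        self.cheap = []       # sorted by f-cost, kept small
        self.expensive = []   # unsorted, everything else

    def push(self, f_cost, node):
        entry = (f_cost, node)
        # Cheaper than something already in the cheap list? Then it belongs there
        # (the cheap list is allowed to grow past REFILL_SIZE for a while).
        if self.cheap and f_cost < self.cheap[-1][0]:
            bisect.insort(self.cheap, entry)
        else:
            self.expensive.append(entry)

    def pop_cheapest(self):
        # Caller should check that the open list isn't empty first.
        if not self.cheap:
            # Refill: move the REFILL_SIZE cheapest entries over from the unsorted list.
            self.expensive.sort()
            self.cheap = self.expensive[:self.REFILL_SIZE]
            self.expensive = self.expensive[self.REFILL_SIZE:]
        return self.cheap.pop(0)   # the cheapest entry sits at the front

    def __bool__(self):
        return bool(self.cheap) or bool(self.expensive)
```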
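
The iterative-deepening sketch: one cheap, restricted attempt per update loop, with the restriction relaxed after every failure. It reuses the hypothetical limited_a_star from the quick-path sketch above, and the budget numbers are made up.

```python
class PathRequest:
    """A path request that is retried across update loops with a growing
    iteration budget, so a blocked search fails fast instead of stalling a frame."""
    def __init__(self, goal, initial_budget=50, max_budget=2000):
        self.goal = goal
        self.budget = initial_budget
        self.max_budget = max_budget

    def update(self, walkable, current_position):
        # One restricted attempt per game update, using the limited_a_star sketch above.
        path = limited_a_star(walkable, current_position, self.goal,
                              max_iterations=self.budget)
        if path and path[-1] == self.goal:
            return path                                  # success
        # Failed fast: relax the restriction and try again in the next update
        # loop, by which time the door may be free.
        self.budget = min(self.budget * 2, self.max_budget)
        return None
```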
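
The line-of-sight sketch: a Bresenham walk over the grid cells between the unit and the goal, so you can skip A* entirely when the way is clear. Note that Bresenham doesn't touch every cell a wide unit would clip; a supercover line would be the safer choice in that case.

```python
def straight_line_is_clear(walkable, start, goal):
    """True if every grid cell on the Bresenham line start -> goal is walkable.
    If so, just move in a straight line instead of running A*."""
    (x0, y0), (x1, y1) = start, goal
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if not walkable[y0][x0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```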
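
The quadtree sketch: recursively merge obstacle-free square regions into single cells. The grid format and the (x, y, size) cell representation are my own choices, and a real implementation would precompute region sums instead of rescanning the cells at every level. The merged cells, plus adjacency between them, become the new and much smaller search space.

```python
def build_quadtree_cells(walkable, x, y, size):
    """Return a list of (x, y, size) cells covering the square region.
    A region with no obstacles becomes one big cell, a fully blocked region
    disappears, and mixed regions are split into four quadrants recursively."""
    region = [walkable[j][i]
              for j in range(y, y + size)
              for i in range(x, x + size)]
    if all(region):
        return [(x, y, size)]          # one big walkable cell
    if not any(region) or size == 1:
        return []                      # fully blocked, drop it
    half = size // 2
    return (build_quadtree_cells(walkable, x,        y,        half) +
            build_quadtree_cells(walkable, x + half, y,        half) +
            build_quadtree_cells(walkable, x,        y + half, half) +
            build_quadtree_cells(walkable, x + half, y + half, half))

# Usage: cells = build_quadtree_cells(grid, 0, 0, 64)   # side length must be a power of two
```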
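
The heuristic sketch, covering both the octile formula and the h-cost weighting trick. The 1.5 weight is just the example value from the text above.

```python
def octile_heuristic(a, b):
    """Octile distance on a grid where straight moves cost 1 and diagonals 1.41."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + 0.41 * min(dx, dy)

def weighted_heuristic(a, b, weight=1.5):
    """Inadmissible (overestimating) heuristic: a faster search, but the path is
    no longer guaranteed to be optimal. Plug in wherever A* calls its heuristic."""
    return weight * octile_heuristic(a, b)
```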
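
The path-offset sketch: give each unit its own lateral offset from the shared edge so the group doesn't march in single file. It assumes the edge is wide enough that the offsets stay clear of obstacles; verifying that is the pathfinder's job.

```python
import math

def offset_waypoint(edge_start, edge_end, lateral_offset):
    """Shift edge_end sideways, perpendicular to the edge, by lateral_offset."""
    dx, dy = edge_end[0] - edge_start[0], edge_end[1] - edge_start[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return edge_end
    nx, ny = -dy / length, dx / length   # unit normal of the edge
    return (edge_end[0] + nx * lateral_offset, edge_end[1] + ny * lateral_offset)

# Usage: spread the offsets across the edge width (numbers are made up).
# offsets = [(i - (group_size - 1) / 2) * spacing for i in range(group_size)]
```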
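
Finally, the termination sketch: a small A* over a plain dict graph with a flag that switches between the classic rule (stop when the goal is popped as the cheapest open node) and the early exit (stop as soon as the goal is first generated). The graph format and function names are mine.

```python
import heapq

def a_star(graph, start, goal, h, stop_on_first_sight=False):
    """A* on a dict graph {node: [(neighbor, edge_cost), ...]}. With h = lambda n: 0
    this degenerates into Dijkstra's algorithm."""
    open_heap = [(h(start), 0.0, start)]
    g = {start: 0.0}
    came_from = {start: None}

    def path_to(node):
        path = []
        while node is not None:
            path.append(node)
            node = came_from[node]
        return path[::-1]

    while open_heap:
        _, cost, current = heapq.heappop(open_heap)
        if current == goal:
            return path_to(goal)            # classic termination: provably optimal
        for neighbor, edge_cost in graph.get(current, []):
            new_g = cost + edge_cost
            if new_g < g.get(neighbor, float("inf")):
                g[neighbor] = new_g
                came_from[neighbor] = current
                if stop_on_first_sight and neighbor == goal:
                    return path_to(goal)    # early exit: faster, not guaranteed optimal
                heapq.heappush(open_heap, (new_g + h(neighbor), new_g, neighbor))
    return None
```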
