July 24, 2017

This self-driving car is now faster with the flow field algorithm

0 comments
So I've made a self-driving car in Unity. The car is using the Hybrid A* pathfinding algorithm along with other algorithms to find its way around a confined area, such as a parking lot. You can test it here or here. The old version was working fine, but one can always improve. The updates in this version are:
  • Added lines showing the waypoints 
  • Fixed a bug where the wrong value was added to the sum of errors in the PID controller, so the car now follows the path much better 
  • Added a cost for switching driving direction, like forward -> reverse, because it would be annoying to sit in a car that is constantly switching driving direction
  • Fixed a bug where the smooth path algorithm optimized the distance to obstacles in 3D space instead of 2D space 
  • Added an improved obstacle-intersection algorithm which is faster and more accurate. It's now using rectangle-rectangle intersection instead of circle-circle intersection
  • Added UI where you can display the grid and the flow field 
  • When optimizing the final path, it's now optimizing the distance to the average of the surrounding obstacles and not just the closest obstacle 
  • Cleaned up the code
  • Added flow fields as heuristic and for obstacle detection, which makes the pathfinding much faster 
  • Made it so that Hybrid A* can re-expand a cell if the new expansion has a lower cost than before
  • Improved path following so the car slows down gradually before it reaches the end of the path or a turning point such as "reverse -> forward". Previously the car would often crash into walls because it was driving too fast

This is the flow field the car uses to more quickly detect whether it's colliding with an obstacle:


The area is divided into cells, and the darker a cell is, the further away it is from an obstacle. So when you want to check if the car is colliding with an obstacle while generating the path, you can look up which cell it's in and how far that cell is from the nearest obstacle. If the car can't possibly be colliding with an obstacle, you don't need to run the slower rectangle-rectangle intersection algorithm at all.
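Here's a minimal sketch of that early-out, in Python rather than my Unity/C# code; the names (`distance_field`, `cell_size`, `rectangles_intersect`, and the car's bounding radius) are made up for illustration:

```python
# Sketch only: assumes a precomputed 2D distance_field where each cell stores
# the distance (in world units) to the closest obstacle, plus a separate exact
# rectangle-rectangle test. All names are hypothetical.

def might_collide(distance_field, cell_size, car_x, car_z, car_radius):
    """Cheap test: can the car possibly touch an obstacle from this position?"""
    cell_x = int(car_x / cell_size)
    cell_z = int(car_z / cell_size)
    distance_to_obstacle = distance_field[cell_x][cell_z]
    # The car fits inside a circle with radius car_radius, so if the nearest
    # obstacle is further away than that (plus a small margin for the cell
    # size if you want to be safe), no collision is possible.
    return distance_to_obstacle <= car_radius

def is_colliding(distance_field, cell_size, car, obstacles):
    if not might_collide(distance_field, cell_size, car.x, car.z, car.radius):
        return False  # Cheap early out, skip the rectangle-rectangle tests
    # Only now run the slower, exact rectangle-rectangle intersection tests
    return any(rectangles_intersect(car.corners(), o.corners()) for o in obstacles)
```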

July 16, 2017

What happened to John Carmack after the book Masters of Doom?

0 comments
John Carmack is famous for creating the games Doom and Quake. In the book Masters of Doom, you will learn how he created those games and the company id Software. That book ended with the game Quake 3, but what has he been up to since 1997? The answer comes in a free 300-page PDF, which consists of interviews with John Carmack, beginning with Quake 3 and ending with Rage, covering the years 1997-2008.

Here are some key lessons learned from the book:
  • "The assumption that hiring more people gives a better product is often incorrect. It might (but not always) give you more product, but not necessarily better product." [He argued that the 11 people id software had at the time is the right number of people to do a project and having 50 people would hurt the project]
  • You should listen to the community, but this will also take time, so you have to prioritize. Carmack argued that John Romero (who was forced to leave the company) was talking to the community when he should have been working.
  • "I will always take an aggressively simple approach to things" was his answer to the question if Quake's level should be more interactive.
  • Carmack was working 70 hours a week for six years.
  • "I had this really bizarre conversation with a couple of lawyers and they were talking about 'How do you pick your target market? Do you use focus groups and poll people and all this?' It's like 'No, we just write games that we think are cool.' They're from such a different world that they fundamentally did not get that."
  • "There's plenty of starving artists that are probably talented and hard working, they're just doing something nobody cares about."
  • "The graphics technology everybody looks at is only a quarter of Quake's code, and it's not even the hard part of it. Look at the things that are really unglamorous but really important, like the file extension architecture in Quake."
  • "I tried just a couple of features, like transparency or shadows and reflections. And they just dropped in - it was so wonderful. And that was what really got me crusading for OpenGL. It was like, "This helps me. This is something that's letting me be more creative and try out more things to produce a better architecture than we would have had otherwise." A good API can help you produce better products." [So using a game engine like Unity is not cheating!]
  • How do you handle third-party developers when you license your engines? Carmack: It's, "Here's a CD. Thank you for the half-million dollars. Have a nice life."
  • "All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working in it? It's just so wrong, but it's what the business world does to things, and certainly the world is controlled by business interests." [Why software patents is a bad idea]
  • You hear it from the fan base a lot. "Do it right. We'll still be here. We'll wait," and it's tempting to just let things slip. But that's really not OK. If you're doing something cutting edge, you're making fundamental decisions about your architecture, and if you let it slide for a year or two, then it's just not the right decision anymore. Even if you pile on all these extras, it's not optimal. It's not targeted at what you're doing. [Why it's important to ship a game. They planned it would take 12 months to make Quake, but it took 18, which is fine] If we shipped on time, we probably weren't ambitious enough. Quake 2 would take 12 months because "a lot of things are functioning better at the company."
  • "Programming is really just the mundane aspect of expressing a solution to a problem. There are talents that are specifically related to actually coding, but the real issue is being able to grasp problems and devise solutions that are detailed enough to actually be coded." 
  • "I feel bad for some companies out there. The founders, who are these incredible engineers, are now directors of their departments doing management rather than engineering. At the same time most of the people they are managing are nowhere near as good as they were at doing the actual work. That's what I hope never happens to me. I want to stay in the trenches working on the things all the time."
  • "In the last two projects, my time has been split. I'd have about 3 months och pure research, playing around with different stuff. And then after that it's about 16 months of work on the project. It would be nice to shift that more towards research, but I would never want to devote a majority of my time to research." 
  • "It's not the one brilliant decision, it's the 500 smart decisions that really make things good. Even at the end of Quake 3, I had a to-do list of a thousand things that could potentially be improved on. So it's a matter of going through and knowing all these things that could be done, and prioritizing what the "sweet spots" are."
  • "I actually learn from almost anyone. Maybe I was smarter than the [college] professor but it didn't mean that there weren't things I could learn from him. So it doesn't take someone that's necessarily a "better" developer or programmer for them to have things that you can learn from. So almost all the programmers I've worked with, I've learned something from."
  • "There have been flight simulators where you just jump in, fly around and shoot things, and those are fun and interesting. Then there are these serious simulators where you have to convince yourself that you're being entertained. I still like playing a simple, fast game, where you jump in and have a good time, and I think there are five times as many people in the game buying world that also feel that way." 
  • "Sometimes I wind up feeling guilty that I'm not doing more. I'm willing to just ignore a whole lot of things. And that's pretty important, because so many things come in that are potential demands on my time, and it's just easy to see how people that are in a similar situation, wind up just getting their entire days spoken for, and not being able to do any work because every day, there's phone calls and emails from people that want to do something, they want to have an interview, or pitch a business proposal." 
  • On market research: "I don't do a really thorough canvassing, play everything that's out there. Usually I watch over someone's shoulder when they're playing the hot new game. But I don't spend that much time actually playing other games. I certainly play more Quake 3 than any other games out there."
  • "Too many programmers agree to random feature requests without thoroughly considering the impacts."
  • "There are a lot of arguments that can be made about game design, and I prefer simplicity and elegance. I'm always the one saying we want the minimum number of everything, because I want it to be simple and fun to play." He would later say that he stepped away from id software because "I'm really no representative of what most of our market is now. I did realize that my very simplified game design ethic is not what the market is demanding. [Quake 3] was the id game I probably spent the most time playing and enjoyed the most, but it was actually one of our less successful titles." 
  • "Usually when I set out making the technical decisions I don't know how it's going to turn out. A lot of it is working out what works, and what ideas come to you. We commonly switch gears during our development process when a really good opportunity comes up." 
  • "Once or twice a year I go on "working retreats", where I lock myself in a hotel room for two weeks with no internet connection for completely focused work." 
  • "We try to pick directions that a good number of people will enjoy, then just do a quality job implementing it. Some people will love it, and some people will hate it. The people that hate it usually scream louder."
  • "Tessellation has been one of those things up there with procedural content generation where it's been five generations that we've been having people tell us it's going to be the next big thing and it never does turn out to be the case. What we want is something that you can carve up the world as continuously as you want without any respect to underlying geometry." 
  • "A lot of people don't seem to really appreciate how the vertex fragment rasterization approach to computer graphics has been unquestionable the most successful multi-processing solution ever."
  • "'Oh just thread your application.' Anyone that says that is basically an idiot, not appreciating the problems." 

Update! John Carmack replied to this article on Twitter: 

July 11, 2017

Why you need to learn the Flow Field algorithm

0 comments
As mentioned in the last post, I've read a lot about pathfinding. One of the pathfinding algorithms I found was the Flow Field. A Flow Field covers the entire grid (every cell that is not an obstacle and that is reachable from the start position), so while A* finds the shortest path from the start position to a goal position, a Flow Field is inspired by flowing water and finds a path from every cell to the start position. It looks like this:


In the image above, the darker a cell is, the further away it is from the start position, which is the red sphere. The blue lines point in the direction of the shortest path. You can also add more start positions, which actually speeds up the algorithm. So if you want to find the shortest path to the nearest of several doors, a Flow Field is a good choice.
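Here's a minimal sketch of how such a flow field can be generated (in Python rather than my Unity/C# project; the grid representation and names are assumptions): a breadth-first search that starts from all start cells at once and stores, for every walkable cell, the distance back to the nearest start cell and the direction to move in.

```python
from collections import deque

def generate_flow_field(grid, start_cells):
    """grid[x][y] is True if the cell is walkable. Returns the distance from
    each cell to the closest start cell and the direction of the shortest
    path back toward it."""
    width, height = len(grid), len(grid[0])
    distance = [[float("inf")] * height for _ in range(width)]
    flow_direction = [[None] * height for _ in range(width)]

    queue = deque()
    for (x, y) in start_cells:      # Several start positions are fine,
        distance[x][y] = 0          # e.g. one per door
        queue.append((x, y))

    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and grid[nx][ny]:
                if distance[x][y] + 1 < distance[nx][ny]:
                    distance[nx][ny] = distance[x][y] + 1
                    flow_direction[nx][ny] = (-dx, -dy)  # Points back toward the start
                    queue.append((nx, ny))

    return distance, flow_direction
```

With uniform movement costs a plain breadth-first search like this is enough; if cells have different costs you would swap the queue for a priority queue (Dijkstra's algorithm), but the structure stays the same.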

A Flow Field is useful if you have a lot of units trying to find their way through your map, and it was used in the game Planetary Annihilation. What they did was divide the area into sectors, use A* to find the shortest path between the sectors, and then use a Flow Field to find the shortest paths through each sector. It looks like this:


But that's not all. A few years ago I made a self-driving car in Unity after reading a report on the Hybrid A* algorithm. One of the heuristics used by the Hybrid A* algorithm was something they called Dynamic Programming. I realized that Dynamic Programming is actually a Flow Field, so I updated my self-driving car with that algorithm. The 160x160 Flow Field looks like this and is generated in around 0.05 seconds:


You can also use the Flow Field algorithm to help you with obstacle detection. In the Hybrid A* algorithm, it's obviously important to quickly find the distance to the closest obstacle, and you can generate all of these distances once, at the start. That Flow Field looks like this:


July 4, 2017

Pathfinding best practices and surprising uses

0 comments
The weather is crap, so I've studied a lot of resources related to pathfinding, which is the art of finding a path between two or more points. I've already spent a lot of time studying the Hybrid A* pathfinding algorithm used by self-driving cars, but one can't study pathfinding enough! This is a summary of what I want to remember and what you will most likely also want to know:
  • A good pathfinding engine can be used for more purposes than just moving units around the world. In the real-time strategy game Empire Earth, the pathfinding engine was used for tasks such as terrain analysis, choke-point detection, AI military route planning, weather creation/movement, AI wall building, and animal migration routes. 
  • Pathfinding can also be used as a flood-fill algorithm. The closed list stores all nodes searched, so when the search is finished it contains exactly the nodes filled by the flood. In this case you can simply pop the top node from the open list instead of searching for the cheapest node to process next, because tile costs mean nothing here.
  • Pathfinding can be time-consuming, so it may freeze the game. A solution to this problem is to split the paths up over time. This works well for both static and dynamic maps, and was used successfully in the game Empire Earth, which used paths that supported thousands of units. The solution is to first generate a quick path to get the unit moving because the unit should move the moment the player tells it to move. This quick path uses a pathfinder algorithm that stops after maybe 10 iterations and picks the node closest to the goal-node as the path's end-node. Then you should generate a full path, which is a path from the end of the quick path to the goal node we wanted to reach from the beginning. But this full path may be ugly and look wrong because it begins at the end of the quick path, so you need to add a splice path, which uses the same idea as the quick path by using the unit's current position as starting position and a point on the full path as end position. This point should be experimented with but can be like eight nodes in on the full path.
  • In Empire Earth, if an AI-controlled unit failed to find a path more than 10 times in 30 seconds (perhaps because the unit was walled in), they simply killed that unit.
  • Finding the cheapest node in the open list is often time consuming, so you need to optimize the list. One method is to use a sorted heap structure that allows fast removal but slow insertion. A second method is to keep an unsorted list where insertion is fast but removal is slow. A third alternative is to keep a "cheap" list with roughly the 15 cheapest nodes (sorted) next to a large unsorted list with everything else. Each time you want to add a node to the open list, you first check if it's cheaper than some node in the cheap list; if it is, you insert it there in sorted order (so the cheap list can temporarily hold more than 15 nodes), otherwise you add it to the unsorted list. If the cheap list runs empty, you refill it with the 15 cheapest nodes from the expensive list (a sketch of this "cheap list" is shown after this list).
  • Iterative deepening is the art of restricting the A* algorithm by limiting the number of iterations allowed or limiting the maximum path length. If A* fails to find a path, the restriction can be increased (see the sketch after this list). This is useful when the map is dynamic. Let's say two agents want to go through a door. The first agent will find a path, but the second will fail. Without iterative deepening, the second search would have taken a long time before failing. With iterative deepening, the second search fails fast, and it may then succeed in the next game loop because the first agent may have passed through the door.
  • In some environments it might be a good idea to return the failed path if pathfinding fails. If an agent follows a failed path instead of just standing still, it might appear as if the agent is exploring the environment. 
  • If everything else fails, let the player add waypoints, and the pathfinding algorithm can find paths between these waypoints. 
  • You don't always have to use A*. What if a straight line to the goal is possible? 
  • If the player can't see the AI controlled enemy units, then you can ignore pathfinding and just teleport the units, so pathfinding is not always needed. 
  • It's possible to pre-compute every single path in a search space and store it in a look-up table. For a 100x100 grid, this will need 200 MB of space. An alternative is to calculate a few paths, store them, and see if the path you create while the game is running can use those paths. A unit will most likely pass through a door, so calculate paths from the door.
  • It's important to optimize the search space: the fewer nodes you have to search through, the better. Maybe you don't have to divide the area into small squares; what if you divide the area into waypoints? Or you can combine both techniques: first find the shortest path through the waypoints, and then find the shortest path between the waypoints by dividing the area into squares. In the game Company of Heroes, the high-level search space representation was a hex grid, and the low-level representation was a square grid. One way to improve the search space is to use quadtrees, which merge cells that don't contain an obstacle into one large cell. It works like this: Pathfinding in an Entity Cluttered 3D Virtual Environment
  • A* demands that you use a heuristic which is admissible, meaning that the h-cost is never larger than the true cost. This results in an optimal path. If you don't follow this rule, you get a faster algorithm but not an optimal path. But why would you need an optimal path, will anyone really notice? You can add this to your game by simply multiplying the h-cost by a constant such as 1.5.
  • The best heuristic to use on a grid is the octile heuristic. It looks like this: max(deltax, deltay) + 0.41 * min(deltax, deltay), assuming that diagonal movement costs 1.41. The problem with the Euclidean distance is that it underestimates the cost, while the Manhattan distance overestimates the cost (a sketch of both this and the weighted h-cost from the previous point is shown after this list).
  • When should you use a grid, waypoints, or a navigation mesh to represent the search space?
    • Grids are most useful when the terrain is 2D, when implementation time is limited, when the world is dynamic, and when sufficient memory is available. Don't use them when you have a large open world or when you need accuracy (like when you have a house with an angle that doesn't fit the grid)
    • Waypoints are useful when implementation time is limited, when fast path planning is needed, and when you don't need high accuracy.
    • Navigation meshes should be used when you have time to implement them. But there's no best technique to create an optimal navigation mesh. 
  • No one solution is useful all the time. Navigating open terrain requires a mix of different techniques. Use local collision avoidance techniques for nearby areas and pre-processed data for longer distances. Using multiple techniques that complement each other yields better results than any one technique alone.
  • It's kinda boring if all AI controlled units follow the exact same path. To solve this, each edge between two nodes can have a width, and you need to make sure this width doesn't collide with an obstacle. Then each unit follows the edge but with a certain distance from the edge.
  • While A* finds the shortest path from one point to a single goal point, Dijkstra's algorithm finds the shortest path from one point to all points.
  • Pathfinding algorithms tend to terminate when the goal node is the node with the smallest cost in the open list, not when they first reach the goal node. But the path that first reaches the goal node is in many cases also the best path, so you can often stop the algorithm as soon as it sees the goal node instead of waiting until the goal node is the cheapest node. 
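Here's a rough sketch of the "cheap list" open list mentioned above, in Python with made-up node and cost types, just to show the bookkeeping; real implementations tune the list size and the insertion strategy.

```python
# Sketch of the "cheap list" open list: a small sorted list with roughly the
# 15 cheapest nodes plus a large unsorted list with the rest. Names are
# illustrative, not from my actual project.
import bisect
from itertools import count

CHEAP_LIST_SIZE = 15

class OpenList:
    def __init__(self):
        self.cheap = []        # (cost, tie, node), kept sorted
        self.expensive = []    # (cost, tie, node), unsorted, fast to append to
        self._tie = count()    # Tie-breaker so node objects never get compared

    def push(self, cost, node):
        entry = (cost, next(self._tie), node)
        # Cheaper than the worst node in the cheap list, or cheap list not full?
        if len(self.cheap) < CHEAP_LIST_SIZE or cost < self.cheap[-1][0]:
            bisect.insort(self.cheap, entry)   # Insert in sorted order
        else:
            self.expensive.append(entry)       # Just append, no sorting

    def pop_cheapest(self):
        # (Assumes the open list is not empty)
        if not self.cheap:
            # Refill the cheap list with the cheapest nodes from the unsorted list
            self.expensive.sort()
            self.cheap = self.expensive[:CHEAP_LIST_SIZE]
            self.expensive = self.expensive[CHEAP_LIST_SIZE:]
        cost, _, node = self.cheap.pop(0)      # First element is the cheapest
        return cost, node
```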
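And a tiny sketch of the iterative-deepening retry loop from the list above; the `a_star` function and its `max_iterations` limit are assumptions, the point is only the increasing budget.

```python
def find_path_with_iterative_deepening(a_star, start, goal):
    """Retry with a larger iteration budget each time the restricted A* fails.
    a_star(start, goal, max_iterations) is assumed to return None on failure."""
    for max_iterations in (50, 200, 800, 3200):
        path = a_star(start, goal, max_iterations=max_iterations)
        if path is not None:
            return path
    return None  # Give up for now; the map may have changed by the next game loop
```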
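The octile heuristic and the inflated (non-admissible) h-cost from the two points above are both one-liners; here's a sketch assuming a diagonal movement cost of 1.41:

```python
def octile_heuristic(ax, ay, bx, by):
    """Distance on a grid that allows diagonal moves with cost ~1.41."""
    delta_x = abs(ax - bx)
    delta_y = abs(ay - by)
    return max(delta_x, delta_y) + 0.41 * min(delta_x, delta_y)

def weighted_heuristic(ax, ay, bx, by, weight=1.5):
    """Multiply the h-cost by a constant > 1: the search expands fewer nodes,
    but the heuristic is no longer admissible, so the path may not be optimal."""
    return weight * octile_heuristic(ax, ay, bx, by)
```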

Sources

July 1, 2017

The secrets of Artificial Intelligence in classic games

0 comments
The old classic computer games were fun and often challenging because of the Artificial Intelligence that controlled the enemies. But how could the AI on those old computers be challenging? This is a short description of how the AI in classic games worked and the tricks it used.
  • Warcraft. The AI was waiting for some fixed amount of time, and then started spawning a wave of units to attack you. It would spawn these units at the edge of the gray fog and not at the base as you first may have thought. It continued to spawn these units until your defenses were nearly overwhelmed - and then it would stop, leaving you to kill the remaining enemy units and win the fight. So the AI didn't have to worry about building buildings, saving money, or building units. 
  • Pac-Man. The AI had two states: a normal state, when the player is collecting dots, and another state, when the player has eaten a power-up. In the normal state, each of the four ghosts moves in a straight line until it reaches a junction. At a junction, it chooses a new route: either the route in the direction of the player (calculated using an offset, so no pathfinding), or a random route. The choice depends on the ghost: each has a different probability of doing one or the other (a small sketch of this junction rule follows after the list). This sounds like stupid AI, but those who played the game thought it was smarter than it was, arguing that the ghosts tried to set traps or group up to attack the player.
  • The Sims. This game put the intelligence in the world and not in the character. The objects in the world, like a fridge or a television, tell nearby sims what they offer, such as food or entertainment. The object will also tell the sims how to perform the action to get the offer. 
  • Starcraft. The game had several pathfinding problems. One of them was that the units that collected minerals jammed. The solution was: "whenever harvesters are on their way to get minerals, or when they're on the way back carrying those minerals, they ignore collisions with other units."
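To make the Pac-Man junction rule concrete, here is a hedged sketch in Python; the probabilities, the direction handling, and the target offset are assumptions for illustration, not the original arcade code.

```python
import random

def direction_toward(directions, dx, dy):
    """Pick the available direction that points most toward the vector (dx, dy)."""
    return max(directions, key=lambda d: d[0] * dx + d[1] * dy)

def choose_route_at_junction(ghost_x, ghost_y, target_x, target_y,
                             possible_directions, chase_probability):
    """Each ghost has its own chase_probability, which is what makes the ghosts
    feel like different personalities. The target can be the player's position
    or an offset from it, so no pathfinding is involved."""
    if random.random() < chase_probability:
        return direction_toward(possible_directions,
                                target_x - ghost_x, target_y - ghost_y)
    return random.choice(possible_directions)
```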

Sources