December 28, 2015

Nikola Tesla on Artificial Intelligence

2 comments

Nikola Tesla (1856 - 1943) was an inventor born in what is now Croatia, best known for his contributions to the design of the modern alternating current (AC) electricity supply system. His brilliance was, in a way, also a problem: he could design his machines entirely in his head, so he didn't always build the final product. And if you don't build and sell the final product, you won't make much money. So Tesla came up with brilliant machines, yet he still had to borrow money to survive.

Tesla also lent his name to the electric car company Tesla Motors. One of the founders of that company, JB Straubel, was a fan of Nikola Tesla (although it wasn't he who gave the company its name), and his favorite book about the man is Wizard: The Life and Times of Nikola Tesla by Marc Seifer. I finished reading that book and learned, among other things, that one of the reasons Tesla could accomplish so much was his ambition. In this he was very similar to Albert Einstein, who could also dedicate his entire life to whatever he was working on.

When Tesla was a young student, his teachers were worried about him because he had "a veritable mania for finishing whatever I began." He simply couldn't stop himself from doing whatever he was doing. If he began reading a book, he couldn't do anything else before he had finished it. So the teachers warned that "the boy was at risk of injuring his health by obsessively long and intense hours of study." He could study for 20 hours a day.

Tesla brought his ambitions with him when he moved to the US. He could experiment day and night, holidays not excepted. He drove himself until he collapsed, working around the clock with few breaks. He preferred working through the night, when distractions could be minimized and concentration could be intensified. He argued that "every hour, every moment, that was not spent working on inventions was time away from his purpose." Even the intervals spent eating and sleeping delayed progress, so he reduced his sleeping to a minimum and his eating to the bare necessities. He claimed he could sleep 2 hours per day while "dozing" from time to time to recharge his batteries. He said:
I get all the nourishment I require from my laboratory. I know I am completely worn out, and yet I cannot stop my work. These experiments of mine are so important, so beautiful, so fascinating, that I can hardly tear myself away from them to eat, and when I try to sleep I think about them constantly.

Another thing I learned from the book was that Tesla was also interested in Artificial Intelligence. The young Tesla studied the theories of René Descartes, who envisioned animals, including man, as simply "automata incapable of actions other than those characteristic of a machine." Tesla said that he wanted "to devise mechanical means for doing away with needless tasks of physical labor so that humans could spend more time in creative endeavors." When Tesla was asked to predict the future, he said that robots and thinking machines would replace humans. His vision was that machines could liberate the worker and that fighting machines could replace soldiers on the field.

Tesla had come to see the human body in its essence as a machine. He said that memory "is but increased responsiveness to repeated stimuli." It's unclear if he actually tried to build a machine similar to himself, but he was thinking about it:
Long ago I conceived the idea of constructing an automaton which would mechanically represent me, and which would respond, as I do myself, but of course, in a much more primitive manner to external influences. Such an automaton evidently had to have motive power, organs for locomotion, directive organs and one or more sensitive organs so adapted as to be excited by external stimuli. Whether the automaton be of flesh and bone, or of wood or steel, it mattered little, provided it could perform all the duties required of it like an intelligent being.

But what is known is that Tesla built a remote-controlled boat. To him, his boat was not simply a machine, it was "a new technological creation endowed with the ability to think." It was also, to him, the first non-biological life-form on the planet; he argued that life-forms need not be made out of flesh and blood. He said:
Even matter called inorganic, believed to be dead, responds to irritants and gives unmistakable evidence of a living principle within.

December 22, 2015

The secrets behind Albert Einstein's success

0 comments

I've read the book Einstein: His Life and Universe by Walter Isaacson, who is also the author of the most famous book about Steve Jobs. This summer I also read another book by Walter Isaacson called The Innovators, which is all about the history of the digital age, ranging from Charles Babbage's early mechanical computers to Google. I also tried to read his book about Benjamin Franklin, but gave up because it was filled with politics, which is not really my cup of tea.
The basic theme of The Innovators is that those who collaborated with other inventors succeeded, while those who didn't collaborate failed. It was the computer built by a team that succeeded, while the computer built by the lone inventor failed. But this is not always true, because Albert Einstein was in fact a loner who succeeded.
Einstein didn't invent anything, but he developed the theories he's now famous for while working as a patent examiner. Why was he working as a patent examiner? Because no one wanted to hire him. Einstein was actually the only person graduating in his section who was not offered a job, and he often didn't even get a reply to his applications! But he responded with humor by saying "God created the donkey and gave him a thick skin."
So while trying to remain optimistic, Einstein examined patents six days a week, and in the evenings he developed the theories that would eventually earn him the Nobel Prize in Physics in 1921. He was so efficient that he managed to do a full day's work in three hours, and during the remaining part of the day he would work on his own ideas. It was doing what he enjoyed that kept him sane while his peers advanced in academia. "What kept him happy were the theoretical papers he was writing on his own."
In hindsight, Einstein argued that it was actually good for him not to get an academic job, because he wasn't influenced by other people's thinking and could develop his own "crazy" ideas. "An academic career in which a person is forced to produce scientific writings in great amounts creates a danger of intellectual superficiality." So Einstein was a rebel, and there was a link between his creativity and his willingness to defy authority. He could throw out conventional thinking that had defined science for centuries.
So how did he do it? 
  • Have imagination. Einstein argued that "Imagination is more important than knowledge." He also argued that "the value of a college education is not the learning of many facts but the training of the mind to think." Einstein never began with experimental data. Instead, he generally began with postulates he had abstracted from his understanding of the physical world. Einstein's ideas are abstract and not always easy to grasp, but he believed that the end product of any theory must be conclusions that can be confirmed by experience and empirical tests. He is famous for ending his papers with calls for these types of suggested experiments.
  • Do something else when you are stuck. When he couldn't solve a problem he played the violin late at night. "Then, suddenly, in the middle of playing, he would announce excitedly, 'I've got it!'"
  • Work a lot. Einstein was ambitious. He and his wife had separate bedrooms so he could spend more time with his calculations. "For I shall never give up the state of living alone, which has manifested itself as an indescribable blessing." He worked so much that he didn't really enjoy food. When he invited visitors for lunch, he heated cans of beans. Then they ate the beans with spoons directly from the can. Einstein also used his work to escape the complexity of human emotions. When his wife was dying, he worked even more.  
  • Change your mind. Einstein wasn't mindlessly stubborn. When he realized his idea wouldn't work, he was willing to abandon it. Before Hitler came to power, Einstein was a pacifist and thought the solution to war was not to rearm after the First World War. But after the Second World War, Einstein thought he had made a mistake by encouraging Germany's neighbors not to rearm.
  • Be a star. The reason Einstein is now an icon and almost everyone can recognize him if they see a picture of him is because he could, and would, play the role. "Scientists who become icons must not only be geniuses but also performers, playing to the crowd and enjoying public acclaim." And Einstein performed. He gave interviews and knew exactly what made a good story, and he often made jokes during interviews.

    December 15, 2015

    Santa Claus Down - or how to make a game in 48 hours

    0 comments

    This weekend I participated in a competition called Ludum Dare, where the idea is to make a game in 48 or 72 hours. The former is called "Compo" and is more hardcore because you have to create everything on your own, down to the smallest texture and sound. The 72-hour version, which is called "Jam," is more relaxed: you can work in a team and you can use old assets. But I'm hardcore, so I created the game Santa Claus Down in 48 hours.

    The theme for this competition was either "Growing" or "Two button controls" - or both. It's usually just one theme, but this time the voting between the themes was tied. I chose growing as my theme, and the plan was to make a truck that grows with more trailers as you progress in the game, like the classic game Snake.


    The basic idea behind the game is that Santa Claus has crashed and you have to deliver the gifts. I originally planned to create a random town where you have to drive around to deliver all gifts, but I ran out of time. Instead I ended up making an endless road system, where the road varies between highway, a house where you can deliver a gift, and a normal road.


    This was my sixth Ludum Dare competition. I failed once, but I submitted games to the other five competitions. The good thing is that the users who vote also give you constructive criticism. The main criticism I got from the last competitions was that my games were creative but not fun to play. So this competition I decided to make a simpler game and spend about 50 percent of the time making it fun to play. That plan failed because of the failed idea to make a random city, so I spent maybe 30 percent of the time making the game more fun.

    To learn how to make a game more fun, I read the book The Art of Game Design. From that book I learned that (some) players like explosions, and that most players want to play a game where the challenge keeps increasing. So I added explosions when you hit a car. It's not realistic, but people who played the game said it was fun to crash into cars:
    ...I felt like I was fighting the controls a bit, but it did make tail swiping cars more satisfying. As it was, tail swiping cars was really the high point of the game. I think this game could really work as a high paced drive around tailswiping cars and delivering presents to a destination kind of game.
    To make the game increasingly challenging I tweaked:
    • When new cars arrive on the roads
    • When you get a new trailer
    • When the Grinch arrives. I added the Grinch, who drives a green car and wants to crash into your truck. The Grinch only appears after you have delivered a few gifts
    • When ramps appear on the road. I added ramps that you either have to drive over or drive around
    • When a heart appears that gives back some of your health


    For this competition I also added sounds to the game. As I participated in the Compo, I had to make all sounds myself. I didn't have time to go out and record truck sounds, so I used a tool called Bfxr, which generates random sounds. Finding a good truck sound wasn't easy, so I ended up with a sound similar to that of a small boat. One of the players said it all:
    ...putt putt putt putt putt putt putt putt putt putt putt putt.. Don't mind me, just doin' donuts in ma truck!

    Does it look interesting? You can play it here.

    December 9, 2015

    Books I read in 2015

    0 comments
    2015 is almost over, and it's time to summarize which books I've read this year. This year I wanted to learn more about Artificial Intelligence, so the list includes several books on that theme. I keep track of the books through my Goodreads account, so don't feel sorry for me for spending a long time compiling the list, because it didn't take long!

    Artificial Intelligence:
    1. Neuroscience for Dummies
    2. Ten Years To the Singularity If We Really, Really Try... and other Essays on AGI and its Implications
    3. Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies
    4. The Computer and the Brain
    5. Alan Turing: The Enigma
    6. On Intelligence
    7. Consciousness: Confessions of a Romantic Reductionist
    8. How to Create a Mind: The Secret of Human Thought Revealed
    9. Our Final Invention: Artificial Intelligence and the End of the Human Era
    10. Vehicles: Experiments in Synthetic Psychology
    11. I, Robot
    12. Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
    13. Prey
    14. The Quest for Artificial Intelligence: A History of Ideas and Achievements
    15. Artificial Intelligence for Games

    Other:
    1. Stockholms undergÄng
    2. Game Programming Patterns
    3. The Man in the High Castle
    4. The Martian
    5. Python for Data Analysis
    6. SuperBetter: A Revolutionary Approach to Getting Stronger, Happier, Braver and More Resilient--Powered by the Science of Games
    7. Bombmakaren och hans kvinna
    8. The Animator's Survival Kit: A Manual of Methods, Principles, and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators
    9. The Sell: The Secrets of Selling Anything to Anyone
    10. The Innovators: How a Group of Hackers, Geniuses and Geeks Created the Digital Revolution
    11. So, Anyway...
    12. The Signal and the Noise: Why So Many Predictions Fail - But Some Don't
    13. Thunder Run: The Armored Strike to Capture Baghdad
    14. Fundamentals of Computer Programming with C#
    15. SAS Survival Guide: For any climate, for any situation
    16. Almedalen har fallit
    17. The Unthinkable: Who Survives When Disaster Strikes - and Why
    18. Einstein: His Life and Universe
    19. Wizard - The Life and Times of Nikola Tesla

    If I were to recommend one book, it would be The Unthinkable: Who Survives When Disaster Strikes - and Why. As the title reveals, it is all about disasters, how to increase your chance of surviving one, and the psychology behind it all.


    The first story is about an unfortunate woman who almost died in the first World Trade Center attack in New York; a few years later she was in one of the towers when the second attack happened. Even though she was responsible for the evacuation of the floor she was working on, she blacked out and forgot all about it, until a few weeks later she remembered: "Hey, maybe I was the one responsible for getting everyone out of the building."
    Another story is about the passenger ferry Estonia, which sank during a heavy storm. One of the survivors recalled that as he escaped, he walked past several passengers who just sat in chairs in a bar very close to the lifeboats. They could have survived, but they just sat in the chairs doing nothing at all.
    So who survives a disaster? One part of the answer is your life history: if you have lived a rough life, your survival chances increase. Another part is that you have to prepare, so you don't black out, and you have to be aware of the "stupid" mistakes. One stupid mistake many people make in a disaster is looking at what other people are doing. So instead of evacuating the burning World Trade Center, many people just stood there watching other people, who in turn were watching other people. So they didn't escape while they could! Those who had practiced evacuating the buildings escaped as soon as possible and survived.

    November 9, 2015

    Explaining the Hybrid A Star pathfinding algorithm for self-driving cars

    18 comments
    Let's say you are standing somewhere in a room and would like to find the shortest path to a goal. You can see a few obstacles, such as a table, that you would like to avoid. The easiest way to solve the problem (if you are a computer) is to divide the room into many small squares (cells) and then use the common A* (A Star) search algorithm to find the shortest path. But what if you are a car and can't turn on the spot the way a human can? Then you have a problem! Well, at least until you learn the Hybrid A Star search algorithm. With that algorithm you will be able to find a drivable path to the goal.

    The reason I wanted to learn the Hybrid A* algorithm was that I took a class in self-driving cars, where the teacher showed a really cool video with a car that drove around in a maze until it found the goal:


    The problem was that the teacher didn't explain the Hybrid A* algorithm - he only explained the classic A* algorithm - so I had to figure it out on my own. That took some months, so I decided to write a summary of what I learned.

    Before you begin, you have to learn the "normal" A* algorithm. If you don't know where to begin, I suggest you take the same class I took: Artificial Intelligence for Robotics - Programming a Robotic Car. It's free, so don't worry. You should also read the sources I used to figure out the algorithm, mainly the following reports:

    While the reports above explain more complicated versions that are a little faster and may produce better results, I will explain the very basic idea so you can get started. And once you understand the basic idea behind the Hybrid A* search algorithm, you can improve it.

    Hybrid A Star

    First of all you need to learn how to simulate a vehicle that you can drive with the help of math. There are several car models, but you can learn the ideas behind the one I used by watching this video from the class I took (I had to swap sin and cos when I converted the math from Python to Unity because of Unity's coordinate system, so you might have to experiment a little):
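    As a concrete illustration, here is a minimal sketch of a kinematic bicycle model in Python (the Unity-style axis convention, parameter names, and values are my assumptions, not the exact model from the video):

```python
import math

def simulate_car(x, z, heading, v, steering_angle, wheel_base, dt):
    """One step of a simple kinematic bicycle model.

    x, z           -- position of the rear axle
    heading        -- heading in radians
    v              -- speed (negative when reversing)
    steering_angle -- front-wheel angle in radians
    wheel_base     -- distance between front and rear axle
    """
    distance = v * dt
    # Unity-style axes: sin for x and cos for z (swap them for a standard x/y plane)
    x += distance * math.sin(heading)
    z += distance * math.cos(heading)
    heading += (distance / wheel_base) * math.tan(steering_angle)
    return x, z, heading
```

    Calling this in a loop with a fixed dt traces out the arcs and straight segments that the path planner searches over.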


    If you want to attach a trailer to the car, there's a mathematical model you can use for that as well: A car pulling trailers. And if you investigate the area further, you will also find models for odd cars that steer with both the front and rear wheels, so you are never limited by the simulation models. The tricky part is to replicate these simple models with a real car once you have found a path, but that's a later problem!

    The next step is to watch this video, where the teacher Sebastian Thrun explains the basic idea behind the Hybrid A Star algorithm. The video is actually from the basic course in Artificial Intelligence, which is funny because he didn't explain it in the more advanced course.


    The problem with the video is that Sebastian Thrun draws just one line from the first square, even though he should have drawn several lines (one for each steering angle). According to the reports above, you should simulate three angles: maximum steer left, maximum steer right, and straight forward. I'm currently using the following steering angles: [-40, 0, 40] degrees. The driving distance (d) is the diagonal of one cell plus some small distance, so you never end up in the same cell again.

    So from each node, you drive forward a distance d with three different steering angles, and if the car can reverse, you also reverse the same distance with the same three angles. So from each node, you end up with 6 child nodes. For each child, you have to calculate the cost-so-far (g-cost) and the heuristic (h-cost). The self-driving car Junior used a more complicated heuristic, but you can use the traditional Euclidean distance as the heuristic before you begin calculating something more advanced.
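    A sketch of the expansion step, assuming a small hypothetical Node class and made-up penalty weights (the real values are tuning parameters):

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    x: float
    z: float
    heading: float
    steering: float
    g: float = 0.0
    h: float = 0.0
    parent: "Node" = None

    @property
    def f(self):
        return self.g + self.h

def expand_node(node, drive_distance, wheel_base, goal):
    """Generate the 6 children of a node (3 steering angles, forward and reverse)."""
    children = []
    for steering_deg in (-40.0, 0.0, 40.0):
        for direction in (1, -1):                  # 1 = forward, -1 = reverse
            steering = math.radians(steering_deg)
            d = direction * drive_distance
            x = node.x + d * math.sin(node.heading)
            z = node.z + d * math.cos(node.heading)
            heading = node.heading + (d / wheel_base) * math.tan(steering)

            g = node.g + abs(d)
            if direction == -1:
                g += abs(d) * 0.5                  # assumed reversing penalty
            if steering != node.steering:
                g += 0.3                           # assumed steering-change penalty

            h = math.hypot(goal[0] - x, goal[1] - z)   # Euclidean heuristic
            children.append(Node(x, z, heading, steering, g, h, parent=node))
    return children
```

    The parent pointer is what lets you walk backwards from the goal node to reconstruct the final path.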

    You also need a few extra costs. According to the reports, you should add an extra cost to nodes that change steering angle compared with the previous node, so you have to save the steering angle in each node. You should also add an extra cost if the node is close to an obstacle, or if you are reversing. The result is that you prioritize nodes that avoid obstacles, because obstacles are bad and may scratch the car's paint.

    Now you have 6 child nodes and have to investigate whether they are valid. A node is obviously not valid if it collides with an obstacle or is outside of the map, so you have to check that. Nor is a node valid if you have closed the cell it's in (which happens after you remove a node from the list of open nodes). Cells are closed per heading angle: a cell is only closed for headings the car has already arrived with. So you need to choose a heading resolution for each cell (I believe the reports use 5 degrees, but I'm using 15 degrees or you end up with far too many nodes). Each child node has a heading, and you round that heading to the resolution you chose. Then you need a data structure that can close a cell for a specific rounded heading. The car can move into a cell as long as it has not been there before with that rounded heading.
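    One way to implement this closed set is a plain set keyed on (cell row, cell column, rounded heading); the 15-degree resolution is the value discussed above, while the one-meter cell size is an assumption:

```python
import math

HEADING_RESOLUTION = 15.0          # degrees per heading slot
CELL_SIZE = 1.0                    # assumed cell size in meters
SLOTS = int(360.0 / HEADING_RESOLUTION)

closed = set()

def cell_key(x, z, heading):
    """Discretize a continuous state so it can be stored in the closed set."""
    row = int(z // CELL_SIZE)
    col = int(x // CELL_SIZE)
    heading_deg = math.degrees(heading) % 360.0
    slot = int(round(heading_deg / HEADING_RESOLUTION)) % SLOTS
    return (row, col, slot)

def try_close(x, z, heading):
    """Close the cell for this rounded heading; return False if already closed."""
    key = cell_key(x, z, heading)
    if key in closed:
        return False
    closed.add(key)
    return True
```

    With 15 degrees you get 24 heading slots per cell, so the same cell can be visited up to 24 times from different directions.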

    All valid child nodes should now be added to the list of open nodes (as in normal A star). The biggest problem I had from a performance point of view was the search among the open nodes for the node with the lowest f-cost. After a few attempts with different data structures, I realized that a heap (priority queue) was the fastest.
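    In Python the open list can be the standard library's binary heap, heapq; a sketch (the counter is a tie-breaker so entries with equal f-cost never have to compare the nodes themselves):

```python
import heapq

open_nodes = []        # heap of (f-cost, counter, node) tuples
counter = 0

def push(f_cost, node):
    """Add a node to the open list in O(log n)."""
    global counter
    heapq.heappush(open_nodes, (f_cost, counter, node))
    counter += 1

def pop_lowest():
    """Remove and return the open node with the lowest f-cost in O(log n)."""
    _, _, node = heapq.heappop(open_nodes)
    return node
```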

    When you remove a node from the list of open nodes, you need to check whether it has reached the goal. It will never hit the goal exactly, so you check whether it's close enough by comparing the node's position and heading with the position and heading you want.

    That's it! If you add the ideas from above you will be able to make something that looks like this:


    Make it better

    If you read the original reports, you will notice that they use several methods to make the Hybrid A star faster. These methods include:

    Reeds-Shepp paths
    If your map had no obstacles, you wouldn't need Hybrid A* because you could use a Reeds-Shepp path, which is the shortest path between two cars if the cars can reverse. I've written a separate article about them, and they were so annoying to implement that I've decided to give away the code for free: Download Reeds-Shepp paths for Unity in C#. You can use them in the Hybrid A* first as a heuristic instead of just the Euclidean distance, and you can also use them when expanding nodes. So instead of 6 child nodes, you get 6 + 1 nodes, where the extra node comes from the shortest Reeds-Shepp path.


    Flowfield
    A flowfield is very similar to the traditional A* algorithm, but instead of finding the shortest path from one cell to another cell, the flowfield algorithm finds the shortest path from all cells to one or more target cells. This is useful, for example, as a heuristic in Hybrid A* because the flowfield avoids obstacles. So as heuristic you should use the maximum of the Euclidean distance, the Reeds-Shepp path length, and the flowfield distance. It looks like this (red is far away and green is close):
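    Since every step between neighboring cells costs the same, the flowfield can be computed with a plain breadth-first search from the target cells; a minimal sketch:

```python
from collections import deque

def flow_field(grid, goals):
    """Breadth-first distance from every free cell to the nearest goal cell.

    grid  -- 2D list where True marks an obstacle
    goals -- iterable of (row, col) goal cells
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    queue = deque()
    for r, c in goals:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] \
                    and dist[nr][nc] == float("inf"):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

    Cells that stay at infinity are unreachable, which is itself useful: a node in such a cell can never lead to the goal.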


    Voronoi field
    A Voronoi field tells you the distance to the closest Voronoi edge and the closest obstacle. To create a Voronoi field, you first need to create a Voronoi diagram, which tells you the distance to the closest obstacle (the border is also an obstacle). You can create it with the help of a flowfield algorithm:


    Then you need to identify the Voronoi edges, which are the edges where the colors meet in the image above. Then you run the flowfield algorithm a second time to find the distance from the Voronoi edges to the obstacles, and when you have these distances you can use the formula from the reports to calculate the Voronoi field:


    If you have the Voronoi field, you can speed up collision detection, because you now know the distance from each cell to the closest obstacle. You can also use it when calculating the costs for each node. You should add a cost if a node is close to an obstacle, but you also want to find the shortest path and don't want the car to take a long detour around obstacles. The Voronoi field helps you add a small cost when you are close to an obstacle but still far enough away, so the car will not be "scared" of obstacles.
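    For reference, one common formulation of the Voronoi field is the one used in the reports about Junior (I'm reproducing it from memory, so verify it against the reports; alpha and d_max are tuning parameters and the defaults below are assumptions):

```python
def voronoi_field(d_obstacle, d_edge, alpha=10.0, d_max=30.0):
    """Potential in [0, 1]: 1 right at an obstacle, falling off to 0 at the
    Voronoi edges and beyond the cutoff distance d_max.

    d_obstacle -- distance from the cell to the closest obstacle
    d_edge     -- distance from the cell to the closest Voronoi edge
    """
    if d_obstacle >= d_max:
        return 0.0
    return (alpha / (alpha + d_obstacle)) \
         * (d_edge / (d_obstacle + d_edge)) \
         * ((d_obstacle - d_max) ** 2 / d_max ** 2)
```

    The nice property is that the penalty scales with how narrow the passage is, not just with the raw obstacle distance, so the car can still squeeze through tight corridors.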

    Make the car follow the path

    The path you get from the Hybrid A star is ugly. To make it easier for the car to follow, you need to smooth it with some technique like gradient descent (which you will learn in the online class I recommended). You might also need to add more waypoints between the original waypoints to make the path easier to follow.
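    The smoothing step can be sketched like this, based on the gradient-descent smoother taught in the class (the weights below are values I picked for illustration, not the course's exact numbers):

```python
def smooth_path(path, weight_data=0.5, weight_smooth=0.25, tolerance=1e-6):
    """Pull each waypoint toward the average of its neighbors while keeping it
    close to its original position. Endpoints are left fixed."""
    new = [list(p) for p in path]
    change = tolerance
    while change >= tolerance:
        change = 0.0
        for i in range(1, len(path) - 1):
            for d in range(len(path[0])):
                old = new[i][d]
                new[i][d] += weight_data * (path[i][d] - new[i][d]) \
                           + weight_smooth * (new[i - 1][d] + new[i + 1][d] - 2.0 * new[i][d])
                change += abs(old - new[i][d])
    return new
```

    Raising weight_smooth gives a smoother but less faithful path; raising weight_data keeps the path closer to the original waypoints.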


    To make the car follow the path, you can use a PID controller. The problem is that you used the rear axle of the car when generating the path in the Hybrid A* algorithm, but the car follows a path more easily if the path is defined for the front axle. So you need to move the path forward by the distance between the front and rear axles (the wheel base).
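    The shift is just a translation of every waypoint along its own heading by the wheel base; a sketch using the same Unity-style axis convention as in the car model above:

```python
import math

def shift_to_front_axle(path, wheel_base):
    """Move each (x, z, heading) rear-axle waypoint forward along its heading
    so the path describes the front axle instead."""
    return [(x + wheel_base * math.sin(heading),
             z + wheel_base * math.cos(heading),
             heading)
            for x, z, heading in path]
```

    Passing a negative wheel_base shifts the path backwards instead, which is the "mirrored" path used for reversing described below.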

    To make the car reverse along the path, you instead move the path in the opposite direction, so the reverse path can be seen as a "mirrored front path." When the car is reversing along the mirrored path, you use the mirrored wheel-base distance to calculate the error to the path. In the video below, this mirrored path is the green line, the red line is the original path, and the blue line is the path the vehicles use when driving forward.

    And if everything works fine, it should look like this:


    I believe Tesla is using something similar in their Summon feature, because the paths look very similar from above. But in the video you can see that the Tesla is also following traffic rules (driving in the right lane, etc.). To add that, you could maybe add a weight so that a node gets a lower priority if the vehicle is driving in the wrong direction in a lane, but that's for another update!


    Test it and download the source code

    October 13, 2015

    Forest Fire Simulator Update - improved trees and fire

    2 comments
    A few weeks ago I finished the first version of a forest fire simulator in Unity. I used the real physics equations, which worked perfectly fine, but what didn't work fine was the performance of the simulation. The physics equations were not the problem - the problems were the number of trees and the fire and smoke. I even had to add a special button so the user could remove the smoke and flames, because everything was so slow. This is how it looked:


    You can see in the image above that the simulation is running at 12 frames per second, which is not that good! To improve that number I first decided to improve the trees. The problem was that the trees were all individual objects, and to improve performance you have to combine them into fewer objects. But I also have to remove trees that have burned down and add darker trees. After a few experiments, I realized that the fastest way was to combine all trees of each tree type once at the beginning, and then cheat by moving trees that should not be seen to positions the user can't see. It looks like this behind the scenes:


    You can see that all black trees are hidden below the ground. When one of those trees is supposed to be visible, I just move its vertices above the ground. This is super fast, but it was tricky to figure out, and no one else had really discussed it when I searched Google for it. So I decided to write my own tutorial on how to do it. You can find it here: Dynamic Mesh Combining. It took a while to write, but I'm a big believer in the idea that you should share your knowledge, and if no one else has done so, people will find you through Google and you might get links and sell more of your products! Moreover, if you do well you will get comments like this on Reddit, which is good for the self-esteem:
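    The trick can be illustrated engine-agnostically: keep one big vertex array per tree type and toggle a tree by offsetting its slice of vertices (in Unity you would then write the modified array back through the mesh's vertices property). A sketch with made-up values:

```python
HIDE_Y_OFFSET = -10000.0   # assumed: far enough below the ground to never be seen

def build_visible_mesh(base_vertices, verts_per_tree, hidden_trees):
    """Return the vertex list to upload, with hidden trees moved below ground.

    base_vertices -- flat list of (x, y, z) for the combined mesh
    verts_per_tree -- how many consecutive vertices belong to one tree
    hidden_trees  -- set of tree indices that should not be visible
    """
    out = []
    for i, (x, y, z) in enumerate(base_vertices):
        if i // verts_per_tree in hidden_trees:
            y += HIDE_Y_OFFSET
        out.append((x, y, z))
    return out
```

    Because the mesh keeps the same vertex count and triangle list, only the vertex positions need to be re-uploaded, which is what makes this cheap.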


    Anyway, the last problem was the fire and smoke. I used to have one fire and one smoke particle system in each square. But when the entire forest is on fire, this becomes really slow. A better way is to have fewer but larger fire and smoke particle systems. So I wrote an algorithm that searches through the map and tries to build larger squares, up to 12 by 12 squares. The result is this:


    You can clearly see that the entire forest is on fire, but the simulation is running at a fantastic 30 frames per second thanks to the improved trees and fire. I also think the smoke looks more realistic, so that's a good side effect: faster and better.
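    The square-building idea can be sketched as a greedy scan over the fire map (this illustrates the approach; it is not the exact code from the simulator):

```python
def merge_fire_squares(burning, max_size=12):
    """Greedily cover all burning cells with as-large-as-possible squares.

    burning -- 2D list of booleans (True = cell is on fire)
    Returns a list of (row, col, size) squares, one particle system each.
    """
    rows, cols = len(burning), len(burning[0])
    covered = [[False] * cols for _ in range(rows)]
    squares = []
    for r in range(rows):
        for c in range(cols):
            if not burning[r][c] or covered[r][c]:
                continue
            # Grow the square while every newly added cell is burning and uncovered
            size = 1
            while size < max_size and r + size < rows and c + size < cols:
                edge = [burning[r + size][c + i] and not covered[r + size][c + i]
                        for i in range(size + 1)]
                edge += [burning[r + i][c + size] and not covered[r + i][c + size]
                         for i in range(size)]
                if not all(edge):
                    break
                size += 1
            for i in range(size):
                for j in range(size):
                    covered[r + i][c + j] = True
            squares.append((r, c, size))
    return squares
```

    A fully burning forest of 144 cells then needs a single 12-by-12 particle system instead of 144 small ones.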

    If you want to test the forest fire simulator you can test it here: Forest Fire Simulator

    September 28, 2015

    Improving Unity's physics engine PhysX to achieve higher accuracy

    0 comments
    Unity is a popular game engine with a built-in physics engine called PhysX. Like all other physics engines, PhysX uses numerical integration techniques to simulate real-world physics. Force and movement calculations can get pretty complicated, so in most cases it is impossible to calculate the exact movements. This is why the physics engine uses numerical integration techniques to approximately integrate the equations of motion.

    The problem with several of the available numerical integration techniques is that the result is not always 100 percent accurate, because the game also has to run fast on your computer. Most game engines prefer less accuracy but faster games, because the player will not notice the difference anyway. But what if you are going to make a game that requires more accurate movements, like a sniper game?

    Low accuracy is a problem I ran into a few days ago when I simulated bullet trajectories. First I calculated the angle needed to hit the target, then I fired a bullet with Unity's physics engine, and then I noticed that the bullet didn't hit the target. At first I thought I had made an error in the calculation of the angle, but then I realized that the bullet missed because of limitations in the physics engine.

    So which integration method is PhysX using? The answer (according to my research on the Internet) is that no one really knows for sure. So let's make an experiment to find out! The first integration method I used was Euler Forward, and you can see the result here:


    You can see that the trajectory line when using Euler Forward overshoots the target and does not follow the same path as the bullet, which uses PhysX. So let's try Backward Euler:


    You can see that the trajectory line undershoots the target but follows the same path as the bullet (compare with the bullet in the image that uses Euler Forward to calculate the trajectory line). So there's a chance that PhysX is using Backward Euler for all its physics. But we are still not hitting the target. So let's try Heun's Method:


    You can see that we now hit the target with the trajectory line. But to improve the bullet trajectories, you have to write your own physics for the bullets so that they also use Heun's Method instead of Backward Euler (or whatever method PhysX is actually using). If you would like to learn how, I've written a tutorial: How to make realistic bullets in Unity.
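    If you want to see the difference between the schemes yourself, here is a small drag-free projectile experiment in Python (note that what game developers call "Backward Euler" is often really semi-implicit Euler, so treat the labels as assumptions):

```python
def simulate(method, v0=(10.0, 10.0), g=-9.81, dt=0.02, steps=100):
    """Integrate a drag-free projectile and return the final (x, y) position."""
    x, y = 0.0, 0.0
    vx, vy = v0
    for _ in range(steps):
        if method == "euler_forward":
            x, y = x + vx * dt, y + vy * dt       # position first, using old velocity
            vy += g * dt
        elif method == "euler_semi_implicit":     # often labeled "backward" in engines
            vy += g * dt                          # velocity first, then position
            x, y = x + vx * dt, y + vy * dt
        elif method == "heun":                    # average of start and end slopes
            vy_new = vy + g * dt
            x, y = x + vx * dt, y + 0.5 * (vy + vy_new) * dt
            vy = vy_new
    return x, y
```

    With these numbers, forward Euler ends above the analytic height, semi-implicit Euler ends below it, and Heun's method matches it, which mirrors the over- and undershooting trajectory lines in the images above.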

    September 22, 2015

    Random Show Episode 29

    0 comments
    A new episode of the Random Show with Kevin Rose (founder of Digg) and Tim Ferriss (author of The 4-Hour Workweek) is out! This is episode 29.


    Lessons learned
    • Kevin Rose's new watch-blog-company, Hodinkee, has a "small" but engaged audience. They had 1.3 million user sessions in a month from users who have opened the app more than 200 times. By the way, the name is not Swedish for "small watch" - at least I've never heard of it, and I've been speaking Swedish for more than thirty years. According to Google Translate, the word "hodinke" means "watch" in Czech/Slovak.

    Recommendations
    • Kevin Rose recommended the book The Okinawa Program, even though he also said some parts of the book were rubbish, and he also recommended eating germinated brown rice with seaweed, sesame, and eggs.
    • It was difficult to hear, but I think Tim Ferriss recommended The Age of Miracles Animal Rescue if you want to adopt a dog.
    • Tim Ferriss recommended the podcast player app Overcast and playing Tetris. Why Tetris, you might ask? The reason is that he interviewed Jane McGonigal, who argues that playing Tetris a short time after a traumatic event will minimize the risk of post-traumatic stress. He also recommended the book Anything You Want.

    If you want to watch the rest of the episodes, you can find them here: The Random Show with Kevin Rose and Tim Ferriss.

    September 13, 2015

    Simulation of a forest fire in Unity

    2 comments
    This summer I decided to learn more about rockets and found an online course called Differential Equations in Action by Udacity. The first and second lessons in that course teach you how to bring the unfortunate Apollo 13 spacecraft back from space to Earth. You will also learn about ABS brakes and how many people have to be vaccinated to stop an outbreak of an epidemic.
    But the sixth lesson is all about forest fires, and you will learn how to simulate a forest fire in Python. The differential equations include change in temperature from:
    • Heat diffusion
    • Heat loss
    • Wind speed
    • Combustion of wood      
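    To give a flavor of how such a model is stepped forward in time, here is a minimal Python sketch of just the heat diffusion term using Euler forward (this is my own illustration, not the course's actual code; the grid size, time step dt, and diffusion constant k are made-up values):

```python
# Heat diffusion on a 2D grid, stepped with Euler forward.
# Each interior cell's temperature moves toward the average of its
# four neighbors; the boundary cells are held fixed.

def diffuse_step(T, dt, k):
    """One Euler forward step of dT/dt = k * laplacian(T) on a grid."""
    rows, cols = len(T), len(T[0])
    new = [row[:] for row in T]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            laplacian = (T[i-1][j] + T[i+1][j] + T[i][j-1] + T[i][j+1]
                         - 4 * T[i][j])
            new[i][j] = T[i][j] + dt * k * laplacian
    return new

# A 5x5 grid at ambient 37 C with a 700 C ignition point in the middle.
grid = [[37.0] * 5 for _ in range(5)]
grid[2][2] = 700.0
for _ in range(10):
    grid = diffuse_step(grid, dt=0.1, k=0.5)
print(round(grid[2][2], 1))  # the hot cell cools as heat spreads outward
```

    The full model adds heat loss, wind, and combustion as extra terms in the same update, which is why Euler forward translates so directly from the Python course code to Unity.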
    I thought the simulation of a forest fire was really interesting, but it was boring to simulate it in Python. Wouldn't it be more interesting to see the forest burn in real time? Yes, it would! So I decided to make a forest fire in Unity.
    It was really easy to translate the code from Python to Unity and make a real-time simulation. The code in Python used a method called Euler forward to solve the differential equations, and Euler forward works really well in Unity. This is the result:

    The temperature at the start of the fire is about 700°C and the surrounding temperature is set to 37°C.

    After about 2 minutes the fire has spread towards the north-west, because the wind is blowing towards the north-west. The core temperature is now about 2500°C and the fire covers an area of 50 × 50 meters.

    After 20 minutes the fire covers the entire area it can cover in the simulation. Notice that the forest that's not north-west of the fire has not ignited. The temperature at the edge is still between 100°C and 200°C, so you don't want to be there, but the wood has not ignited.

    After 1 hour, 20 percent of the wood has gone up in flames. The trees change shape after a certain amount of wood has burned up. The core temperature is now 4000°C. 

    Still smoking after 2 hours, but the core temperature has gone down to 2500°C, so the fire is dying.

    After 5 hours, 70 percent of the wood has gone up in flames. The core temperature has gone down to 500°C.

    The outer part of the forest fire that's not in the wind direction has now stopped burning.

    After 6 hours and 12 minutes, the last part of the forest has finally stopped burning. 

    ...or if you are more interested in a video (not the same fire as in the images)


    Looks interesting? You can test it here: Forest Fire Simulator

    August 18, 2015

    Was Alan Turing a silver bug?

    0 comments
    I'm reading the book Alan Turing: The Enigma by Andrew Hodges. The book is a biography of the mathematician Alan Turing, the star of the 2014 movie The Imitation Game, in which he was portrayed by actor Benedict Cumberbatch. While the movie focused mainly on Alan Turing's code-breaking skills, he is also famous for having thought deeply about Artificial Intelligence.

    One thing Alan Turing is not famous for is his interest in the commodity silver. During the Second World War, Alan Turing wanted to protect his savings against imminent disaster in case Germany actually managed to invade Britain. A co-worker had seen how silver was the one thing that had gained in real value during the First World War. So Alan Turing decided to invest in physical silver.
    Apparently he imagined that by burying the silver ingots, he [Alan Turing] could recover them after an invasion had been repelled, or that at least he could evade a post-war capital levy. (In 1920, Churchill and the Labour party had both favored such a policy.) It was an odd idea.
    He bought two [silver] bars, worth about £250, and wheeled them out in an old pram to some woods near Shenley. One was buried under the forest floor, the other under a bridge in the bed of a stream. He wrote out instructions for the recovery of the buried treasure and enciphered them.
    Fast forward to 1952
    ...the main point of the weekend was to make one last serious attempt to retrieve the silver bars. This time Don [Alan Turing's friend] had got hold of a commercial metal detector, and they went out to the bridge near Shenley in his car. Alan said, "It looks a bit different," as he took off his socks and shoes and paddled in the mud. "Christ, do you know what's happened? They've rebuilt the bridge and concreted over the bed!"
    They tried for the other bar in the woods, finding that the pram in which he had wheeled the ingots in 1940 was still there, but without any more luck than before in locating the spot. Giving up both bars as lost forever, they made their way to the Crown Inn at Shenley Brook End for some bread and cheese.   

    According to this source, in 1944, 1946 and 1952 Alan Turing tried to find them and failed. No-one knows what happened to his buried treasure!

    June 20, 2015

    Peter Lynch was right - you can't predict the economy

    0 comments
    This is a quote by the famous investor Peter Lynch
    I spend about 15 minutes a year on economic analysis. The way you lose money in the stock market is to start off with an economic picture. I also spend 15 minutes a year on where the stock market is going.
    According to Peter Lynch, it's easy to overestimate the skill and wisdom of professionals who are trying to predict the economy. If an "expert" on television with a fancy job title is predicting the economy, it's easy for those who haven't studied Peter Lynch to believe the expert knows something that you and I don't understand. The truth is that the expert knows as little as everyone else does, even though many experts believe they know something.
    Peter Lynch describes it more deeply in his book One up on Wall Street. According to the book, there are 60,000 economists in the US. Many of them are employed full-time trying to forecast recessions and interest rates. If they could do it successfully twice in a row, they'd all be millionaires by now. But the truth is that they aren't. 
    The idea that it's impossible to predict the economy might first sound strange. I myself had a hard time before I accepted this fact. The question is why so many experts are wasting their time, even though they are not always aware of it, while the amateurs continue to listen to them.
    Because Peter Lynch doesn't explain exactly why in his book, I had to wait a few years to really understand why it's impossible to predict the economy. I finally found the answer in the book The Signal and the Noise by Nate Silver. That book includes several chapters on why so many predictions fail, including why it's impossible to predict earthquakes, why the US failed to predict Pearl Harbor, and why some people didn't trust those who predicted Hurricane Katrina. Another chapter in the book is about why it's unnecessary to listen to statements like:
    • The economy will create 150,000 jobs next month
    • GDP will grow by 3 percent next year
    • Oil will rise to $120 per barrel

    The secret truth about economic forecasts
    According to Nate Silver, economic forecasts are blunt instruments at best, rarely being able to anticipate economic turning points more than a few months in advance. In fact, these forecasts have failed to "predict" recessions even once they were already under way: a majority of economists didn't think we were in one when the three most recent recessions were later determined to have begun. 
    Peter Lynch has the same ideas as Nate Silver. Again a quote from his book:
    Nobody called to inform me of an immediate collapse in October [1987], and if all the people who claimed to have predicted it beforehand had sold out their shares, then the market would have dropped the 1,000 points much earlier due to these great crowds of informed sellers.
    Every year I talk to executives of a thousand companies, and I can't avoid hearing from the various gold bugs, interest-rates disciples, Federal Reserve watchers, and fiscal mystics quoted in the newspapers. They can't predict the markets with any useful consistency, any more than the gizzard squeezers could tell the Roman emperors when the Huns would attack.
    One of the latest examples showing how inaccurate experts are when predicting the economy is the Credit Crisis of 2008. About one year earlier, economists in the Survey of Professional Forecasters expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008. And they thought there was almost no chance of a recession as severe as the one that actually hit the world in 2008 where GDP shrank by 3.3 percent. It was a scenario the economists thought would happen with a probability of just 3 percent. 
    The above isn't just a single case that proves the rule. In a report with data from 1968 up until now, predictions by the Survey of Professional Forecasters for GDP fell outside the prediction interval almost half the time. So it is clear that the economists weren't merely unlucky - they fundamentally overestimated the reliability of their predictions.

    Why is it so difficult to predict the economy? 
    According to Nate Silver, economic forecasters face 3 fundamental challenges:
    1. It is very hard to determine the cause and effect from economic statistics alone. There are millions of statistical indicators in the world, and a few will happen to correlate with stock prices and GDP, even though it's just a coincidence. For example, ice cream sales and forest fires are correlated because both occur more often in the summer, but there's no causation. With so many economic variables to pick from, someone will find something that fits the past data, even though it's just a coincidence. 
    2. The economy is always changing. If the economists have predicted an upcoming recession, the government and the Federal Reserve will take steps to soften the recession. So forecasters have to predict political decisions as well as economic ones. Also, the American economy has changed from an economy dominated by manufacturing to one dominated by the service sector, which will make old models that used to work obsolete. 
    3. The data the economists have to work with isn't that good. With different governments pursuing different policies, the past data is shaped by those policies, making it difficult to use old data and assume it behaves the same way today. Also, the data available may be limited. For example, the period between 1986 and 2006 spans 20 years, but those years contained just two mild recessions.
    So what's the solution?
    We know that Peter Lynch's solution is to spend about 15 minutes a year on economic analysis. Nate Silver, on the other hand, argues that if you have to listen to predictions of the economy, you should listen to the average or aggregate prediction rather than that of any one economist. These aggregate forecasts are about
    • 20 percent more accurate than the typical individual's forecast at predicting GDP
    • 10 percent better at predicting unemployment
    • 30 percent better at predicting inflation. 

    June 19, 2015

    How to tell stories with data and what's the future of journalism?

    0 comments
    Pulitzer-prize winning journalist and editor of the New York Times data journalism website The Upshot, David Leonhardt, shares the tricks of the master storyteller's trade. In conversation with Google News Lab data editor Simon Rogers, he shows how data is changing the world - and your part in the revolution.


    Key points
    • Journalism is not in decline. Journalism (at least American journalism) is better today than it has ever been - even as little as 10 years ago. Yes there are challenges, and the business model is changing. But journalism is still keeping people informed about the world and has not been replaced by click-bait articles. 
    • Why journalism is better today than 10-20 years ago:
      • Journalism is more accurate than it used to be (but not perfectly accurate). One reason is that it is easier to correct inaccurate information, such as spelling errors, in digital articles than in printed ones. It is also easier for the audience to interact with articles and journalists today when everything is digital. The audience can improve the articles.
      • The tools and techniques for telling a story have improved. It is easy today to create interactive visualizations, such as maps that zoom in on an area and give the reader different information depending on where the reader lives. These techniques didn't exist 20 years ago.
      • Journalists are using better data than before. As long as the journalists are using the data in the correct way, the result is better than it used to be. 
      • The audience for ambitious journalism is larger than it was just a few years ago. People from across the globe can read the New York Times. 
    • Not all articles in the New York Times are traditional articles with blocks of text - many are interactive visualizations, essays, Q&As, and videos. But they are not click-bait - they are about serious topics, and the people behind them have put a lot of effort into them. The smartest and clearest way to tell a story is no longer always the traditional article.
    • The New York Times sometimes publishes two versions of an article: a traditional one with just text and a similar one with more visualizations. In one example, the version with more visualizations got 8 times the traffic of the text-only version.
    • Journalists are becoming more and more specialized within a certain area.
    • You can probably find big opportunities within local news, but only if you are using data. 

    June 18, 2015

    How to make better predictions and decisions

    0 comments

    I've read a book called The Signal and the Noise: Why So Many Predictions Fail - but Some Don't by Nate Silver. The basic idea behind the book is that ever since Johannes Gutenberg invented the printing press, the amount of information in the world has increased, making it more and more difficult to make good predictions because of the noise. Moreover, the Internet has increased the information overload, making it even harder to make good predictions. A lot of people are still making what they think are good predictions, even though they shouldn't make predictions at all (*cough* economists), because it is simply impossible to predict everything.
    What most people do when trying to predict something from the available information, like a stock price, is to pick out the parts they like while ignoring the parts they don't like. If such a person is trying to decide whether to keep a position in, let's say, Tesla Motors, then the person will read everything that confirms that it is a good idea to keep that position and hang out with people with the same ideas, while ignoring the signs that maybe Tesla Motors' stock is a bubble.
    You may at first argue that only amateurs pick out the parts they like while ignoring the parts they don't like. But in case you don't remember the 2008 stock market crash, The Signal and the Noise includes an entire chapter describing it. It turned out that those who worked in the rating agencies, whose job it was to measure risk in financial markets, also picked out the parts they liked, while ignoring the signs that there was a housing bubble. For example, the phrase "housing bubble" appeared in just eight news accounts in 2001, but jumped to 3,447 references by 2005. And yet, the rating agencies say that they missed it.


    Another example is the Japanese earthquake and the following tsunami in 2011. The book includes an entire chapter on predicting earthquakes. It turns out that it is impossible to predict when an earthquake will happen. What you can predict is that an earthquake will happen somewhere, and roughly what magnitude it might have. The Fukushima nuclear reactor had been designed to handle a magnitude 8.6 earthquake, in part because the seismologists concluded that anything larger was impossible. Then came the 9.1 earthquake.
    The Credit Crisis of 2008 and the 2011 Japanese earthquake are not the only examples in the book:
    It didn't matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.  
    The reason why we humans are bad at making predictions is because we are humans. A newborn baby can recognize the basic pattern of a face because evolution has taught it how. The problem is that these evolutionary instincts sometimes lead us to see patterns where there are none. We are constantly finding patterns in random noise.

    So how can you improve your predictions?
    Nate Silver argues that we can never make perfectly objective predictions. They will always be tainted by our subjective point of view. But we can at least try to improve the way we make predictions. This is how you can do it:
    • Don't always listen to experts. You can listen to some experts, but make sure the expert can really predict what he or she is trying to predict. The octopus who predicted the World Cup was not an expert, and no one can predict an earthquake. What you can predict is the weather, but the public does not always trust weather forecasts. This can sometimes be dangerous. Several people died during Hurricane Katrina because they didn't trust the weather forecasters who said a hurricane was on its way. Another finding from the book is that weather forecasters on television tend to overestimate the probability of rain, because people will be upset if they predict sun and then it rains - even when the forecast from the computer predicts sunny weather.
    • Incorporate ideas from different disciplines and regardless of their origin on the political spectrum.
    • Find a new approach, or pursue multiple approaches at the same time, if you aren't sure the original one is working. Making a lot of predictions is also the only way to get better at it.
    • Be willing to acknowledge mistakes in your predictions and accept the blame for them. Good predictions should always change when you find more information. But wild gyrations in your prediction from day to day are a bad sign: you probably have a bad model, or whatever you are predicting isn't predictable.
    • See the universe as complicated, perhaps to the point of many fundamental problems being inherently unpredictable. If you make a prediction and it goes badly, you can never really be certain whether it was your fault or not, whether your model is flawed, or if you were just unlucky. 
    • Try to express your prediction as a probability by using Bayes's theorem. Weather forecasters always use a probability to determine if it might rain the next week - "With a probability of 60 percent it will rain on Monday next week" - but they will not tell you that on television. The reason is that even though we have super-fast computers, it is still impossible to find out the exact answer, as explained in a chapter in the book. If you publish your findings, make sure to include this probability, because people have died when they have misinterpreted a prediction. A weather station predicted that a river would rise by x ± y meters. Those who used the prediction thought the river could rise by at most x meters, and it turned out the river rose by x + y meters, flooding the area.
    • Rely more on observation than theory. All models are wrong because all models are simplifications of the universe. One bad simplification is overfitting your data, which is the act of mistaking noise for signal. But some models are useful as long as you test them in the real world rather than in the comfort of a statistical model. The goal of the predictive model is to capture as much signal as possible and as little noise as possible.  
    • Use the aggregate prediction. Quite a lot of evidence suggests that the aggregate prediction is often 15 to 20 percent more accurate than the individual prediction made by one person. But remember that this is not always true. An individual prediction can be better and the aggregate prediction might be bad because you can't predict whatever you are trying to predict. 
    • Combine computer predictions with your own intelligence. A visual inspection of a graphic showing the interaction between two variables is often a quicker and more reliable way to detect outliers in your data than a statistical test.
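    The Bayes's theorem advice above can be made concrete with a small sketch (the probabilities are illustrative assumptions I made up, not numbers from the book):

```python
# Bayes's theorem as a prediction update: start with a prior probability,
# then revise it when new evidence arrives.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes's theorem."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Prior belief that it will rain tomorrow: 30 percent.
# Suppose the forecast model flags rain; assume it flags rain 80 percent
# of the time when rain actually comes, and 20 percent when it doesn't.
posterior = bayes_update(0.30, 0.80, 0.20)
print(round(posterior, 3))  # the belief rises above the 30 percent prior
```

    The point is that the forecast doesn't replace your prior belief; it updates it, and the output is itself a probability rather than a yes-or-no answer.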

    Does this sound reasonable? So why are we seeing so many experts who are not really experts? According to the book, the more interviews an expert had done with the press, the worse his or her predictions tended to be. The reason is that the experts who are really experts, and are aware of the fact that they can't predict everything, tend to be boring on television. It is much more entertaining to invite someone who says that "the stock market will increase 40 percent this year" than someone who says "I don't know, because it is impossible to predict the stock market."
    So we all should learn how to make better predictions and learn which predictions we should trust. If we can, we might avoid another Credit Crisis, another 9/11, another Pearl Harbor, another Fukushima, and unnecessary deaths from another Hurricane Katrina.

    June 5, 2015

    Video: Marketing for Indies - PR, Social Media, and Game Trailers

    0 comments
    I found a video on YouTube called "Marketing for Indies - PR, Social Media, and Game Trailers." It is rather long, but very interesting.


    Key points
    • He sent out around 1,500 requests for people to cover the game Albino Lullaby on YouTube (and other services like Twitch), in articles (including bloggers), or through podcasts. In general, it was the smaller YouTube accounts that responded to the requests. He didn't get any response at all when sending requests for people to write articles about the game. At one point, he even gave up trying. But he continued to send requests to around 4 big (popular) outlets each week, and in the end one big outlet wrote an article about the game. Then other big outlets followed, because they saw that article and were now more interested in writing articles than before. But he argued that you will need both popular accounts and less popular accounts, because the popular accounts will Google the game and find articles and videos by the less popular accounts. If they hadn't found those articles, they would have ignored the game.
    • You should know what the rules are, but also be ready to break them. 
    • Getting noticed is really hard.
    • You will need a press kit. It should include:
      • Description: Make sure you can explain your game in 1 sentence, 1 paragraph, and 1 article (each should describe the entire game). Make sure you test it on real people and notice how they react
      • Press Releases: Anything that is significant can become a press release. Most people who write articles will copy-and-paste the press release, so write a good press release, but most will not care about press releases - they want to play the game itself
      • Trailer
      • Screenshots
      • Demo
      • Links 
    • Use a spreadsheet to keep track of the requests.
    • Interact with popular accounts on Twitter, so the popular accounts will recognize you when you reach out to them. But don't spam, because he was kicked out from Reddit for spamming. 
    • Above everything, you have to make an amazing game.
    • Be open with what you do, have a blog and stream the development process (some stream their entire day). People love to read behind-the-scenes and stories about the little guy vs the evil big company.
    • Ask yourself: What can I do to make it easier for someone to write an article about my game?
    • Marketing of the game begins before the development begins. Start a Twitter account and a blog today and start getting recognized. 
    • When on Twitter, use the hashtags #indiegamedev, #videogame and use the website RiteTag to find other hashtags that you might use. 
    • Be 100 percent data driven - opinions don't matter!
    • 99 percent of the players will not play your game, but they will watch your trailer, so make sure the quality of the trailer is 100 percent.

    Why you should be pronoid and not paranoid

    0 comments
    This is an excerpt from the book The Sell, written by Fredrik Eklund, who is a top New York City real estate broker.
    I'm going to teach you a word: pronoia. It's the opposite of paranoia. Paranoia is when you think the world is against you in some shape or form. Pronoia is the happy opposite: having the sense that there is a conspiracy that exists to help you. I just decided that's how it is, because I said so. I run my life on pronoia, and I want you to start, too. Right now. Did you know there's actually a great conspiracy that exists to help you? It's called the universe. Step a little closer. Let me whisper it in your ear. I'm telling you that the world is set up to secretly benefit you!
    Tell yourself that the person in front of you in the express lane, who is suddenly backing up the line with her credit card that won't work, is giving you a minute to flip through a tabloid and get a laugh at some of the preposterous stories and pictures. See how pronoia can make that frustrating moment a gift?

    May 31, 2015

    Book Review: Predictive Analytics

    0 comments
    According to Wikipedia, predictive analytics is an area of data mining that deals with extracting information from data and using it to predict trends and behavior patterns. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future.
    If you have ever bought a book on Amazon, at the bottom of the page you will see the small text "Customers Who Bought This Item Also Bought..." If you, for example, buy the book Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel, you will see that Amazon recommends the books:
    • Data science for business
    • Competing on analytics
    • Predictive analytics for dummies
    This is predictive analytics at work: Amazon's engineers use algorithms to try to determine which books you might be interested in reading. And it really works. This is an excerpt from the book The Everything Store about what happened when Amazon.com installed a predictive analytics system:
    Eric Benson took about two weeks to construct a preliminary version that grouped together customers who had similar purchasing histories and then found books that appealed to the people in each group. That feature, called Similarities, immediately yielded a noticeable uptick in sales and allowed Amazon to point customers toward books that they might not otherwise have found. Greg Linden, an engineer who worked on the project, recalls [Jeff] Bezos coming into his office, getting down on his hands and knees, and joking, "I'm not worthy."
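    The Similarities idea described in the excerpt can be sketched as simple co-occurrence counting (a toy illustration of the concept, not Amazon's actual algorithm; the purchase histories are made up):

```python
# A toy item-to-item sketch of "Customers Who Bought This Item Also
# Bought": count how often two books appear in the same purchase
# history, then recommend the most frequent co-purchases.
from collections import Counter
from itertools import combinations

histories = [
    {"Predictive Analytics", "Data Science for Business"},
    {"Predictive Analytics", "Competing on Analytics"},
    {"Predictive Analytics", "Data Science for Business",
     "Competing on Analytics"},
    {"The Everything Store"},
]

co_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1  # count the pair in both directions
        co_counts[(b, a)] += 1

def also_bought(book, n=2):
    pairs = [(other, c) for (b, other), c in co_counts.items() if b == book]
    return [other for other, _ in sorted(pairs, key=lambda p: -p[1])[:n]]

print(also_bought("Predictive Analytics"))
```

    Real systems weight these counts (for example by how popular each book is overall), but the core idea is the same grouping by shared purchase history that the excerpt describes.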
    I myself have used predictive analytics a few times before. When I participated in a Kaggle competition to predict whether a sound file included the sound made by a whale, I used so-called random forests to make that prediction. At the bottom of each article in this blog, I've also used predictive analytics - in a similar way as Amazon recommends books - to predict related blog posts.
    To learn more about predictive analytics I decided to read the book Predictive Analytics. This book will tell you why you need predictive analytics and what you can do with it, not how. So you will not find a single mathematical equation in the book, but the author will describe some basic algorithms, such as decision trees.
    The book is filled with examples from the author's own work within the field and what other people have predicted. One chapter is about the machine that learned how to predict Jeopardy answers. Other chapters include examples of how you can predict which employees will quit their job, where a crime might happen, and how Barack Obama used predictive analytics to win an election.
    If you, like me, have been involved in the field, you will be familiar with most of the examples. I've heard before about the Jeopardy machine and the large US retail chain that messed up by sending discount offers to a teen who they had predicted was pregnant. Her father thought it was an outrageous accusation, but then it turned out she really was pregnant - she just hadn't told her father about it. But if you read through the book and recognize everything, then it will confirm that you know what you should know within the field, and you can move on to applying the algorithms. And if you learn something new, then that's just great.

    May 25, 2015

    How to optimize Unity and other tips and tricks as well as best practices

    1 comment
    This was a link roundup with articles describing how to optimize Unity as well as other tips and tricks and best practices. It has moved to here: Learn how to optimize your Unity project.

    May 24, 2015

    How to create water wakes in Unity?

    0 comments
    The question is: How do you create water wakes in Unity? If you have ever seen a boat on a lake, you notice that it will leave small waves behind it while it is floating forward. But how do you create those waves, and how do you combine them with a moving sea in a computer environment like Unity? I've earlier made a boat that will float in Unity with realistic buoyancy, and now I felt I wanted to learn how to add water wakes. No boat is complete without them.
    The answer is to use an algorithm called iWave. There are other alternatives, but I believe iWave is the most popular algorithm. It was first published in 2008 in a report called Simulation of Interactive Surface Waves, written by Jerry Tessendorf, who is an expert in the field. If you've ever seen a computer-animated sea in a movie, like the sea in Titanic, the algorithm behind the realistic moving sea was written by Jerry Tessendorf.
    The algorithm itself is not that easy to understand, but the simplified version is actually very easy to implement - and it is super fast. The only expert tip I have is to change the parameter alpha if you encounter gigantic waves. The algorithm itself is not that stable, so alpha is like a damping parameter. 
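    To give a flavor of what a damped height-field simulation looks like, here is a heavily simplified 1D Python sketch (this is NOT the actual iWave convolution, and it's not my Unity code; it only illustrates how a damping parameter like alpha keeps the waves from growing into "gigantic waves" - all values are illustrative):

```python
# A 1D damped height-field ripple. Each point is pulled toward its
# neighbors (a discrete wave equation), and the velocity is damped by
# alpha each step so the ripples die out instead of blowing up.

def step(height, velocity, dt=0.1, alpha=0.3, c=1.0):
    """Advance the height field one time step."""
    n = len(height)
    new_h, new_v = height[:], velocity[:]
    for i in range(1, n - 1):
        # discrete laplacian: the pull toward the neighbors' average
        force = c * (height[i-1] + height[i+1] - 2 * height[i])
        new_v[i] = (velocity[i] + dt * force) * (1 - alpha * dt)  # damping
        new_h[i] = height[i] + dt * new_v[i]
    return new_h, new_v

h = [0.0] * 20
v = [0.0] * 20
h[10] = 1.0  # a disturbance, e.g. a cube pushed into the surface
for _ in range(200):
    h, v = step(h, v)
print(round(max(abs(x) for x in h), 4))  # the ripple has decayed toward flat
```

    Try setting alpha to 0 and the ripples bounce between the boundaries forever; in a less stable scheme they can even grow, which is exactly the failure mode the damping parameter guards against.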
    My final results look like this:



    You can interact with the surface with your mouse, and the small cube floats with my old buoyancy algorithm. The difference now is that the cube leaves ripples as it bounces up and down. 
    Does it look interesting? Don't worry, I will soon write a tutorial on how to do it in Unity. It will be published here: Tutorial on how to make a boat in Unity.  

    May 18, 2015

    Book review: The Quest for Artificial Intelligence

    0 comments

    This weekend I watched two movies about smart robots. The first, Chappie, tells the story of a robot named Chappie, who is the first robot with the ability to think and feel for himself. Chappie is therefore much smarter than any human. The second movie, Ex Machina, is also about a smart robot with the same abilities as Chappie, although the robot in Ex Machina looks more like a real human.
    The question is: how realistic are those movies? Will we soon see robots that are as smart as Chappie? To answer that question, we have to look at the history of artificial intelligence, or AI. That's why I read the book The Quest for Artificial Intelligence by Nils Nilsson.


    The Quest for Artificial Intelligence was published in 2010, so it covers most of the history of artificial intelligence. But AI is a field that is moving fast these days, so some of the latest achievements are not included, such as the algorithm that learned how to play the game Breakout.
    Nils Nilsson knows what he is talking about. Among other projects, he co-developed the famous A* search algorithm. If you have ever played a computer game where characters are controlled by the computer, the game is probably using A* to help the characters find their way around the map.
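    To give a feel for what A* actually does when a game character finds its way around a map, here is a minimal sketch of the algorithm on a 4-connected grid with a Manhattan-distance heuristic. The grid layout, function names, and unit step costs are my own illustrative choices, not from the book.

```python
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 4-connected grid.

    grid: 2D list where 0 = walkable and 1 = wall.
    Returns the path as a list of (row, col) cells, or None.
    """
    def h(cell):
        # Manhattan distance: admissible on a grid with unit moves,
        # so A* is guaranteed to return a shortest path.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), start)]   # priority queue ordered by f = g + h
    came_from = {}
    g = {start: 0}                    # cost of the best known path so far

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            # Walk backwards through came_from to rebuild the path.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = current
                    g[(nr, nc)] = tentative
                    heapq.heappush(open_heap,
                                   (tentative + h((nr, nc)), (nr, nc)))
    return None  # no route exists

# Usage: route a character around a wall with a single opening.
level = [[0, 0, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 0, 0]]
path = a_star(level, (0, 0), (2, 0))
```

    The heuristic is what separates A* from plain Dijkstra search: it steers the exploration toward the goal, so far fewer cells are expanded on a big game map.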
    The early history of AI begins thousands of years ago, when the ancient Greeks dreamed about self-propelled chairs. It continues with Leonardo da Vinci, who in 1495 sketched designs for a humanoid robot in the form of a medieval knight, and ends with a mechanical duck.
    But the real history of artificial intelligence begins after the Second World War, with a series of meetings. At these meetings, researchers described early attempts to highlight features in images and to program a computer to play chess. The first meeting was called "Session on Learning Machines" - the term artificial intelligence had not yet been coined. When someone finally suggested the name, not everyone was convinced, but most researchers eventually began to use it. 
    "So cherish the name artificial intelligence. It is a good name. Like all names of scientific fields, it will grow to become exactly what its field comes to mean."
    What happened after these early meetings was that the quest for artificial intelligence took off as computers improved. But everything didn't go smoothly. Nils Nilsson calls the downturns in the quest "AI winters." Governments around the world sponsored researchers to develop AI algorithms (generally for military purposes), but since AI is a difficult topic, these algorithms didn't always work.
    When they didn't work, the governments decreased the funding, and the researchers had to endure an AI winter with little or no money. Then computers and algorithms improved, the governments were once again excited, the algorithms again didn't work as promised, and another AI winter followed.
    According to Nils Nilsson, these AI winters made researchers cautious. Naysayers around them could give comments like:
    "Most people working on speech recognition were acting like mad scientists and untrustworthy engineers." 
    So the researchers decided to develop simple algorithms that would actually work as promised, and to ignore the more complicated algorithms that didn't. For example, the transcription of spoken sentences to their textual equivalents is now largely a solved problem. But that is not true intelligence. Computers still can't understand natural language speech (or text) well enough for someone to have a dialog with them, like in the movie Her. The latter is generally called strong AI, while the former is called weak AI. So most researchers have throughout history focused on weak AI, to avoid losing respectability. They were saying:
    "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability."
    So if you are wondering why artificial intelligence is not as intelligent as in the movies Chappie, Ex Machina, and Her, the answer is that most researchers have focused on weak AI. The emphasis has been on using AI to help humans rather than to replace them. Yes, a computer can beat a human in a game of chess, but that computer is not intelligent. Deep Blue is considered the best chess player in the world, but Deep Blue doesn't know that it is playing chess. 
    "Does Deep Blue use artificial intelligence? The short answer is no. Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. ...Deep Blue relies more on computational power and a simpler search and evaluation function."
    One idea mentioned in the book is to hold computer chess tournaments that admit only programs with severe limits on computation, which would concentrate attention on scientific advances rather than raw computing power. 
    The last chapter in the book is called "The Quest Continues," so artificial intelligence is still far away from being a solved problem. We may have algorithms that know how to drive a car, how to paint a painting indistinguishable from true art, and how to compose music. But we are still far away from robots like Chappie.
    This short text is far from a summary of The Quest for Artificial Intelligence - the 700 pages are filled with facts and anecdotes. And don't worry: even though you need math to develop AI algorithms, the book includes only some math to explain a few of them. It is more a history book than a math book. So if you are interested in the history of AI, you should read it.