December 28, 2015

Nikola Tesla on Artificial Intelligence


Nikola Tesla (1856 - 1943) was an inventor born in what is now Croatia, and is best known for his contributions to the design of the modern alternating current (AC) electricity supply system. His brilliance was, in a way, a problem: he could design his machines entirely in his head, so he didn't always build the final product. And if you don't build and sell the final product, you won't make much money. So Tesla came up with brilliant machines, but he still had to borrow money to survive. 

The electric car company Tesla Motors is also named after him. One of that company's founders, JB Straubel, was a fan of Nikola Tesla (though he wasn't the one who named the company), and his favorite book about the man is Wizard: The Life and Times of Nikola Tesla by Marc Seifer. I finished reading that book and learned, among other things, that one reason Tesla could accomplish so much was his ambition. In that respect he was very similar to Albert Einstein, who could also dedicate his entire life to whatever he was working on.  

When Tesla was a young student, his teachers were worried about him, because he had "a veritable mania for finishing whatever I began." He simply couldn't stop himself: if he began reading a book, he couldn't do anything else before he had finished it. So the teachers said that "the boy was at risk of injuring his health by obsessively long and intense hours of study." He could study for 20 hours a day. 

Tesla brought his ambitions with him when he moved to the US. There he could experiment day and night, holidays not excepted. He drove himself until he collapsed, working around the clock with few breaks. He preferred working through the night, when distractions could be minimized and concentration intensified. He argued that "every hour, every moment, that was not spent working on inventions was time away from his purpose." Even the intervals spent eating and sleeping delayed progress, so he reduced his sleeping to a minimum and his eating to the bare necessities. He claimed he could get by on 2 hours of sleep per day while "dozing" from time to time to recharge his batteries. He said:
I get all the nourishment I require from my laboratory. I know I am completely worn out, and yet I cannot stop my work. These experiments of mine are so important, so beautiful, so fascinating, that I can hardly tear myself away from them to eat, and when I try to sleep I think about them constantly.

One other thing I learned from the book was that Tesla was also interested in Artificial Intelligence. The young Tesla studied the theories of René Descartes, who envisioned animals, including man, as simply "automata incapable of actions other than those characteristic of a machine." Tesla said that he wanted "to devise mechanical means for doing away with needless tasks of physical labor so that humans could spend more time in creative endeavors." When Tesla was asked to predict the future, he said that robots and thinking machines would replace humans. His vision was that machines could liberate the worker and that fighting machines could replace soldiers on the battlefield.

Tesla had come to see the human body in its essence as a machine. He said that memory "is but increased responsiveness to repeated stimuli." It's unclear if he actually tried to build a machine similar to himself, but he was thinking about it:
Long ago I conceived the idea of constructing an automaton which would mechanically represent me, and which would respond, as I do myself, but of course, in a much more primitive manner to external influences. Such an automaton evidently had to have motive power, organs for locomotion, directive organs and one or more sensitive organs so adapted as to be excited by external stimuli. Whether the automaton be of flesh and bone, or of wood or steel, it mattered little, provided it could perform all the duties required of it like an intelligent being.

But what is known is that Tesla built a remote-controlled boat. To him, the boat was not simply a machine - it was "a new technological creation endowed with the ability to think." It was also, to him, the first non-biological life-form on the planet; he argued that life-forms need not be made of flesh and blood. He said:
Even matter called inorganic, believed to be dead, responds to irritants and gives unmistakable evidence of a living principle within.

December 22, 2015

The secrets behind Albert Einstein's success


I've read the book Einstein: His Life and Universe by Walter Isaacson, who is also the author of the most famous book about Steve Jobs. This summer I also read another book by Walter Isaacson called The Innovators, which is all about the history of the digital age, ranging from Charles Babbage's early mechanical computer to Google. I also tried to read his book about Benjamin Franklin, but gave up because it was filled with politics, which is not really my cup of tea.
The basic theme in The Innovators is that those who collaborated with other inventors succeeded, while those who didn't collaborate failed. The computer built by a team succeeded; the computer built by the lone inventor failed. But this is not always true, because Albert Einstein was in fact a loner who succeeded. 
Einstein didn't invent anything, but he developed the theories he's now famous for while working as a patent examiner. Why was he working as a patent examiner? Because no one wanted to hire him. Einstein was actually the only person graduating in his section who was not offered a job, and he often didn't even get a reply to his applications! But he responded with humor by saying "God created the donkey and gave him a thick skin." 
So while trying to remain optimistic, Einstein examined patents six days a week, and in the evenings he developed the theories that would eventually earn him the Nobel Prize in Physics in 1921. He was so efficient that he managed to do a full day's work in three hours, and for the remaining part of the day he would work on his own ideas. Doing what he enjoyed kept him sane while everyone around him advanced in their careers. "What kept him happy were the theoretical papers he was writing on his own." 
In hindsight, Einstein argued that it was actually good for him not to get an academic job, because he wasn't influenced by other people's thinking and could develop his own "crazy" ideas. "An academic career in which a person is forced to produce scientific writings in great amounts creates a danger of intellectual superficiality." So Einstein was a rebel, and there was a link between his creativity and his willingness to defy authority. He could throw out conventional thinking that had defined science for centuries.  
So how did he do it? 
  • Have imagination. Einstein argued that "Imagination is more important than knowledge." He also argued that "the value of a college education is not the learning of many facts but the training of the mind to think." Einstein never began with experimental data. Instead, he generally began with postulates he had abstracted from his understanding of the physical world. Einstein's ideas are abstract and are not always easy to grasp. But he believed that the end product of any theory must be conclusions that can be confirmed by experience and empirical tests. He is famous for ending his papers with calls for these types of suggested experiments.
  • Do something else when you are stuck. When he couldn't solve a problem he played the violin late at night. "Then, suddenly, in the middle of playing, he would announce excitedly, 'I've got it!'"
  • Work a lot. Einstein was ambitious. He and his wife had separate bedrooms so he could spend more time with his calculations. "For I shall never give up the state of living alone, which has manifested itself as an indescribable blessing." He worked so much that he didn't really enjoy food. When he invited visitors for lunch, he heated cans of beans. Then they ate the beans with spoons directly from the can. Einstein also used his work to escape the complexity of human emotions. When his wife was dying, he worked even more.  
  • Change your mind. Einstein wasn't mindlessly stubborn. When he realized an idea wouldn't work, he was willing to abandon it. Before Hitler, Einstein was a pacifist and thought the solution to war was not to rearm after the First World War. But after the Second World War, Einstein thought he had made a mistake by encouraging Germany's neighbors not to rearm. 
  • Be a star. The reason Einstein is now an icon and almost everyone can recognize him if they see a picture of him is because he could, and would, play the role. "Scientists who become icons must not only be geniuses but also performers, playing to the crowd and enjoying public acclaim." And Einstein performed. He gave interviews and knew exactly what made a good story, and he often made jokes during interviews.

    December 15, 2015

    Santa Claus Down - or how to make a game in 48 hours


    This weekend I participated in a competition called Ludum Dare where the idea is to make a game in 48 hours or 72 hours. The former is called "Compo" and is more hardcore because you have to create everything on your own down to the smallest texture and sound. The 72 hours version, which is called "Jam," is more relaxed: you can work in a team and you can use old assets. But I'm hardcore, so I created the game Santa Claus Down in 48 hours.

    The theme for this competition was either "Growing" or "Two button controls" - or both. It's usually just one theme, but this time the voting between the themes was tied. I chose growing as my theme, and the plan was to make a truck that grows with more trailers as you progress in the game, like the classic game Snake.

    The basic idea behind the game is that Santa Claus has crashed and you have to deliver the gifts. I originally planned to create a random town where you have to drive around to deliver all the gifts, but I ran out of time. Instead I ended up making an endless road system, where the road varies between highway, houses where you can deliver a gift, and normal road.

    This was my sixth Ludum Dare competition. I failed once, but I submitted games to the other five competitions. The good thing is that the users who vote also give you constructive criticism. The main criticism I got from the last competitions was that my games were creative but not fun to play. So this competition I decided to make a simpler game and spend about 50 percent of the time making it fun to play. That plan failed along with the random city idea, so I ended up spending maybe 30 percent of the time making the game more fun.

    To learn how to make a game more fun, I read the book The Art of Game Design. From that book I learned that (some) players like explosions, and most players want to play a game where the challenge is increasing. So I added explosions when you hit a car. It's not realistic, but people who played the game said it was fun to crash into cars:
    ...I felt like I was fighting the controls a bit, but it did make tail swiping cars more satisfying. As it was, tail swiping cars was really the high point of the game. I think this game could really work as a high paced drive around tailswiping cars and delivering presents to a destination kind of game.
    To make the game increasingly challenging I tweaked:
    • When new cars arrive on the roads
    • When you get a new trailer
    • When the Grinch arrives. I added the Grinch, who drives a green car and wants to crash into your truck. The Grinch only appears after you have delivered a few gifts
    • When ramps appear on the road. I added ramps that you have to either drive over or drive around
    • When a heart appears that gives back some of your health 
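The timing tweaks above boil down to unlocking events as the player makes progress. Here is a minimal, hypothetical sketch of that idea in Python - the event names match the list above, but all the threshold numbers are made up, not the game's actual values:

```python
def active_events(gifts_delivered):
    """Return which challenge events are unlocked at a given progress level.

    Hypothetical thresholds - the real game tunes these by playtesting.
    """
    thresholds = {
        "extra_cars": 1,   # more traffic almost immediately
        "new_trailer": 2,  # the truck grows like in Snake
        "ramps": 4,        # drive over or around them
        "grinch": 5,       # the Grinch only appears after a few deliveries
        "hearts": 6,       # health pickups once the game gets hard
    }
    return {event for event, at in thresholds.items() if gifts_delivered >= at}
```

The point of a single table like this is that all the difficulty pacing lives in one place, which makes it easy to tweak between playtests.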

    This competition I also added sounds to the game. As I participated in the Compo, I had to make all the sounds myself. I didn't have time to go out and record truck sounds, so I used a tool called Bfxr, which generates random sounds. Finding a good truck sound wasn't easy, so I ended up with a sound similar to that of a small boat. One of the players said it all:
    ...putt putt putt putt putt putt putt putt putt putt putt putt.. Don't mind me, just doin' donuts in ma truck!

    Looks interesting? You can play it here.

    December 9, 2015

    Books I've read in 2015

    2015 is almost over and it's time to summarize which books I've read this year. This year I wanted to learn more about Artificial Intelligence, so the list includes several books with that theme. I'm keeping track of the books through my Goodreads account, so don't feel sorry for me that it took a long time to complete the list, because it didn't!

    Artificial Intelligence:
    1. Neuroscience for Dummies
    2. Ten Years To the Singularity If We Really, Really Try... and other Essays on AGI and its Implications
    3. Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies
    4. The Computer and the Brain
    5. Alan Turing: The Enigma
    6. On Intelligence
    7. Consciousness: Confessions of a Romantic Reductionist
    8. How to Create a Mind: The Secret of Human Thought Revealed
    9. Our Final Invention: Artificial Intelligence and the End of the Human Era
    10. Vehicles: Experiments in Synthetic Psychology
    11. I, Robot
    12. Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
    13. Prey
    14. The Quest for Artificial Intelligence: A History of Ideas and Achievements
    15. Artificial Intelligence for Games

    1. Stockholms undergång
    2. Game Programming Patterns
    3. The Man in the High Castle
    4. The Martian
    5. Python for Data Analysis
    6. SuperBetter: A Revolutionary Approach to Getting Stronger, Happier, Braver and More Resilient--Powered by the Science of Games
    7. Bombmakaren och hans kvinna
    8. The Animator's Survival Kit: A Manual of Methods, Principles, and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators
    9. The Sell: The Secrets of Selling Anything to Anyone
    10. The Innovators: How a Group of Hackers, Geniuses and Geeks Created the Digital Revolution
    11. So, Anyway...
    12. The Signal and the Noise: Why So Many Predictions Fail - But Some Don't
    13. Thunder Run: The Armored Strike to Capture Baghdad
    14. Fundamentals of Computer Programming with C#
    15. SAS Survival Guide: For any climate, for any situation
    16. Almedalen har fallit
    17. The Unthinkable: Who Survives When Disaster Strikes - and Why
    18. Einstein: His Life and Universe
    19. Wizard - The Life and Times of Nikola Tesla

    If I had to recommend one book, it would be The Unthinkable: Who Survives When Disaster Strikes - and Why. As the title reveals, it is all about disasters, the psychology behind them, and how to increase your chance of surviving one. 

    The first story is about an unfortunate woman who almost died in the first World Trade Center attack in New York, and a few years later she was in one of the towers when the second attack happened. Even though she was responsible for the evacuation of the floor she was working on, she blacked out and forgot all about it until a few weeks later, when she remembered: "Hey, maybe I was the one responsible for getting everyone out of the building." 
    Another story is about the passenger ferry Estonia, which sank during a heavy storm. One of the survivors recalled that as he escaped, he walked past several passengers who just sat in chairs in a bar very close to the lifeboats. They could have survived, but they just sat there doing nothing at all.
    So who survives a disaster? Part of the answer is your life history: if you have lived a rough life, your survival chances increase. Another part is that you have to prepare, so you don't black out, and be aware of the "stupid" mistakes. One stupid mistake many people make in a disaster is looking at what other people are doing. So instead of evacuating the burning World Trade Center, many people just stood there watching other people, who were in turn watching other people. So they didn't escape when they could! Those who had practiced evacuating the buildings escaped as soon as possible and survived. 

    November 9, 2015

    Explaining the Hybrid A Star pathfinding algorithm for self-driving cars

    Let's say you are standing somewhere in a room and would like to find the shortest path to a goal. You can see a few obstacles, such as a table, that you would like to avoid. The easiest way to solve the problem (if you are a computer) is to divide the room into squares and then use the common A* (A Star) search algorithm to find the shortest path. But what if you are a car and can't turn on the spot like a human? Then you have a problem! Well, at least until you learn the Hybrid A Star search algorithm. With that algorithm you will be able to find a drivable path to the goal.

    The reason I wanted to learn the Hybrid A* algorithm was that I took a class in self-driving cars, where the teacher showed a really cool video with a car that drove around in a maze until it found the goal:

    The problem was that the teacher didn't explain the Hybrid A* algorithm - he only explained the normal A* algorithm, so I had to figure it out on my own. A few days later I had found the pieces I needed and could build it. Because no one else had really explained the algorithm, I decided to write this short summary.

    Before you begin, you really have to learn the normal A* algorithm, so if you don't know it, I suggest that you take the same class as I did: Artificial Intelligence for Robotics - Programming a Robotic Car. It's free, so don't worry. You should also read the sources I used to figure out the algorithm, mainly the following reports:

    The reports above explain the more complicated version of the algorithm, which is a little faster and may produce a better result; I will explain the very basic idea so you can get started. And once you understand the basic idea behind the Hybrid A* search algorithm, you can always improve it.

    First of all, you need to be able to build a simulated vehicle that you can drive with the help of math. These types of vehicles are called skeleton vehicles, and you can learn how to build one by watching this video from the class I took (I had to replace the sin with cos and vice versa when I converted the math from Python to Unity, so you might have to experiment a little):
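As a rough sketch, the skeleton-car math from that class looks like the standard kinematic bicycle model below. This is a hedged Python version, not the exact code from the course: the wheel base value is an assumption, and as noted above the axis convention (sin vs cos) depends on your engine, so you may have to swap them.

```python
import math

def simulate_skeleton_car(x, y, heading, distance, steering_angle, wheel_base=3.0):
    """One step of the kinematic bicycle ("skeleton car") model.

    heading and steering_angle are in radians; a negative distance reverses.
    wheel_base=3.0 is an assumed value, not one from the course.
    """
    turn_angle = (distance / wheel_base) * math.tan(steering_angle)
    if abs(turn_angle) < 1e-6:
        # Driving (almost) straight
        new_x = x + distance * math.cos(heading)
        new_y = y + distance * math.sin(heading)
        new_heading = heading
    else:
        # Driving along a circular arc around the turning center
        turn_radius = distance / turn_angle
        cx = x - math.sin(heading) * turn_radius
        cy = y + math.cos(heading) * turn_radius
        new_heading = (heading + turn_angle) % (2.0 * math.pi)
        new_x = cx + math.sin(new_heading) * turn_radius
        new_y = cy - math.cos(new_heading) * turn_radius
    return new_x, new_y, new_heading
```

Calling it with a steering angle of 0 drives straight ahead; a positive angle curves the car to the left in this convention.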

    The next step is to watch this video where the teacher, Sebastian Thrun, explains the basic idea behind the Hybrid A Star algorithm. The video is actually from the basic course in Artificial Intelligence, which is funny because he didn't explain it in the more advanced course.

    The problem with the video is that Sebastian Thrun only draws one line from the first square, even though he should have drawn several lines (one for each steering angle). According to the reports above, the resolution is 5 degrees, so I believe you should simulate a skeleton car with steering angles going from the lowest possible steering angle to the largest possible one (so -30, -25, -20, ...). When I did that, I thought it took too long, so I'm just simulating three angles: [-35, 0, 35] (but converted to radians). 

    The driving distance (d) is the same as the width of one square. But I realized that the driving distance should be a little longer, such as 1.05 m if your square is 1 m, if you want a better result. You have to experiment a little.

    When you have expanded the first node by simulating all three angles, you should close the squares they have arrived in, so another path from another node can't arrive in the same square (as he explains in the video). But it is important that you close the squares after you have simulated all possible angles from one node, not after just one simulated car arrives in a square, because we want to save all possible new nodes for later, so we can find the one with the lowest cost.

    As you add these new nodes, you also have to calculate the cost and the heuristic. The self-driving car Junior used a more complicated heuristic, but I realized that you can use the traditional Euclidean distance as the heuristic before you begin calculating something more advanced.

    You also need a few extra costs. According to the reports, you should add an extra cost to nodes that change steering angle compared with the previous node, so you have to save the steering angle in each node. You should also add an extra cost if the node is close to an obstacle, or if you are reversing. By the way, reversing is easy if you are using the same skeleton car as explained above: just use a negative distance to reverse. 
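The cost terms above can be sketched like this in Python. This is a minimal illustration, not the implementation from the reports: the penalty weights are made-up placeholder values you would have to tune for your own map.

```python
import math

# Made-up penalty weights - the reports tune these carefully; experiment.
STEER_CHANGE_COST = 0.5
REVERSE_COST = 2.0
OBSTACLE_COST = 1.0

def node_cost(parent_cost, distance, steering_angle, parent_steering_angle,
              near_obstacle=False):
    """Cost-so-far g for a child node, with the extra penalties described above."""
    cost = parent_cost + abs(distance)  # a negative distance means reversing
    if steering_angle != parent_steering_angle:
        cost += STEER_CHANGE_COST       # penalize changing the steering angle
    if distance < 0.0:
        cost += REVERSE_COST            # penalize driving in reverse
    if near_obstacle:
        cost += OBSTACLE_COST           # penalize hugging obstacles
    return cost

def heuristic(x, y, goal_x, goal_y):
    """Plain Euclidean distance - good enough to get started."""
    return math.hypot(goal_x - x, goal_y - y)
```

The node's priority in the open list is then the usual f = g + h, with g from node_cost and h from heuristic.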

    That's it! If you add the ideas from above you will be able to make something that looks like this:

    Notice from the image that the car prefers to drive forward (because of the extra cost of reversing). It also prefers to use the same steering angle as in the previous node because of the extra cost of changing steering angle.

    Update! I finished implementing the Hybrid A Star algorithm. The car can now find a path from the bottom of the map to the top of the map in less than 1 second.

    The biggest problem I had from a performance point of view was the search among the open nodes for the node with the lowest cost f. After a few attempts with different approaches, I realized that a heap was the fastest data structure.
    I also realized that using two lists (or arrays) for the closed nodes produced the best result: one for nodes where the car is driving forward and one for nodes where the car is reversing. This way the car can reverse into a square where it was previously driving forward. It results in more expanded nodes and thus slower performance, but I think it produces a better result.

    This is what the basic algorithm looks like:
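As a rough, self-contained Python sketch of the ideas in this post (my real version is in Unity/C#, and all the weights and sizes here are illustrative placeholders, not tuned values):

```python
import heapq
import itertools
import math

WHEEL_BASE = 3.0       # assumed value
DRIVE_DISTANCE = 1.05  # slightly longer than the 1 m cell size
STEERING_ANGLES = [math.radians(a) for a in (-35.0, 0.0, 35.0)]
REVERSE_COST = 2.0     # made-up penalty for reversing

def hybrid_a_star(start, goal, is_free, max_iterations=20000):
    """start and goal are (x, y, heading); is_free(cell) says if a grid cell is drivable."""
    gx, gy, _ = goal
    tie = itertools.count()  # tie-breaker so the heap never compares states
    open_heap = [(math.hypot(gx - start[0], gy - start[1]), next(tie), 0.0, start)]
    # One closed set per driving direction, so the car may reverse into a
    # cell it previously drove through going forward
    closed = {1.0: set(), -1.0: set()}
    while open_heap and max_iterations > 0:
        max_iterations -= 1
        _, _, g, (x, y, heading) = heapq.heappop(open_heap)
        if math.hypot(gx - x, gy - y) < 1.0:
            return g  # close enough to the goal; return the path cost so far
        for direction in (1.0, -1.0):
            distance = direction * DRIVE_DISTANCE
            children = []
            for steer in STEERING_ANGLES:
                # Simplified motion step: rotate, then drive straight
                turn = (distance / WHEEL_BASE) * math.tan(steer)
                nh = (heading + turn) % (2.0 * math.pi)
                nx = x + distance * math.cos(nh)
                ny = y + distance * math.sin(nh)
                cell = (int(math.floor(nx)), int(math.floor(ny)))
                if cell in closed[direction] or not is_free(cell):
                    continue
                ng = g + abs(distance) + (REVERSE_COST if direction < 0.0 else 0.0)
                nf = ng + math.hypot(gx - nx, gy - ny)  # Euclidean heuristic
                children.append((nf, next(tie), ng, (nx, ny, nh)))
            # Close the cells only after ALL angles for this node were simulated
            for _, _, _, (cx, cy, _) in children:
                closed[direction].add((int(math.floor(cx)), int(math.floor(cy))))
            for child in children:
                heapq.heappush(open_heap, child)
    return None  # no drivable path found within the iteration budget
```

This sketch only returns the cost; a real version would also store a parent pointer and the steering angle in each node so the path can be reconstructed and the steering-change penalty applied.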

    Test it and download the source code
    If you want to test the latest version of the algorithm and download the source code, you can find it here: Self-driving car

    October 13, 2015

    Forest Fire Simulator Update - improved trees and fire

    A few weeks ago I finished the first version of a forest fire simulator in Unity. I used the real physics equations, which worked perfectly fine, but what didn't work fine was the performance of the simulation. The physics equations were not the problem - the problems were the number of trees, and the fire and smoke. I even had to add a special button so the user could remove the smoke and flames, because everything was so slow. This is how it looked:

    You can see in the image above that the simulation is running at 12 frames per second, which is not that good! To improve that number, I first decided to improve the trees. The problem was that the trees were all individual objects, and to improve performance you have to combine them into fewer objects. But I also have to remove trees that have burned down and add darker trees. After a few experiments, I realized that the fastest way was to combine all trees of each tree type once at the beginning, and then cheat by moving trees that should not be seen to positions the user can't see. It looks like this behind the scenes:

    You can see that all black trees are hidden below the ground. When one of those trees is supposed to be visible, I just move its vertices above the ground. This is super fast, but it was tricky to figure out, and no one else had really discussed it when I searched Google for it. So I decided to make my own tutorial on how to do it. You can find it here: Dynamic Mesh Combining. It took a while to write, but I'm a big believer in the idea that you should share your knowledge, and if no one else has done so, then people will find you through Google and you might get links and sell more of your products! Moreover, if you do it well you will get comments like this on Reddit, which is good for the self-esteem:

    Anyway, the last problem was the fire and smoke. I used to have one fire and one smoke particle in each square, but when the entire forest is on fire, this is really slow. A better way is to have fewer but larger fire and smoke particles. So I wrote an algorithm that searches through the map and tries to build larger squares, up to 12 by 12 squares. The result is this:

    You can clearly see that the entire forest is on fire, but the simulation is running at a fantastic 30 frames per second thanks to the improved trees and fire. I also think the smoke looks more realistic, so that's a good side effect: faster and better. 
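The square-growing idea can be sketched like this in Python. This is a hypothetical reconstruction of the approach described above, not the actual Unity code: scan the grid of burning cells and greedily grow the largest possible square (up to 12 by 12) from each cell that isn't already covered.

```python
MAX_SIZE = 12  # largest merged square, matching the 12-by-12 limit above

def merge_fire_squares(burning):
    """burning is a set of (row, col) cells on fire.

    Returns a list of (row, col, size) squares covering all burning cells,
    so one big fire/smoke particle can replace many small ones.
    """
    used = set()
    squares = []
    for row, col in sorted(burning):
        if (row, col) in used:
            continue
        size = 1
        # Grow the square while every newly added edge cell is burning and unused
        while size < MAX_SIZE:
            edge = [(row + size, col + k) for k in range(size + 1)]
            edge += [(row + k, col + size) for k in range(size)]
            if all(c in burning and c not in used for c in edge):
                size += 1
            else:
                break
        for r in range(row, row + size):
            for c in range(col, col + size):
                used.add((r, c))
        squares.append((row, col, size))
    return squares
```

A fully burning 3-by-3 patch collapses into a single square, which is exactly the win: one particle system instead of nine.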

    If you want to test the forest fire simulator you can test it here: Forest Fire Simulator

    September 28, 2015

    Improving Unity's physics engine PhysX to achieve higher accuracy

    Unity is a popular game engine with a built-in physics engine called PhysX. Like all other physics engines, PhysX uses numerical integration techniques to simulate real-world physics. Force and movement calculations can get pretty complicated, so in most cases it is impossible to calculate the exact movements. This is why the physics engine uses numerical integration techniques to approximately integrate the equations of motion. 

    The problem with several of the available numerical integration techniques is that the result is not always 100 percent accurate, because the game also has to run fast on your computer. Most game engines prefer less accuracy in exchange for faster games, because the player will not notice the difference anyway. But what if you are going to make a game that requires more accurate movements, like one with a sniper rifle? 

    Low accuracy is a problem I ran into a few days ago when I simulated bullet trajectories. First I calculated the angle needed to hit the target, then I fired a bullet with Unity's physics engine, and then I noticed that the bullet didn't hit the target. At first I thought I had made an error in the calculation of the angle, but then I realized that the bullet missed because of limitations in the physics engine.

    So which integration method is PhysX using? The answer (according to my research on the Internet) is that no one really knows for sure. So let's run an experiment to find out! The first integration method I tried was Euler Forward, and you can see the result here:

    You can see that the trajectory line when using Euler Forward overshoots the target and does not follow the same path as the bullet, which is simulated by PhysX. So let's try Backward Euler:

    You can see that the trajectory line undershoots the target, but follows the same path as the bullet (compare with the image above, where the trajectory line uses Euler Forward). So there's a chance that PhysX uses Backward Euler for its physics. But we are still not hitting the target. So let's try Heun's Method:

    You can see that we now hit the target with the trajectory line. But to improve the actual bullet trajectories, you have to write your own physics for the bullets so that they also use Heun's Method instead of Backward Euler (or whatever method PhysX is actually using). If you would like to learn how, I've written a tutorial: How to make realistic bullets in Unity.
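To see the overshoot/undershoot behavior concretely, here is a minimal Python sketch of the three integrators applied to a bullet affected only by gravity (no drag). Note that for a constant acceleration, the "Backward Euler" update below coincides with semi-implicit (symplectic) Euler, which is the method PhysX is commonly believed to use - so this is an illustration of the pattern, not a claim about PhysX's internals.

```python
GRAVITY = -9.81  # m/s^2

def euler_forward(y, vy, dt):
    # Position uses the OLD velocity -> tends to overshoot (flies too high)
    return y + vy * dt, vy + GRAVITY * dt

def euler_backward(y, vy, dt):
    # Position uses the NEW velocity -> tends to undershoot
    vy_new = vy + GRAVITY * dt
    return y + vy_new * dt, vy_new

def heun(y, vy, dt):
    # Average of old and new velocity -> exact for constant acceleration
    vy_new = vy + GRAVITY * dt
    return y + 0.5 * (vy + vy_new) * dt, vy_new

def simulate(step, t_end=1.0, dt=0.02, y=0.0, vy=10.0):
    """Fire straight up at 10 m/s and integrate the height for one second."""
    for _ in range(round(t_end / dt)):
        y, vy = step(y, vy, dt)
    return y

# Exact height after 1 s: 10*1 + 0.5*(-9.81)*1^2 = 5.095 m
```

Running all three with the same time step shows Euler Forward above the exact height, Backward Euler below it, and Heun's Method right on it - the same ordering as the trajectory lines in the images above.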

    September 22, 2015

    Random Show Episode 29

    A new episode of the Random Show with Kevin Rose (founder of Digg) and Tim Ferriss (author of The 4-Hour Workweek) is out! This is episode 29.

    Lessons learned
    • Kevin Rose's new watch-blog company, Hodinkee, has a "small" but engaged audience: they had 1.3 million user sessions in a month from users who have opened the app more than 200 times. By the way, the name is not Swedish for small watch - at least I've never heard the word, and I've been speaking Swedish for more than thirty years. According to Google Translate, the word "hodinke" means "watch" in Czech/Slovak. 

    • Kevin Rose recommended the book The Okinawa Program, even though he also said some parts of the book were rubbish, and he also recommended eating Germinated brown rice with seaweeds, sesame, and eggs. 
    • It was difficult to hear, but I think Tim Ferriss recommended The Age of Miracles Animal Rescue if you want to adopt a dog.
    • Tim Ferriss recommended the podcast player app Overcast and playing Tetris. Why Tetris, you might ask? Because he interviewed Jane McGonigal, who argues that playing Tetris a short time after a traumatic event can minimize the risk of post-traumatic stress. He also recommended the book Anything You Want.

    Recommendations
    If you want to watch the rest of the episodes, you can find them here: The Random Show with Kevin Rose and Tim Ferriss.

    September 13, 2015

    Simulation of a forest fire in Unity

    This summer I decided to learn more about rockets and found an online course called Differential Equations in Action by Udacity. The first and second lessons in that course teach you how to bring the unfortunate Apollo 13 spacecraft back from space to Earth. You will also learn about ABS brakes and how many people have to be vaccinated to stop an outbreak of an epidemic. 
    But the sixth lesson is all about forest fires, and you will learn how to simulate a forest fire in Python. The differential equations include the change in temperature from:
    • Heat diffusion
    • Heat loss
    • Wind speed
    • Combustion of wood      
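As a minimal illustration of how such an update looks, here is an Euler forward step for the temperature grid with only the first two terms (heat diffusion and heat loss); wind and combustion are omitted, and the coefficients are made-up values, not the ones from the course.

```python
# Illustrative coefficients - NOT the values from the Udacity course
DIFFUSION = 0.1    # how fast heat spreads to neighbouring cells
HEAT_LOSS = 0.02   # how fast heat is lost to the surroundings
T_AMBIENT = 37.0   # ambient temperature in deg C, as in the simulation below

def step_temperature(grid, dt):
    """One Euler forward step: T_new = T + dt * dT/dt, on a 2D list of temperatures."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Discrete Laplacian: how much hotter the four neighbours are
            laplacian = (grid[r - 1][c] + grid[r + 1][c] +
                         grid[r][c - 1] + grid[r][c + 1] - 4.0 * grid[r][c])
            dT = DIFFUSION * laplacian - HEAT_LOSS * (grid[r][c] - T_AMBIENT)
            new[r][c] = grid[r][c] + dt * dT
    return new
```

Each call advances the whole grid by one time step: a hot cell cools down while its neighbours heat up, which is how the fire front spreads.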
    I thought the simulation of a forest fire was really interesting, but it was boring to simulate it in Python. Wouldn't it be more interesting to see the forest burn in real time? Yes it would! So I decided to make a forest fire in Unity.
    It was really easy to translate the code from Python to Unity and make a real-time simulation. The Python code used a method called Euler forward to solve the differential equations, and Euler forward works really well in Unity too. This is the result:

    The temperature at the start of the fire is about 700°C and the surrounding temperature is set to 37°C.

    After about 2 minutes the fire has spread towards the north-west, because the wind blows to the north-west. The core temperature is now about 2500°C and the fire covers an area of 50 × 50 meters.

    After 20 minutes the fire covers the entire area it can cover in the simulation. Notice that the forest that's not north-west of the fire has not ignited. The temperature at the edge is still between 100°C and 200°C, so you don't want to be there, but the wood has not ignited.

    After 1 hour, 20 percent of the wood has gone up in flames. The trees change shape after a certain amount of wood has burned up. The core temperature is now 4000°C. 

    Still smoking after 2 hours, but the core temperature has gone down to 2500°C, so the fire is dying.

    After 5 hours, 70 percent of the wood has gone up in flames. The core temperature has gone down to 500°C.

    The outer part of the forest fire that's not in the wind-direction has now stopped burning.

    After 6 hours and 12 minutes, the last part of the forest has finally stopped burning. 

    ...or if you are more interested in a video (not the same fire as in the images)

    Looks interesting? You can test it here: Forest Fire Simulator

    June 19, 2015

    How to tell stories with data and what's the future of journalism?

    Pulitzer-prize winning journalist and editor of the New York Times data journalism website The Upshot, David Leonhardt, shares the tricks of the master storyteller's trade. In conversation with Google News Lab data editor Simon Rogers, he shows how data is changing the world - and your part in the revolution.

    Key points
    • Journalism is not in decline. Journalism (at least American journalism) is better today than it has ever been - even compared with as little as 10 years ago. Yes, there are challenges, and the business model is changing. But journalism is still keeping people informed about the world and has not been replaced by click-bait articles. 
    • Why journalism is better today than 10-20 years ago:
      • Journalism is more accurate than it used to be (but not perfectly accurate). One reason is that it is easier to correct inaccurate information, like spelling errors, in digital articles than in printed ones. It is also easier for the audience to interact with articles and journalists when everything is digital, so the audience can help improve the articles. 
      • The tools and techniques for telling a story have improved. Today it is easy to create interactive visualizations, like maps that zoom in on an area and show the reader different information depending on where the reader lives. These techniques didn't exist 20 years ago.
      • Journalists are using better data than before. As long as the journalists are using the data in the correct way, the result is better than it used to be. 
      • The audience for ambitious journalism is larger than it was just a few years ago. People from across the globe can read the New York Times. 
    • Many of the most successful articles in the New York Times are not traditional articles with blocks of text - they are interactive visualizations, essays, Q&As, and videos. But they are not click-bait - they cover serious topics, and the people behind them have put a lot of effort into them. The smartest and clearest way to tell a story is no longer the traditional article.
    • The New York Times sometimes publishes two versions of an article: a traditional one with just text and a similar one with more visualizations. In one example, the version with more visualizations got 8 times the traffic of the text-only version.
    • Journalists are becoming more and more specialized within a certain area.
    • You can probably find big opportunities within local news, but only if you are using data. 

    June 18, 2015

    How to make better predictions and decisions


    I've read a book called The signal and the noise: Why so many predictions fail - but some don't by Nate Silver. The basic idea behind the book is that ever since Johannes Gutenberg invented the printing press, the information in the world has increased, making it more and more difficult to make good predictions because of the noise. Moreover, the Internet has increased the information overload, making it even harder to make good predictions. A lot of people are still making what they think are good predictions, even though they shouldn't make predictions at all (*cough* economists), because it is simply impossible to predict everything. 
    What most people are doing when trying to predict something from the information available, like a stock price, is to pick out the parts they like while ignoring the parts they don't like. If the same person is trying to predict whether he/she should keep a position in, let's say, Tesla Motors, then the person will read everything that confirms it is a good idea to keep that position and hang out with people with the same ideas, while ignoring the signs that the Tesla Motors stock might be a bubble. 
    You may at first argue that only amateurs pick out the parts they like while ignoring the parts they don't like. But if you don't remember the 2008 stock market crash, The Signal and the Noise includes an entire chapter describing it. It turned out that those who worked at the rating agencies, whose job it was to measure risk in financial markets, also picked out the parts they liked, while ignoring the signs that there was a housing bubble. For example, the phrase "housing bubble" appeared in just eight news accounts in 2001, but jumped to 3447 references by 2005. And yet, the rating agencies said that they missed it.

    Another example is the Japanese earthquake and the following tsunami in 2011. The book includes an entire chapter on predicting earthquakes. It turns out that it is impossible to predict when an earthquake will happen. What you can predict is that an earthquake will happen somewhere, and roughly what magnitude it might have. The Fukushima nuclear reactor had been designed to handle a magnitude 8.6 earthquake, in part because seismologists concluded that anything larger was impossible. Then came the 9.1 earthquake. 
    The Credit Crisis of 2008 and the 2011 Japanese earthquake are not the only examples in the book:
    It didn't matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.  
    The reason we humans are bad at making predictions is that we are humans. A newborn baby can recognize the basic pattern of a face because evolution has taught it how. The problem is that these evolutionary instincts sometimes lead us to see patterns where there are none. We are constantly finding patterns in random noise.

    So how can you improve your predictions?
    Nate Silver argues that we can never make perfectly objective predictions. They will always be tainted by our subjective point of view. But we can at least try to improve the way we make predictions. This is how you can do it:
    • Don't always listen to experts. You can listen to some experts, but make sure the expert can really predict what he or she is trying to predict. The octopus that predicted the World Cup is not an expert, and no one can predict an earthquake. What you can predict is the weather, but the public does not trust weather forecasts, which can sometimes be dangerous. Several people died during Hurricane Katrina because they didn't trust the weather forecasters who said a hurricane was on its way. Another finding from the book is that weather forecasters on television tend to overestimate the probability of rain, because people will be upset if they predict sun and it then rains, even when the computer forecast predicts sunny weather.  
    • Incorporate ideas from different disciplines, regardless of their origin on the political spectrum.
    • Find a new approach, or pursue multiple approaches at the same time, if you aren't sure the original one is working. Making a lot of predictions is also the only way to get better at it.
    • Be willing to acknowledge mistakes in your predictions and accept the blame for them. A good prediction should change when you find more information. But wild gyrations in your prediction from day to day are a bad sign - then you probably have a bad model, or whatever you are predicting isn't predictable. 
    • See the universe as complicated, perhaps to the point of many fundamental problems being inherently unpredictable. If you make a prediction and it goes badly, you can never really be certain whether it was your fault or not, whether your model is flawed, or if you were just unlucky. 
    • Try to express your prediction as a probability by using Bayes's theorem. Weather forecasters always use a probability when determining whether it might rain next week, "With a probability of 60 percent it will rain on Monday next week," but they will not tell you that on television. The reason is that even with super-fast computers it is still impossible to find the exact answer, as explained in one of the book's chapters. If you publish your findings, make sure to include this probability, because people have died from misinterpreting it. A weather station once predicted that a river would rise x ± y meters. Those who used the prediction thought the river could rise at most x meters, and it turned out the river rose x + y meters, flooding the area.    
    • Rely more on observation than theory. All models are wrong because all models are simplifications of the universe. One bad simplification is overfitting your data, which is the act of mistaking noise for signal. But some models are useful as long as you test them in the real world rather than in the comfort of a statistical model. The goal of the predictive model is to capture as much signal as possible and as little noise as possible.  
    • Use the aggregate prediction. Quite a lot of evidence suggests that the aggregate prediction is often 15 to 20 percent more accurate than an individual prediction made by one person. But remember that this is not always true: an individual prediction can be better, and the aggregate prediction might be bad simply because whatever you are trying to predict can't be predicted. 
    • Combine computer predictions with your own intelligence. A visual inspection of a graphic showing the interaction between two variables is often a quicker and more reliable way to detect outliers in your data than a statistical test.
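    The Bayes's theorem advice in the list above can be made concrete with a small sketch. All the probabilities below are invented for illustration: a prior chance of rain, plus a forecaster whose "rain" signal is more common on rainy days than on dry ones.

```python
# A minimal Bayes's theorem update: P(rain | forecaster says rain).
# Every number here is an assumption made up for the example.

def bayes_update(prior, p_signal_given_true, p_signal_given_false):
    """Posterior probability via Bayes's theorem."""
    numerator = p_signal_given_true * prior
    evidence = numerator + p_signal_given_false * (1.0 - prior)
    return numerator / evidence

prior_rain = 0.30            # it rains on 30% of days (assumed base rate)
p_says_rain_if_rain = 0.80   # forecaster says "rain" on 80% of rainy days
p_says_rain_if_dry = 0.20    # ...and on 20% of dry days (false alarms)

posterior = bayes_update(prior_rain, p_says_rain_if_rain, p_says_rain_if_dry)
print(round(posterior, 3))  # → 0.632
```

    In the spirit of the list, the posterior should keep moving as new evidence arrives: the next time the forecaster reports, the 0.632 goes back in as the new prior.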

    This sounds reasonable, right? So why are we seeing so many experts who are not really experts? According to the book, the more interviews an expert had done with the press, the worse his/her predictions tended to be. The reason is that the experts who are really experts, and who are aware of the fact that they can't predict everything, tend to be boring on television. It is much more entertaining to invite someone who says "the stock market will increase 40 percent this year" than someone who says "I don't know, because it is impossible to predict the stock market."
    So we all should learn how to make better predictions and learn which predictions we should trust. If we can, we might avoid another Credit Crisis, another 9/11, another Pearl Harbor, another Fukushima, and unnecessary deaths from another Hurricane Katrina.

    June 5, 2015

    Video: Marketing for Indies - PR, Social Media, and Game Trailers

    I found a video on YouTube called "Marketing for Indies - PR, Social Media, and Game Trailers." It is rather long, but very interesting.

    Key points
    • He sent out around 1500 requests for people to cover the game Albino Lullaby on YouTube (and other services like Twitch), in articles (including blogs), or through podcasts. In general, it was the smaller YouTube accounts that responded to the requests. He didn't get any response at all when asking people to write articles about the game, and at one point he even gave up trying. But he kept sending requests to around 4 big (popular) people each week, and in the end one big name wrote an article about the game. Then other big names followed, because they saw that article and were now more interested in writing about the game than before. He also argued that you need both popular and less popular accounts, because the popular accounts will Google the game and find the articles and videos by the less popular accounts. If they hadn't found those articles, they would have ignored the game. 
    • You should know what the rules are, but also be ready to break them. 
    • Getting noticed is really hard.
    • You will need a press kit. It should include:
      • Description: Make sure you can explain your game in 1 sentence, 1 paragraph, and 1 article (each should describe the entire game). Make sure you test it on real people and notice how they react.
      • Press Releases: Anything significant can become a press release. Most people who write articles will copy-and-paste the press release, so write a good one. But most will not care about press releases - they want to play the game itself.
      • Trailer
      • Screenshots
      • Demo
      • Links 
    • Use a spreadsheet to keep track of the requests.
    • Interact with popular accounts on Twitter, so they will recognize you when you reach out to them. But don't spam - he was kicked out of Reddit for spamming. 
    • Above everything, you have to make an amazing game.
    • Be open with what you do, have a blog and stream the development process (some stream their entire day). People love to read behind-the-scenes and stories about the little guy vs the evil big company.
    • Ask yourself: What can I do to make it easier for someone to write an article about my game?
    • Marketing of the game begins before the development begins. Start a Twitter account and a blog today and start getting recognized. 
    • When on Twitter, use the hashtags #indiegamedev, #videogame and use the website RiteTag to find other hashtags that you might use. 
    • Be 100 percent data driven - opinions don't matter!
    • 99 percent of the players will not play your game, but they will watch your trailer, so make sure the quality of the trailer is 100 percent.

    Why you should be pronoid and not paranoid

    This is an excerpt from the book The Sell, written by Fredrik Eklund, who is a top New York City real estate broker.
    I'm going to teach you a word: pronoia. It's the opposite of paranoia. Paranoia is when you think the world is against you in some shape or form. Pronoia is the happy opposite: having the sense that there is a conspiracy that exists to help you. I just decided that's how it is, because I said so. I run my life on pronoia, and I want you to start, too. Right now. Did you know there's actually a great conspiracy that exists to help you? It's called the universe. Step a little closer. Let me whisper it in your ear. I'm telling you that the world is set up to secretly benefit you!
    Tell yourself that the person in front of you in the express lane, who is suddenly backing up the line with her credit card that won't work, is giving you a minute to flip through a tabloid and get a laugh at some of the preposterous stories and pictures. See how pronoia can make that frustrating moment a gift?

    June 4, 2015

    Rushstudy - The vocabulary test app that will help you study with the help of mathematical models

    A few days ago I read the book Predictive Analytics. A chapter in the book was about the computer that won Jeopardy! One of the humans who competed against the machine, Roger Craig, used a special way to prepare. This is an excerpt from the book:
    ...To prepare for his appearance in the show, which he's craved since age 12, Roger did for Jeopardy! what had never been done before. He Moneyballed it.
    Roger optimized his study time with prediction. As a mere mortal, he faced a limited number of hours per day to study. He rigged his computer with Jeopardy! data. An expert in predictive modeling, he developed a system to learn from his performance practicing Jeopardy! questions so that it could serve up questions he was likely to miss in order to efficiently focus his practice time on the topics where he needed it the most.
    This bolstered the brainiac for a breakout. On Jeopardy!, Roger set the all-time record for a single-game win of $77,000 and continued on, winning more than $230,000 during a seven-day run that placed him as the third-highest winning contestant (regular season) to date.
    This sounded like a good idea, so I decided to develop a similar system for everyone to use. The result is "Rushstudy - The vocabulary test app that will help you study with the help of mathematical models." This is a screenshot:

    You can upload your own files, but if you just want to test it, I've already loaded it with the periodic table. It will keep track of the questions you answer wrong, and the probability of getting those questions in the future will be higher. I've also added a chart so you can keep track of your learning process. This is not the final version, so I will improve it in the future.  
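    The core idea - raising the probability of previously missed questions - can be sketched with a simple weighting scheme. This is a simplified illustration of the principle, not the exact model in the app; the halving and doubling factors are arbitrary.

```python
import random

# A simplified sketch: questions answered wrong get a higher weight,
# so they are drawn more often. The 0.5/2.0 factors and the weight
# floor are illustrative choices, not the app's actual model.

class QuestionPicker:
    def __init__(self, questions):
        self.weights = {q: 1.0 for q in questions}  # equal weights at first

    def pick(self):
        questions = list(self.weights)
        weights = [self.weights[q] for q in questions]
        return random.choices(questions, weights=weights)[0]

    def record(self, question, correct):
        if correct:
            # Correct answers lower the weight, but never all the way to zero
            self.weights[question] = max(0.25, self.weights[question] * 0.5)
        else:
            # Wrong answers double the weight, so the question returns sooner
            self.weights[question] *= 2.0

picker = QuestionPicker(["hydrogen", "helium", "lithium"])
picker.record("lithium", correct=False)
picker.record("hydrogen", correct=True)
# "lithium" is now four times as likely to be drawn as "hydrogen"
print(picker.weights)
```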

    Looks interesting? You can test it here: Rushstudy - The vocabulary test app that will help you study with the help of mathematical models

    May 31, 2015

    Book Review: Predictive Analytics

    According to Wikipedia, predictive analytics is an area of data mining that deals with extracting information from data and using it to predict trends and behavior patterns. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future.
    If you have ever bought a book on Amazon, at the bottom of the page you will see the small text "Customers Who Bought This Item Also Bought..." If you for example buy the book Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel, you will see that Amazon recommends the books:
    • Data science for business
    • Competing on analytics
    • Predictive analytics for dummies
    This is predictive analytics: Amazon's engineers use algorithms to try to determine which books you might be interested in reading. And it really works. This is an excerpt from the book The Everything Store about what happened when Amazon installed a predictive analytics system: 
    Eric Benson took about two weeks to construct a preliminary version that grouped together customers who had similar purchasing histories and then found books that appealed to the people in each group. That feature, called Similarities, immediately yielded a noticeable uptick in sales and allowed Amazon to point customers toward books that they might not otherwise have found. Greg Linden, an engineer who worked on the project, recalls [Jeff] Bezos coming into his office, getting down on his hands and knees, and joking, "I'm not worthy."
    I myself have used predictive analytics a few times before. When I participated in a Kaggle competition to predict whether a sound file included a sound made by a whale, I used so-called random forests to make that prediction. At the bottom of each article in this blog, I've also used predictive analytics to predict related blog posts, in a similar way to how Amazon recommends books.
    To learn more about predictive analytics I decided to read the book Predictive Analytics. This book tells you why you need predictive analytics and what you can do with it, not how. So you will not find a single mathematical equation in the book, though the author does describe some basic algorithms, such as decision trees.
    The book is filled with examples from the author's own work in the field and from what other people have predicted. One chapter is about the machine that learned how to predict Jeopardy! answers. Other chapters include examples of how you can predict which employees will quit their jobs, where a crime might happen, and how Barack Obama used predictive analytics to win an election.
    If you, like me, have been involved in the field, you will be familiar with most of the examples. I've heard before about the Jeopardy! machine and the large US retail chain that messed up by sending pregnancy-related discounts to a teen they had predicted was pregnant. Her father thought it was an outrageous accusation, and then it turned out she really was pregnant, but she hadn't told her father about it. But if you read through the book and recognize everything, it will confirm that you know what you should know within the field, and you can move on to applying the algorithms. And if you learn something new, then that's just great.

    May 25, 2015

    How to optimize Unity and other tips and tricks as well as best practices

    This was a link roundup with articles describing how to optimize Unity as well as other tips and tricks and best practices. It has moved to here: Learn how to optimize your Unity project.

    May 24, 2015

    How to create water wakes in Unity?

    The question is: How do you create water wakes in Unity? If you have ever seen a boat on a lake, you notice that it will leave small waves behind it while it is floating forward. But how do you create those waves, and how do you combine them with a moving sea in a computer environment like Unity? I've earlier made a boat that will float in Unity with realistic buoyancy, and now I felt I wanted to learn how to add water wakes. No boat is complete without them.
    The answer to the question is using an algorithm called iWave. There are other alternatives, but I believe iWave is the most popular algorithm. It was first published in 2008 in a report called Simulation of Interactive Surface Waves. The report was written by Jerry Tessendorf, who is an expert in the field. If you've ever seen a computer animated sea in a movie, like the sea in Titanic, the algorithm behind the realistic moving sea was written by Jerry Tessendorf.
    The algorithm itself is not that easy to understand, but the simplified version is actually very easy to implement - and it is super fast. The only expert tip I have is to change the parameter alpha if you encounter gigantic waves. The algorithm itself is not that stable, so alpha is like a damping parameter. 
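    The sketch below is not iWave itself (the real algorithm convolves the height field with a vertical-derivative kernel); it is a much simpler damped 1D wave step, shown only to illustrate the role a damping parameter like alpha plays. All names and numbers are illustrative.

```python
# NOT the actual iWave algorithm - just a damped 1D wave step that shows
# what a damping parameter like alpha does to a height field. The grid
# size, alpha, wave speed, and time step are all illustrative.

def wave_step(h, h_prev, alpha, c, dt, dx):
    """One step of a damped wave equation (Verlet-style integration)."""
    h_next = h[:]
    for i in range(1, len(h) - 1):
        laplacian = (h[i - 1] - 2 * h[i] + h[i + 1]) / (dx * dx)
        # Velocity from the last two states, bled off by the damping term
        velocity = (h[i] - h_prev[i]) * (1.0 - alpha * dt)
        h_next[i] = h[i] + velocity + (c * c) * (dt * dt) * laplacian
    return h_next, h

h = [0.0] * 11
h[5] = 1.0     # an initial ripple, e.g. left by a floating cube
h_prev = h[:]  # start at rest

for _ in range(50):
    h, h_prev = wave_step(h, h_prev, alpha=0.4, c=1.0, dt=0.1, dx=1.0)

print(max(abs(x) for x in h) < 1.0)  # the ripple spreads and dies down
```

    With alpha set to zero, no energy is ever removed and numerical errors can grow into the gigantic waves mentioned above; raising alpha trades liveliness for stability.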
    My final results look like this:

    You can interact with the surface with your mouse, and the small cube will float with my old buoyancy algorithm. The difference now is that the cube will leave ripples whenever it is bouncing up and down. 
    Looks interesting? Don't worry, because I will soon write a tutorial on how to do it in Unity. It will be published here: Tutorial on how to make a boat in Unity.  

    May 18, 2015

    Book review: The Quest for Artificial Intelligence


    This weekend I've watched two movies about smart robots. The first was called Chappie and it tells the story of a robot named Chappie, who is the first robot with the ability to think and feel for himself. Chappie is therefore much smarter than any human. The second movie, Ex Machina, is also about a smart robot with the same abilities as Chappie, although the robot in Ex Machina looks more like a real human.
    The question is how realistic are those movies? Will we soon see robots that are as smart as Chappie? To be able to answer that question, we have to take a look at the history of artificial intelligence, or AI. And that's why I read the book The Quest for Artificial Intelligence by Nils Nilsson.

    The Quest for Artificial Intelligence was published in 2010, and is therefore up to date with most of the recent history of artificial intelligence. But AI is a field that is moving fast these days, so some of the latest achievements are not included, such as the algorithm that learned how to play the game Breakout.
    Nils Nilsson knows what he is talking about. Among other projects, he has developed the famous A* search algorithm. If you have ever played a computer game where the characters are controlled by the computer, the game is probably using the A* algorithm to help the characters find their way around the map.
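    The core of A* is small enough to sketch in a few lines. Below is a minimal grid version with a Manhattan-distance heuristic; the grid and unit step costs are illustrative, not code from the book.

```python
import heapq

# A compact sketch of A* on a small grid, with a Manhattan-distance
# heuristic. The grid layout and 4-way unit costs are illustrative.

def a_star(grid, start, goal):
    """Length of the shortest 4-way path from start to goal, or None."""
    def heuristic(pos):
        # Manhattan distance never overestimates 4-way movement,
        # which is what lets A* return optimal paths here
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    open_heap = [(heuristic(start), 0, start)]  # entries: (g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # a cheaper route to this node was already found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != "#":
                if g + 1 < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = g + 1
                    heapq.heappush(open_heap, (g + 1 + heuristic((nr, nc)), g + 1, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(a_star(grid, (0, 0), (2, 3)))  # → 5
```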
    The early history of AI begins hundreds of years ago, when the ancient Greeks dreamed about self-propelled chairs. It continues with Leonardo da Vinci, who in 1495 sketched designs for a humanoid robot in the form of a medieval knight, and ends with a mechanical duck.
    But the real history of artificial intelligence begins after the Second World War, with a series of meetings. At these meetings, researchers described early attempts to highlight features in images and to program a computer to play chess. The first meeting was called "Session on Learning Machines," because the term artificial intelligence had not yet been coined. Then someone suggested it. Not everyone was convinced, but most researchers eventually began to use the name artificial intelligence. 
    "So cherish the name artificial intelligence. It is a good name. Like all names of scientific fields, it will grow to become exactly what its field comes to mean."
    What happened after these early meetings was that the quest for artificial intelligence began at the same time as the computers improved. But everything didn't go smoothly. Nils Nilsson calls the downturns in the quest for artificial intelligence "AI winters." What happened was that governments around the world sponsored researchers to develop AI algorithms (generally for military purposes). But since AI is a difficult topic, these algorithms didn't always work.
    When they didn't work, the governments decided to decrease the funding, and the researchers had to endure an AI winter with little or no money. Then the computers and algorithms improved, the governments were yet again excited, then the algorithms didn't work as promised, and another AI winter happened.
    According to Nils Nilsson, what these AI winters led to was scared researchers. The naysayers around the researchers could give comments like:
    "Most people working on speech recognition were acting like mad scientists and untrustworthy engineers." 
    So the researchers decided to develop simple algorithms that would actually work as promised, and to ignore the more complicated algorithms that didn't always work as promised. For example, the transcription of spoken sentences to their textual equivalents is now largely a solved problem. But that is not true intelligence. The computer still can't understand natural language speech (or text) well enough for someone to have a dialog with it, like in the movie Her. The latter is generally called strong AI, while the former is called weak AI. So throughout history, most researchers have focused on weak AI in order not to lose any respect. They were saying:
    "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability."
    So if you are wondering why artificial intelligence is not as intelligent as in the movies Chappie, Ex Machina, and Her, the answer is that most researchers have focused on weak AI. The emphasis has been on using AI to help humans rather than to replace them. Yes, a computer can beat a human in a game of chess, but that computer is not intelligent. Deep Blue is considered the best chess player in the world, but Deep Blue doesn't know that it is playing chess. 
    "Does Deep Blue use artificial intelligence? The short answer is no. Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. ...Deep Blue relies more on computational power and a simpler search and evaluation function."
    One idea here is to hold computer chess tournaments that admit only programs with severe limits on computation. This would concentrate attention on scientific advances. 
    The last chapter in the book is called "The Quest Continues," so artificial intelligence is still far away from being a solved problem. We may have algorithms that know how to drive a car, how to paint a painting indistinguishable from true art, and how to compose music. But we are still far away from robots like Chappie.
    This short text is far from being a summary of the book The Quest for Artificial Intelligence - its 700 pages are filled with facts and anecdotes. And don't worry: although you will need math to develop AI algorithms, the book includes only some math to explain a few algorithms; it is more a history book than a math book. So if you are interested in the history of AI, then you should read it.

    May 16, 2015

    How to get more data than what you already have by using data you already have

    I've read an article about Andrew Ng called Inside The Mind That Built Google Brain: On Life, Creativity, And Failure. He previously worked for Google and is now working for Baidu, where he works on speech recognition. To make speech recognition work, you need a lot of data. One clever way he found to get more data out of the data he already had is this:
    Then one of the things we did was, if we have an audio clip of you saying something, we would take that audio clip of you and add background noise to it, like a clip recorded in a cafe. So we synthesize an audio clip of what you would sound like if you were speaking in a cafe. By synthesizing your voice against lots of backgrounds, we just multiply the amount of data that we have. We use tactics like that to create more data to feed to our machines, to feed to our rocket engines.
    The rocket engine he is referring to is the speech recognition algorithm, where data is the fuel that powers the rocket.
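    The augmentation trick can be sketched in a few lines: treat a clip as a list of samples and overlay scaled background noise to synthesize new examples. The names, gain, and tiny signals below are illustrative, not Baidu's actual pipeline.

```python
# A minimal sketch of the augmentation trick: overlay scaled background
# noise on a clean clip to synthesize extra training examples. Signals
# are plain lists of samples; all names and values are illustrative.

def mix(signal, noise, noise_gain=0.3):
    """Overlay background noise onto a clean clip, sample by sample."""
    return [s + noise_gain * n for s, n in zip(signal, noise)]

clean = [0.0, 0.5, 1.0, 0.5, 0.0]  # one recorded utterance
backgrounds = {
    "cafe":   [0.2, -0.1, 0.05, 0.0, -0.2],
    "street": [-0.3, 0.3, -0.3, 0.3, -0.3],
}

# One clean clip becomes one extra training example per background
augmented = {name: mix(clean, noise) for name, noise in backgrounds.items()}
print(len(augmented) + 1)  # → 3 examples from a single recording
```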

    May 12, 2015

    Random Show Episode 28

    A new episode of the Random Show with Kevin Rose (founder of Digg) and Tim Ferriss (author of The 4-Hour Workweek) is out! This is episode 28.

    Lessons learned
    • Kevin Rose has begun to move away from the digital world and towards the old-fashioned analog world. Tim Ferriss is going the same way, and he often puts his phone in airplane mode to disconnect.
    • Neither Kevin Rose nor Tim Ferriss is interested in Apple's new watch. Nor does Tim Ferriss need an iPad. He prefers a smartphone with a large screen and a flat keyboard he can connect to the phone. 

    • Tim Ferriss is experimenting with a new diet (as usual) based on ketosis. To help him, he's using a glucometer called Precision Xtra. 
    • Kevin Rose drinks a lot of coffee (and tea), and he has now bought a coffee bean roaster called the Fresh Roast SR700. Tim Ferriss, on the other hand, has a Hario V60, which is a Japanese coffee brewing method.
    • Kevin Rose has bought a new camera, a Leica M-P, which is a camera that's both digital and analog.
    • Kevin Rose recommended Breakaway Matcha, which is a brand of green tea, and he is also into fermenting his own food.
    • Both Tim Ferriss and Kevin Rose are using an egg-cooking-machine called Cuisinart Egg Cooker.
    • Tim Ferriss recommended the book A Wrestling Life: The Inspiring Stories of Dan Gable, which is a biography on the wrestler and coach Dan Gable.
    • The "Tim Ferriss Experiment" is out! It is a video-series where Tim Ferriss is experimenting with various skills, like shooting a gun, learning a language, and driving a rally car.

    If you want to watch the rest of the episodes, you can find them here: The Random Show with Kevin Rose and Tim Ferriss.

    May 11, 2015

    How to tie your running shoes to prevent blisters with a "Heel Lock" or "Lace Lock"

    When you buy new shoes and lace them for the first time, you might have wondered why the upper part of the shoe sometimes has two holes. I've wondered that, but I never bothered to find out why. Today I found a video explaining exactly that. Apparently, if you use the second hole (in the way described in the video below), it will prevent blisters. This technique is called a "Heel Lock" or "Lace Lock."

    May 10, 2015

    Book review: I, Robot by Isaac Asimov

    A term everyone in the world should be aware of is the technological singularity, the moment in time when machines become more intelligent than humans. If you have seen the movie series Terminator, you know what I'm talking about. In Terminator, this moment is called "judgment day" and is almost the end of mankind. But the technological singularity doesn't have to be something bad. What if the smart machines can help us build a better world? This is what the book I, Robot by Isaac Asimov is all about. 
    I, Robot is the first book I've read written by Isaac Asimov. Ever since I wrote a biography on Elon Musk, I've wanted to read one of Isaac Asimov's books because he is one of Elon Musk's favorite authors. But I had to wait until one of my neighbors left I, Robot in our local book club, which is basically a shelf in a window.
    The book was first published in 1950 and became a huge success - it has over 148 thousand ratings on Goodreads. That's not strange, because the book is very interesting. Elon Musk is also interested in the technological singularity, and I've earlier read a few books on the topic.
    What's interesting with I, Robot is that it covers the development of smart machines, from stupid robots to machines that can build other machines. It will also look at some of the pitfalls that might happen when we develop them.
    I, Robot consists of several smaller stories that are somewhat connected, although you can read them independently. The first story is about a robot designed to take care of a girl, and what people in general might think of a smart robot. Other stories in the book are based on the Three Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
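    The key point is the strict precedence among the laws: a lower-numbered law always overrides a higher-numbered one. As a purely hypothetical illustration (nothing from the book; the function name and flags are my own), that precedence can be sketched as a priority check:

```python
# Hypothetical sketch of the Three Laws as a strict priority check.
# Each law may only veto an action if no higher-priority law applies.

def allowed(harms_human: bool, disobeys_human: bool, harms_self: bool,
            self_harm_ordered: bool = False) -> bool:
    # First Law: protecting humans overrides everything else.
    if harms_human:
        return False
    # Second Law: obedience, subordinate only to the First Law.
    if disobeys_human:
        return False
    # Third Law: self-preservation, subordinate to the first two laws
    # (an order from a human can override it).
    if harms_self and not self_harm_ordered:
        return False
    return True

# The "go away" story: hiding violates none of the laws, so the
# robot is allowed to do it, even though it's not what the human meant.
print(allowed(harms_human=False, disobeys_human=False, harms_self=False))  # True
```

    Many of the book's stories turn on exactly this gap: an action can satisfy all three laws while still being the "wrong" thing from a human point of view.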
    You might think that mankind is safe if we design robots to follow these laws. But that's far from the truth. Machines are both smart and stupid. What happens if a human tells a robot to "go away," meaning only that the robot should move a few meters away? The robot will follow "A robot must obey the orders given it by human beings," and might interpret "go away" as an order to hide. This is one of the stories in the book: the "smart" robot hides among other robots, and the main characters have to figure out which of the robots is the one hiding.
    These robots are designed by human beings, and as we all know, human beings are greedy. What will happen if a human being decides to ignore one of the Three Laws of Robotics? That's another story in the book. And what happens if we tell a robot to build a spaceship? What if a robot becomes so human that it's impossible to tell the difference, and that robot decides to become a politician? Will we be able to tell them apart? How can comparatively stupid humans outsmart an intelligent machine?
    So if you are interested in what might happen when we have smart robots all around us, then you should read I, Robot. Isaac Asimov may have written it in the 1950s, but it is still relevant.

    May 7, 2015

    Book review: The Sell by Fredrik Eklund

    Source: @fredrikeklundny
    As I promised a few weeks ago, I've read the book The Sell - The secrets of selling anything to anyone by Fredrik Eklund. He's one of the best real-estate agents in New York, and probably the world, because New York is the hottest real-estate market there is. If you can sell in New York, you can sell anywhere. I've read a few other books on selling, and my goal is to read one new book on selling every year, a habit one of those books recommended. 
    So this year I read The Sell. The author, Fredrik Eklund, is Swedish, and so am I, so I knew who he was even before I began reading it. He is not that famous in Sweden; I believe he is best known for suddenly switching to English during an interview on Swedish television that had begun in Swedish. Everyone has made fun of him since then, even though it turned out the story was more complicated than that. I will not tell the entire story, but he didn't do it by mistake, nor had he somehow forgotten how to speak Swedish. 
    Like many other books on selling, The Sell tells you that all salespeople have to be structured, dedicated to selling, well-groomed, and well-dressed, and that both you and the person you are selling to have to be satisfied after the deal. 
    The difference between this book and the other books I've read on selling is that The Sell has been adapted to the Internet age. Fredrik Eklund has included an entire section on how to be popular on social media, like Twitter and Instagram, because those tools are now important. He says that 25 percent of his business originates from social media.
    Another large part of the book is dedicated to the person doing the selling. Fredrik Eklund says that you should "forget selling and begin by finding yourself." You have to believe in yourself and accept setbacks. You have to exercise, sleep (Eklund always sleeps at least 8 hours), and eat healthy food. 
    The part of the book that differs most from other books on selling is the part telling you to be yourself. "If you want people to believe in you and in what you have to offer, you have to believe in yourself." I don't know if you have seen it, but Fredrik Eklund is famous for doing the so-called "high-kick," a kick in the air while yelling. He even does it in front of clients, including famous ones, so that they will remember him. Because selling is always competitive, you have to find a way to stand out. "The crazy and happy ones, not the normal and bitter ones, become the real superstars." Another example is that Eklund tells us to leave the black funeral suit at home and dress in something with more color; he himself often wears a blue suit.
    So if you want to hear a success story (a Swedish guy without contacts or money becomes a porn star in the US and then the top real-estate broker in New York), improve your selling skills, and learn how to improve your life, then you should read The Sell.