June 20, 2015

Peter Lynch was right - you can't predict the economy

This is a quote from the famous investor Peter Lynch:
I spend about 15 minutes a year on economic analysis. The way you lose money in the stock market is to start off with an economic picture. I also spend 15 minutes a year on where the stock market is going.
According to Peter Lynch, it's easy to overestimate the skill and wisdom of professionals who try to predict the economy. If an "expert" on television with a fancy job title is predicting the economy, it's easy for those who haven't studied Peter Lynch to believe the expert knows something that you and I don't. The truth is that the expert knows as little as everyone else, even though many experts believe otherwise. 
Peter Lynch explains this in more depth in his book One Up on Wall Street. According to the book, there are 60,000 economists in the US. Many of them are employed full-time trying to forecast recessions and interest rates. If they could do it successfully twice in a row, they'd all be millionaires by now. But they aren't. 
The idea that it's impossible to predict the economy might sound strange at first. I myself had a hard time accepting it. The question is why so many experts keep wasting their time, often without being aware of it, while the amateurs continue to listen to them.
Because Peter Lynch doesn't explain exactly why in his book, I had to wait a few years to really understand why it's impossible to predict the economy. I finally found the answer in the book The Signal and the Noise by Nate Silver. That book includes several chapters on why so many predictions fail, including why it's impossible to predict earthquakes, why the US failed to predict Pearl Harbor, and why some people didn't trust those who predicted Hurricane Katrina. Another chapter in the book is about why it's unnecessary to listen to statements like:
  • The economy will create 150,000 jobs next month
  • GDP will grow by 3 percent next year
  • Oil will rise to $120 per barrel

The secret truth about economic forecasts
According to Nate Silver, economic forecasts are blunt instruments at best, rarely able to anticipate economic turning points more than a few months in advance. In fact, these forecasts have failed to "predict" recessions even once they were already under way: when the three most recent recessions were later determined to have begun, a majority of economists didn't think we were in one. 
Peter Lynch makes the same point as Nate Silver. Again, a quote from his book:
Nobody called to inform me of an immediate collapse in October [1987], and if all the people who claimed to have predicted it beforehand had sold out their shares, then the market would have dropped the 1,000 points much earlier due to these great crowds of informed sellers.
Every year I talk to executives of a thousand companies, and I can't avoid hearing from the various gold bugs, interest-rates disciples, Federal Reserve watchers, and fiscal mystics quoted in the newspapers. They can't predict the markets with any useful consistency, any more than the gizzard squeezers could tell the Roman emperors when the Huns would attack.
One of the clearest examples of how inaccurate experts are at predicting the economy is the Credit Crisis of 2008. About a year earlier, economists in the Survey of Professional Forecasters expected the economy to grow at a just slightly below-average rate of 2.4 percent in 2008. They saw almost no chance of a recession as severe as the one that actually hit, in which GDP shrank by 3.3 percent. It was a scenario the economists thought would happen with a probability of just 3 percent. 
Nor is this an isolated case. In a report covering data from 1968 up until now, the actual GDP figure fell outside the Survey of Professional Forecasters' prediction interval almost half the time. So it is clear that the economists weren't merely unlucky - they fundamentally overestimated the reliability of their predictions.

Why is it so difficult to predict the economy? 
According to Nate Silver, economic forecasters face three fundamental challenges:
  1. It is very hard to determine cause and effect from economic statistics alone. There are millions of statistical indicators in the world, and a few will happen to correlate with stock prices and GDP purely by coincidence. For example, ice cream sales and forest fires are correlated because both occur more often in the summer, but there's no causation. With so many economic variables to pick from, someone will always find something that fits the past data by chance. 
  2. The economy is always changing. If economists predict an upcoming recession, the government and the Federal Reserve will take steps to soften it, so forecasters have to predict political decisions as well as economic ones. The American economy has also shifted from one dominated by manufacturing to one dominated by the service sector, which makes old models that used to work obsolete. 
  3. The data economists have to work with isn't that good. Past data was produced under different governments with different policies, so it's difficult to assume it behaves the same way today. The available data may also be limited: the period from 1986 to 2006 covers 20 years, but those years contained just two mild recessions.  
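The coincidental-correlation problem in point 1 is easy to demonstrate with a simulation. The sketch below uses purely hypothetical numbers: it generates 1,000 random "indicators" of pure noise and keeps the one that best correlates with an equally random "GDP" series. With enough candidates to pick from, a seemingly strong historical fit always appears by chance.

```python
import random

random.seed(42)

# A hypothetical "GDP growth" series: 40 quarters of pure random noise.
gdp = [random.gauss(0, 1) for _ in range(40)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Generate 1,000 unrelated "indicators" (more noise) and keep the
# strongest historical fit with our fake GDP series.
best = max(
    abs(correlation(gdp, [random.gauss(0, 1) for _ in range(40)]))
    for _ in range(1000)
)
print(f"best correlation among 1,000 random indicators: {best:.2f}")
```

Even though nothing here is related to anything else, the best of 1,000 random indicators typically shows a correlation around 0.5 with the fake GDP series, which would look impressive to anyone who didn't know how it was found.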
So what's the solution?
We know that Peter Lynch's solution is to spend about 15 minutes a year on economic analysis. Nate Silver, on the other hand, argues that if you have to listen to predictions of the economy, you should listen to the average or aggregate prediction rather than that of any one economist. These aggregate forecasts are about
  • 20 percent more accurate than the typical individual's forecast at predicting GDP
  • 10 percent better at predicting unemployment
  • 30 percent better at predicting inflation. 
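The benefit of aggregation is easy to see in a toy simulation with hypothetical numbers. The sketch below lets 50 simulated forecasters each make a noisy guess at a known outcome; their individual biases tend to cancel out, so the averaged forecast comes out more accurate than the typical individual one.

```python
import random

random.seed(0)

true_gdp_growth = 2.5  # the hypothetical outcome being forecast

# 50 forecasters, each making an unbiased but noisy guess.
forecasts = [true_gdp_growth + random.gauss(0, 1.0) for _ in range(50)]

# The aggregate forecast is simply the average of all individual ones.
aggregate = sum(forecasts) / len(forecasts)

typical_individual_error = sum(
    abs(f - true_gdp_growth) for f in forecasts
) / len(forecasts)
aggregate_error = abs(aggregate - true_gdp_growth)

print(f"typical individual error: {typical_individual_error:.2f}")
print(f"aggregate forecast error: {aggregate_error:.2f}")
```

Note that averaging only cancels independent errors; if all forecasters share the same blind spot, as they arguably did in 2008, the aggregate inherits it.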

June 19, 2015

How to tell stories with data and what's the future of journalism?

Pulitzer-prize winning journalist and editor of the New York Times data journalism website The Upshot, David Leonhardt, shares the tricks of the master storyteller's trade. In conversation with Google News Lab data editor Simon Rogers, he shows how data is changing the world - and your part in the revolution.


Key points
  • Journalism is not in decline. Journalism (at least American journalism) is better today than it has ever been, including just 10 years ago. Yes, there are challenges, and the business model is changing. But journalism is still keeping people informed about the world and has not been replaced by click-bait articles. 
  • Why journalism is better today than 10-20 years ago:
    • Journalism is more accurate than it used to be (though not perfectly accurate). One reason is that it's easier to fix inaccurate information, such as spelling errors, in digital articles than in printed ones. It's also easier for the audience to interact with articles and journalists when everything is digital, and the audience can help improve the articles. 
    • The tools and techniques for telling a story have improved. Today it's easy to create interactive visualizations, like a map that zooms in on the reader's area and shows different information depending on where the reader lives. These techniques didn't exist 20 years ago.
    • Journalists are using better data than before. As long as journalists use the data correctly, the result is better than it used to be. 
    • The audience for ambitious journalism is larger than it was just a few years ago. People from across the globe can read the New York Times. 
  • Most articles in the New York Times are not traditional articles with blocks of text - they are interactive visualizations, essays, Q&As, and videos. But they are not click-bait - they cover serious topics, and the people behind them have put a lot of effort into them. The traditional article is no longer always the smartest and clearest way to tell a story.
  • The New York Times sometimes publishes two versions of an article: one traditional, text-only version and a similar one with more visualizations. In one example, the version with more visualizations got 8 times the traffic of the text-only version.
  • Journalists are becoming more and more specialized within a certain area.
  • You can probably find big opportunities within local news, but only if you are using data. 

June 18, 2015

How to make better predictions and decisions


I've read a book called The Signal and the Noise: Why So Many Predictions Fail - but Some Don't by Nate Silver. The basic idea behind the book is that ever since Johannes Gutenberg invented the printing press, the amount of information in the world has increased, making it more and more difficult to make good predictions because of the noise. The Internet has increased the information overload further, making good predictions even harder. A lot of people still make what they think are good predictions, even though they shouldn't make predictions at all (*cough* economists), because some things are simply impossible to predict. 
What most people do when trying to predict something from the information available, like a stock price, is pick out the parts they like while ignoring the parts they don't. If a person is trying to decide whether to keep a position in, let's say, Tesla Motors, that person will read everything that confirms it's a good idea to keep the position and hang out with people who share the same ideas, while ignoring the signs that Tesla Motors's stock may be a bubble. 
You might argue that only amateurs pick out the parts they like while ignoring the parts they don't. But if you can't remember the 2008 stock market crash, The Signal and the Noise includes an entire chapter describing it. It turned out that those who worked at the rating agencies, whose job it was to measure risk in financial markets, also picked out the parts they liked while ignoring the signs of a housing bubble. For example, the phrase "housing bubble" appeared in just eight news accounts in 2001, but jumped to 3,447 references by 2005. And yet, the rating agencies said they missed it.


Another example is the Japanese earthquake and the following tsunami in 2011. The book includes an entire chapter on predicting earthquakes. It turns out that it's impossible to predict when an earthquake will happen. What you can predict is that an earthquake will happen somewhere, and roughly what magnitude it might have. The Fukushima nuclear reactor had been designed to handle a magnitude 8.6 earthquake, in part because seismologists concluded that anything larger was impossible. Then came the magnitude 9.1 earthquake. 
The Credit Crisis of 2008 and the 2011 Japanese earthquake are not the only examples in the book:
It didn't matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.  
The reason we humans are bad at making predictions is that we are humans. A newborn baby can recognize the basic pattern of a face because evolution has taught it how. The problem is that these evolutionary instincts sometimes lead us to see patterns where there are none. We are constantly finding patterns in random noise.

So how can you improve your predictions?
Nate Silver argues that we can never make perfectly objective predictions. They will always be tainted by our subjective point of view. But we can at least try to improve the way we make predictions. This is how you can do it:
  • Don't always listen to experts. You can listen to some experts, but make sure the expert can really predict what he or she claims to predict. The octopus that predicted World Cup results was not an expert, and no one can predict an earthquake. What you can predict is the weather, but the public doesn't trust weather forecasts, and that can be dangerous. Several people died during Hurricane Katrina because they didn't trust the forecasters who said a hurricane was on its way. Another finding from the book is that weather forecasters on television tend to overestimate the probability of rain, even when the computer forecast predicts sunny weather, because people will be upset if they predict sun and it then rains.  
  • Incorporate ideas from different disciplines, regardless of where they fall on the political spectrum.
  • Find a new approach, or pursue multiple approaches at the same time, if you aren't sure the original one is working. Making a lot of predictions is also the only way to get better at it.
  • Be willing to acknowledge mistakes in your predictions and accept the blame for them. A good prediction should change when you find more information. But wild day-to-day gyrations in your prediction are a bad sign: you probably have a bad model, or whatever you're predicting isn't predictable. 
  • See the universe as complicated, perhaps to the point of many fundamental problems being inherently unpredictable. If you make a prediction and it goes badly, you can never really be certain whether it was your fault or not, whether your model is flawed, or if you were just unlucky. 
  • Try to express your prediction as a probability by using Bayes's theorem. Weather forecasters always use a probability internally, such as "there is a 60 percent chance of rain on Monday next week," even if they don't say so on television. The reason is that even with super-fast computers it's still impossible to compute the exact answer, as explained in a chapter in the book. If you publish your findings, make sure to include this probability, because people have died from misinterpreting predictions. In one case, a weather station predicted that a river would rise by x +- y meters. Those who used the prediction thought the river could rise by at most x meters, and it turned out the river rose by x + y meters, flooding the area.    
  • Rely more on observation than theory. All models are wrong because all models are simplifications of the universe. One bad simplification is overfitting your data, which is the act of mistaking noise for signal. But some models are useful as long as you test them in the real world rather than in the comfort of a statistical model. The goal of the predictive model is to capture as much signal as possible and as little noise as possible.  
  • Use the aggregate prediction. Quite a lot of evidence suggests that the aggregate prediction is often 15 to 20 percent more accurate than an individual prediction made by one person. But remember that this isn't always true: an individual prediction can be better, and the aggregate prediction might still be bad because whatever you're trying to predict may simply be unpredictable. 
  • Combine computer predictions with your own intelligence. A visual inspection of a graphic showing the interaction between two variables is often a quicker and more reliable way to detect outliers in your data than a statistical test.
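To make the Bayes's theorem point above concrete, here is a minimal sketch with hypothetical numbers: given a prior probability of rain and a forecaster's historical hit rate and false-alarm rate, it computes the updated probability that it will actually rain when the forecast says rain.

```python
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Return P(hypothesis | evidence) via Bayes's theorem."""
    numerator = prior * p_evidence_given_true
    denominator = numerator + (1 - prior) * p_evidence_given_false
    return numerator / denominator

# Hypothetical numbers: historically it rains on 30% of days (the prior).
# The forecaster predicts rain on 80% of rainy days, and falsely
# predicts rain on 10% of dry days.
p_rain = bayes_update(0.30, 0.80, 0.10)
print(f"P(rain | forecast says rain) = {p_rain:.2f}")  # prints 0.77
```

The forecast raises the probability of rain from 30 percent to about 77 percent, not to certainty. Expressing the prediction this way, rather than as a flat "it will rain," is exactly what the river example above shows the value of.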

Does this sound reasonable? So why do we see so many experts who are not really experts? According to the book, the more interviews an expert had done with the press, the worse his or her predictions tended to be. The reason is that the real experts, who are aware that they can't predict everything, tend to be boring on television. It's much more entertaining to invite someone who says "the stock market will increase 40 percent this year" than someone who says "I don't know, because it's impossible to predict the stock market."
So we all should learn how to make better predictions and learn which predictions we should trust. If we can, we might avoid another Credit Crisis, another 9/11, another Pearl Harbor, another Fukushima, and unnecessary deaths from another Hurricane Katrina.

June 5, 2015

Video: Marketing for Indies - PR, Social Media, and Game Trailers

I found a video on YouTube called "Marketing for Indies - PR, Social Media, and Game Trailers." It is rather long, but very interesting.


Key points
  • He sent out around 1,500 requests for people to cover the game Albino Lullaby on YouTube (and other services like Twitch), in articles (including blogs), or through podcasts. In general, it was the smaller YouTube accounts that responded. He got no response at all when asking people to write articles about the game, and at one point he almost gave up. But he kept sending requests to around four big (popular) outlets each week, and eventually one of them wrote an article about the game. Other big outlets then followed, because they saw that article and became more interested in covering the game. He argued that you need both popular and less popular accounts, because the popular accounts will Google the game and find the articles and videos by the smaller accounts. If they hadn't found those, they would have ignored the game. 
  • You should know what the rules are, but also be ready to break them. 
  • Getting noticed is really hard.
  • You will need a press kit. It should include:
    • Description: Make sure you can explain your game in 1 sentence, 1 paragraph, and 1 article (each should describe the entire game). Test them on real people and notice how they react
    • Press Releases: Anything significant can become a press release. Many people who write articles will copy and paste the press release, so write a good one - but most won't care about press releases; they want to play the game itself
    • Trailer
    • Screenshots
    • Demo
    • Links 
  • Use a spreadsheet to keep track of the requests.
  • Interact with popular accounts on Twitter, so they will recognize you when you reach out to them. But don't spam - he was once kicked off Reddit for spamming. 
  • Above everything, you have to make an amazing game.
  • Be open with what you do, have a blog and stream the development process (some stream their entire day). People love to read behind-the-scenes and stories about the little guy vs the evil big company.
  • Ask yourself: What can I do to make it easier for someone to write an article about my game?
  • Marketing of the game begins before the development begins. Start a Twitter account and a blog today and start getting recognized. 
  • When on Twitter, use the hashtags #indiegamedev, #videogame and use the website RiteTag to find other hashtags that you might use. 
  • Be 100 percent data driven - opinions don't matter!
  • 99 percent of the players will not play your game, but they will watch your trailer, so make sure the quality of the trailer is 100 percent.

Why you should be pronoid and not paranoid

This is an excerpt from the book The Sell, written by Fredrik Eklund, who is a top New York City real estate broker.
I'm going to teach you a word: pronoia. It's the opposite of paranoia. Paranoia is when you think the world is against you in some shape or form. Pronoia is the happy opposite: having the sense that there is a conspiracy that exists to help you. I just decided that's how it is, because I said so. I run my life on pronoia, and I want you to start, too. Right now. Did you know there's actually a great conspiracy that exists to help you? It's called the universe. Step a little closer. Let me whisper it in your ear. I'm telling you that the world is set up to secretly benefit you!
Tell yourself that the person in front of you in the express lane, who is suddenly backing up the line with her credit card that won't work, is giving you a minute to flip through a tabloid and get a laugh at some of the preposterous stories and pictures. See how pronoia can make that frustrating moment a gift?