How to make better predictions and decisions


I've read a book called The Signal and the Noise: Why So Many Predictions Fail - but Some Don't by Nate Silver. The basic idea behind the book is that ever since Johannes Gutenberg invented the printing press, the amount of information in the world has kept growing, and all that noise makes it more and more difficult to make good predictions. The Internet has only added to the information overload. Still, a lot of people keep making what they think are good predictions, even though they shouldn't be making predictions at all (*cough* economists), because it is simply impossible to predict everything.
What most people do when they try to predict something from the available information, like a stock price, is to pick out the parts they like while ignoring the parts they don't like. If a person is trying to decide whether to keep a position in, say, Tesla Motors, he or she will read everything that confirms it is a good idea to keep that position and hang out with people who share the same view, while ignoring any signs that Tesla Motors' stock might be a bubble.
You might argue that only amateurs pick out the parts they like while ignoring the parts they don't like. But if you can't remember the 2008 stock market crash, The Signal and the Noise includes an entire chapter describing it. It turned out that the people at the rating agencies, whose job it was to measure risk in financial markets, also picked out the parts they liked while ignoring the signs that there was a housing bubble. For example, the phrase "housing bubble" appeared in just eight news accounts in 2001, but jumped to 3,447 references by 2005. And yet the rating agencies say they missed it.


Another example is the Japanese earthquake and the tsunami that followed in 2011. The book includes an entire chapter on predicting earthquakes. It turns out that it is impossible to predict when an earthquake will happen. What you can predict is that an earthquake will happen in a region sooner or later, and roughly how large it might be. The Fukushima nuclear reactor had been designed to handle a magnitude 8.6 earthquake, in part because seismologists had concluded that anything larger was impossible. Then came the 9.1 earthquake.
The Credit Crisis of 2008 and the 2011 Japanese earthquake are not the only examples in the book:
"It didn't matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board."
The reason we humans are bad at making predictions is that we are human. A newborn baby can recognize the basic pattern of a face because evolution has taught it how. The problem is that these evolutionary instincts sometimes lead us to see patterns where there are none. We are constantly finding patterns in random noise.

So how can you improve your predictions?
Nate Silver argues that we can never make perfectly objective predictions. They will always be tainted by our subjective point of view. But we can at least try to improve the way we make predictions. This is how you can do it:
  • Don't always listen to experts. You can listen to some experts, but make sure the expert can actually predict the thing he or she is trying to predict. The octopus that predicted the World Cup is not an expert, and no one can predict an earthquake. What you can predict is the weather, but the public doesn't trust weather forecasts, and that can sometimes be dangerous. Several people died during Hurricane Katrina because they didn't trust the forecasters who said a hurricane was on its way. Another finding from the book is that weather forecasters on television tend to overestimate the probability of rain, even when the computer models predict sunny weather, because viewers get upset if the forecast says sun and it rains anyway.
  • Incorporate ideas from different disciplines, regardless of where they come from on the political spectrum.
  • Find a new approach, or pursue multiple approaches at the same time, if you aren't sure the original one is working. Making a lot of predictions is also the only way to get better at it.
  • Be willing to acknowledge mistakes in your predictions and accept the blame for them. A good prediction should change when you find new information. But wild gyrations in your prediction from day to day are a bad sign: they usually mean you have a bad model, or that whatever you are predicting isn't predictable.
  • See the universe as complicated, perhaps to the point where many fundamental problems are inherently unpredictable. If you make a prediction and it goes badly, you can never really be certain whether it was your fault, whether your model was flawed, or whether you were just unlucky.
  • Try to express your prediction as a probability by using Bayes's theorem (a small worked example follows after this list). Weather forecasters always work with probabilities, such as "there is a 60 percent chance of rain on Monday next week," even if they don't put it that way on television. The reason is that even with super-fast computers it is still impossible to compute the exact answer, as one chapter in the book explains. If you publish your findings, make sure to include this probability, because people have died from misinterpreting it. A weather station once predicted that a river would rise by x +- y meters. Those who used the prediction thought the river would rise by at most x meters, and when it rose by x+y meters it flooded the area.
  • Rely more on observation than theory. All models are wrong, because all models are simplifications of the universe. One bad simplification is overfitting your data, which is the act of mistaking noise for signal (see the overfitting sketch after this list). But some models are useful, as long as you test them in the real world rather than in the comfort of a statistical model. The goal of a predictive model is to capture as much signal as possible and as little noise as possible.
  • Use the aggregate prediction. Quite a lot of evidence suggests that an aggregate prediction is often 15 to 20 percent more accurate than a prediction made by one person (a small simulation of why follows after this list). But remember that this is not always true: an individual prediction can be better, and the aggregate prediction can still be bad if whatever you are trying to predict simply isn't predictable.
  • Combine computer predictions with your own intelligence. A visual inspection of a graphic showing the interaction between two variables is often a quicker and more reliable way to detect outliers in your data than a statistical test.
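
To make the Bayes's theorem advice concrete, here is a minimal sketch in Python. All the numbers below (the prior chance of rain, the weather model's hit rate and false-alarm rate) are made up for illustration; they are not from the book.

```python
# A minimal Bayes's theorem sketch with made-up numbers: how much should a
# prior belief "it will rain on Monday" change after a weather model
# predicts rain?

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes's theorem."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Illustrative assumptions (not from the book):
prior_rain = 0.30                   # baseline chance of rain on any Monday
p_model_says_rain_if_rain = 0.80    # the model usually catches real rain
p_model_says_rain_if_dry = 0.20     # but it also raises some false alarms

posterior = bayes_update(prior_rain,
                         p_model_says_rain_if_rain,
                         p_model_says_rain_if_dry)
print(f"Updated probability of rain: {posterior:.0%}")  # about 63%
```

The point is not the particular numbers but the habit: state your prior, state how strong the new evidence is, and let the theorem tell you how much to update.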
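To illustrate what "mistaking noise for signal" looks like in practice, here is a toy overfitting sketch of my own (not code from the book). It fits a simple straight line and a very flexible polynomial to the same noisy data, then checks both on points they have not seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# The underlying "signal" is a straight line; the "noise" is random scatter.
x_train = np.linspace(0, 10, 15)
y_train = 2 * x_train + 1 + rng.normal(0, 3, size=x_train.size)
x_test = np.linspace(0, 10, 200)
y_test = 2 * x_test + 1 + rng.normal(0, 3, size=x_test.size)

for degree in (1, 12):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train error {train_err:7.1f}, "
          f"test error {test_err:7.1f}")

# The flexible degree-12 fit hugs the training points (low training error)
# but typically does much worse on data it hasn't seen: it has memorised
# the noise instead of the signal.
```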
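And here is a small simulation of why averaging forecasts helps. It assumes the forecasters are unbiased and their errors are independent, which is far more generous than reality; in the real world errors are correlated, which is why the book's figure is a modest 15 to 20 percent rather than the dramatic gain you see below.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 100.0                      # the quantity everyone is trying to predict
n_forecasters, n_rounds = 20, 10_000

# Toy assumption: every forecaster is unbiased but noisy, and their errors
# are independent of each other.
forecasts = truth + rng.normal(0, 10, size=(n_rounds, n_forecasters))

individual_error = np.mean(np.abs(forecasts[:, 0] - truth))
aggregate_error = np.mean(np.abs(forecasts.mean(axis=1) - truth))

print(f"average error of one forecaster: {individual_error:.1f}")
print(f"average error of the group mean: {aggregate_error:.1f}")

# Averaging independent errors cancels much of the noise, which is why the
# aggregate usually beats a typical individual. If all forecasters share the
# same blind spot, or the thing simply isn't predictable, averaging won't
# save you.
```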

This sounds reasonable, so why do we see so many experts who are not really experts? According to the book, the more interviews an expert had given to the press, the worse his or her predictions tended to be. The reason is that the real experts, who are aware that they can't predict everything, tend to be boring on television. It is much more entertaining to invite someone who says "the stock market will increase 40 percent this year" than someone who says "I don't know, because it is impossible to predict the stock market."
So we should all learn how to make better predictions, and learn which predictions to trust. If we can, we might avoid another Credit Crisis, another 9/11, another Pearl Harbor, another Fukushima, and unnecessary deaths from another Hurricane Katrina.
