This article is the ninth of this weekly series called Latticework of Mental Models, which will be authored by my friend and partner in writing the Value Investing Almanack, Anshul Khare. Anshul will write on various mental models – big ideas from various disciplines – which can help you think more rationally while analyzing businesses and making your stock investment decisions.
Outside of the closed glass chambers in the corporate world, there is another very unusual place where grand scheming about workplace strategies (read, office politics) happens regularly. Don’t worry, there is nothing hush-hush about this place and it’s not at all surrounded by thick soundproof walls.
Wondering what I am talking about? Let me give you another hint.
It’s the designated open area in every large company, where people go out in fresh air to fill their lungs with freshly brewed smoke. You guessed it right. I am talking about the smoking areas.
Mind you, it’s not just a place trafficked by smoke-billowing nicotine guzzlers; you’ll often find those hapless passive smokers too, who don’t realize that their lungs are going to collapse sooner than those of their active-smoker buddies.
Before you dismiss my lame attempt at sarcasm, let me confess that I used to be one such ignorant passive smoker, who would stand next to a friendly smoke machine and take pride in my ability to resist the temptation of those white sticks.
Smokers! Please forgive me if I have offended you. But trust me, it was intentional! 🙂
Anyways, my sole consolation on those suffocating trips was some of the delusional (and absolutely hilarious) arguments I would get to hear from my nicotine-addicted pals to justify their habit. You know what my favourite argument was? This one –
My grandfather was a lifelong chain smoker and still lived to the ripe age of 90. So don’t tell me that smoking kills.
There is just one problem with this logic – it is dead wrong!
I don’t deny the authenticity of the claim but I can’t help but imagine how Charlie Munger would react to this line of thought. He would probably snort with laughter, as if he wanted to say – “Boy, you’re suffering from I-know-a-man syndrome.” A classic mistake of taking a specific instance and blindly generalizing it.
Cognitive dissonance, a mental model we discussed in the April issue of Value Investing Almanack, explains the self-deceptive behaviour of tobacco addicts. But irrational behaviour is rarely the result of a single behavioural bias. We have something else at play here.
What we’re going to explore today is a bias which was first identified by Amos Tversky and Daniel Kahneman. It’s called the Law of Small Numbers, also known as ‘insensitivity to sample size’.
In fact, this ‘I-know-a-man-syndrome’ is an extreme case where people tend to generalize from small amounts of data. Most people, including many experts (involved in serious empirical research), don’t appreciate how research based upon small numbers or small populations can often generate extreme observations. As a result, people have a tendency to believe that a relatively small number of observations will closely reflect the general population.
Consider my argument – “I know a man who jumped from the 5th floor and survived the fall, which implies that it’s safe for anyone to repeat the act.”
“No it’s not!” you might say, intuitively dismissing the argument. “It’s an aberration, an exception that can’t be relied on.”
The point I am trying to make here is that sample size matters when you’re trying to establish any pattern. Jana Vembunarayanan has written a very nice post explaining the idea using elementary mathematics.
Fooled by Sample Size
Next time you watch a commercial on TV, pay attention and see if you find any of the following claims – “Seven out of ten housewives prefer this washing powder.” Or, for that matter, “Four out of five dentists recommend this toothpaste.”
Rings a bell? And there is no end to such compelling claims. It’s a common trick used by marketing people to win your trust.
Truth be told, these claims don’t tell us anything unless we know how many dentists or housewives were surveyed. Maybe they surveyed just 10 of each; an observation from such a tiny group can’t be extrapolated to all dentists or housewives.
Anytime you see such statistics thrown at you, don’t forget to question the sample size. With a small sample, the odds are pretty high that the conclusions can’t be trusted.
Peter Bevelin, in his must-read book Seeking Wisdom, explains –
A small sample size has no predictive value. The smaller the sample is, the more statistical fluctuations and the more likely it is that we find chance events. We need a representative comparison group, large enough sample size, and long enough periods of time.
Small samples can cause us to believe a risk is lower or higher than reality. Why? A small sample increases the chance that we won’t find a particular relationship where it exists or find one where it doesn’t exist.
Charlie Munger gives an example of the importance of getting representative data – even if it’s approximate –
The water system of California was designed looking at a fairly short period of weather history. If they’d been willing to take less perfect records and look an extra hundred years back, they’d have seen that they weren’t designing it right to handle drought conditions which were entirely likely.
Many a time, people fall for confirmation bias and try to back-fit the data to their theory or model. If something doesn’t fit their hypothesis, they discard it and cherry-pick the smaller set of observations that conform to their assumptions. Munger adds –
You see that again and again – that people have some information they can count well and they have other information much harder to count. So they make the decision based only on what they can count well. And they ignore much more important information because its quality in terms of numeracy is less – even though it’s very important in terms of reaching the right cognitive result. All I can tell you is that around Wesco and Berkshire, we try not to be like that. We have Lord Keynes’ attitude, which Warren quotes all the time: “We’d rather be roughly right than precisely wrong.” In other words, if something is terribly important, we’ll guess at it rather than just make our judgment based on what happens to be easily countable.
In Thinking, Fast and Slow, Daniel Kahneman writes –
Extreme outcomes (both high and low) are more likely to be found in small than in large samples.
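Kahneman’s point is easy to verify for yourself. Here’s a minimal simulation (my own illustration, not from his book): we flip a perfectly fair coin in small and large batches, and count how often a batch looks “extreme” (fewer than 30% or more than 70% heads). The same coin produces wildly lopsided results far more often in small batches.

```python
# Small samples of a fair coin look "extreme" far more often than large ones,
# even though the underlying process never changes.
import random

random.seed(42)

def extreme_rate(sample_size, trials=10_000):
    """Fraction of samples whose heads-proportion is 'extreme' (<30% or >70%)."""
    extremes = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        p = heads / sample_size
        if p < 0.3 or p > 0.7:
            extremes += 1
    return extremes / trials

print(f"Samples of   5 flips: {extreme_rate(5):.1%} look extreme")
print(f"Samples of 500 flips: {extreme_rate(500):.1%} look extreme")
```

With batches of 5 flips, roughly a third of the samples look extreme; with batches of 500, virtually none do. That gap is the whole fallacy in one number.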
So why do we fall for this bias? One reason is our love for stories. Remember the Storytelling mental model we discussed a few weeks back? Instead of questioning the accuracy and uncertainty associated with the sample, we focus on the story those numbers are telling us.
Kahneman further explains –
The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
Luck and Small Numbers
It’s important to understand the context or circumstance where this fallacy becomes more pronounced. Let me use an example to explain my point.
If you were observing Michael Phelps, the swimming legend, compete against a few amateur swimmers, even a handful of observations would be enough to generalize about the future outcomes of such contests, i.e., Michael Phelps will trounce each one of them every single time. Here you don’t need to worry about the law of small numbers. Why?
The answer lies in understanding the role of skill and luck in any activity.
The magnitude of the fallacy grows as the luck-to-skill ratio rises. Be it sports or investing, a lot in our lives is governed by luck. Certain games (cricket or poker) have a higher element of luck, while others (chess or swimming) are almost completely devoid of it.
So an amateur player can win a few games of poker just because of luck, but over a longer period of time, i.e., over a large number of hands, the luck evens out and skill prevails.
A corollary to our law of small numbers would be this – over short periods of time, luck is more important than skill. The more luck contributes to the outcome, the larger the sample you’ll need to distinguish skill from pure chance.
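To make the corollary concrete, here’s a toy sketch (my own illustration, with made-up win rates): a modestly skilled poker player wins each hand 55% of the time, while a pure-luck player wins 50%. Over a short run, the skilled player’s record is often indistinguishable from luck; over a long run, the edge shows through.

```python
# Over few hands, a 55% player and a 50% player look alike; over many hands,
# the skilled player ends up ahead almost every time.
import random

random.seed(1)

def winning_record_rate(win_prob, num_hands, trials=5_000):
    """Fraction of simulated runs the player finishes with more wins than losses."""
    ahead = 0
    for _ in range(trials):
        wins = sum(random.random() < win_prob for _ in range(num_hands))
        if wins > num_hands - wins:
            ahead += 1
    return ahead / trials

for hands in (10, 100, 1000):
    skilled = winning_record_rate(0.55, hands)
    lucky = winning_record_rate(0.50, hands)
    print(f"{hands:5d} hands: skilled ahead in {skilled:.0%} of runs, pure luck in {lucky:.0%}")
```

Over 10 hands the two players are nearly indistinguishable; over 1,000 hands the skilled player finishes ahead almost always, while the lucky one is still flipping a coin.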
For that matter, how do you determine whether an activity is ruled by luck or by skill?
There’s a simple and elegant test proposed by Michael Mauboussin in his book, The Success Equation, which is to ask whether you can lose on purpose. If you can’t lose on purpose, or if it’s really hard, luck likely dominates that activity. If it’s easy to lose on purpose, skill is more important.
This has huge implications not only in sports but in investing too. David Einhorn, billionaire hedge fund manager and author of Fooling Some of the People All of the Time, explains –
People ask me “Is poker luck?” and “Is investing luck?”
The answer is, not at all. But sample sizes matter. On any given day a good investor or a good poker player can lose money. Any stock investment can turn out to be a loser no matter how large the edge appears. Same for a poker hand. One poker tournament isn’t very different from a coin-flipping contest and neither is six months of investment results.
On that basis luck plays a role. But over time – over thousands of hands against a variety of players and over hundreds of investments in a variety of market environments – skill wins out.
Law of Small Numbers in Business and Investing
Business and investing are fields where this fallacy is rampant and that’s because luck plays a significant role, especially in the short term.
As a result, unscrupulous finance professionals (salesmen) mis-sell their useless financial products by decorating them with flawed performance statistics. Similarly, shrewd managements use the same tricks to hide their poor performance.
If a mutual fund manager has had three above-average years in a row, many people will conclude that the fund manager is better than average, even though this conclusion does not follow from such a small amount of data. A prudent way to assess the real performance of a fund manager is to observe his returns and actions over a longer period of time.
Recently, my relationship manager-cum-stockbroker offered me a new fund scheme. He claimed, “Our fund has generated a 60% CAGR in the last one year.” It sounded not only misleading but outright hilarious.
CAGR stands for compound annual growth rate, and the idea of compounding isn’t very useful when you’re talking about a period as short as one year. I am sure he’ll be peddling a different “one-year-60-percent-CAGR” product to another gullible investor next year.
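As a back-of-envelope check, here’s the CAGR arithmetic (a small sketch of my own, with made-up numbers): over one year, the formula collapses to the plain annual return, so a “60% one-year CAGR” is just one year’s return dressed up in compounding language.

```python
# CAGR = (ending / beginning) ** (1 / years) - 1.
# For years = 1 this is simply ending/beginning - 1, i.e., the raw return.
def cagr(beginning, ending, years):
    """Compound annual growth rate over the given number of years."""
    return (ending / beginning) ** (1 / years) - 1

# One lucky year: a 60% gain. The "CAGR" is identical to the raw return.
print(f"1-year 'CAGR': {cagr(100, 160, 1):.0%}")

# The same fund doubling over ten years compounds at only about 7% a year.
print(f"10-year CAGR : {cagr(100, 200, 10):.1%}")
```

The ten-year figure is the one worth asking for: a fund that doubles in a decade has compounded at roughly 7.2% a year, a far humbler number than any single hot year suggests.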
Following the same line of inquiry, it’s obvious that while evaluating businesses you should look at long-term performance numbers, preferably for the past ten years. Doing this not only gives you an idea about the resilience of the business during downturns, but also shrinks the possibility of extreme numbers (a very poor quarter here, an extraordinary quarter there) skewing your decision-making process.
I am guessing that this law is equally applicable to human relationships. You can’t judge somebody’s character based on a couple of interactions. Character is revealed by observing a person’s behaviour under a diverse set of situations and over extended periods of time.
So the central insight is this: in activities where luck plays some role, relying on a small set of observations can lead you to faulty conclusions. Many decision makers do not understand this fallacy and are often fooled by the high degree of randomness inherent in small numbers.
The solution is to develop a knack for placing an activity on the skill-luck continuum and to maintain a healthy skepticism towards patterns observed in small samples.
I wish I could go back in time and tell my smoking buddy that his grandfather represented an extremely small sample, and perhaps luck was heavily on his side. Who knows, he might have lived past 100 had he stopped smoking. I am just speculating.
I am guessing some of the readers may already be well-versed with the ideas presented here and might have developed deeper insights on the topic.
I was recently exposed to an idea called “public thinking”. Simply put, it’s a tribe of like-minded people coming together and participating in an enriching discussion. Safal Niveshak is one such platform that can be used for “public thinking” and can incubate refreshing insights on a subject.
So dear tribe members, I invite you to participate in this latticework “public thinking” forum by sharing your insights and experiences in the Comments section of this post.
As always, let me confess that my primary purpose in compiling the latticework series is to deepen my own thinking about these mental models and writing about them has accelerated my learning. I hope that you’re also deriving some value out of this experiment.
Once again, the best way to learn an idea is to teach it, so grab hold of one of your buddies (if nobody wants to listen to you, sit in front of the mirror and assume you’re talking to your sibling) and share your knowledge.
Take care and keep learning.
Disclosure: Safal Niveshak participates in the Amazon Associates Program, which simply means that if you purchase a book on Amazon from a link on this page, we receive a small commission. The book does not cost you any extra. We give away 100% of the commission for the betterment of the under-privileged.