This article is the ninth of this weekly series called Latticework of Mental Models, which will be authored by my friend and partner in writing the Value Investing Almanack, Anshul Khare. Anshul will write on various mental models – big ideas from various disciplines – which can help you think more rationally while analyzing businesses and making your stock investment decisions.
Outside of the closed glass chambers in the corporate world, there is another very unusual place where grand scheming about workplace strategies (read, office politics) happens regularly. Don’t worry, there is nothing hush-hush about this place and it’s not at all surrounded by thick soundproof walls.
Wondering what I am talking about? Let me give you another hint.
It’s the designated open area in every large company, where people go out in fresh air to fill their lungs with freshly brewed smoke. You guessed it right. I am talking about the smoking areas.
Mind you, it’s not just a place trafficked by smoke-billowing nicotine guzzlers; you’ll often find those hapless passive smokers too, who don’t realize that their lungs are going to collapse even sooner than their active-smoker buddies’.
Before you dismiss my lame attempt at sarcasm, let me confess that I used to be one such ignorant passive smoker who would stand next to a friendly smoke machine and take pride in my ability to resist the temptation of those white sticks.
Smokers! Please forgive me if I have offended you. But trust me, it was intentional! 🙂
Anyway, my sole consolation on those suffocating trips was some of the delusional (and absolutely hilarious) arguments that I would get to hear from my nicotine-addicted pals to justify their habit. You know what my favourite argument was? This one –
My grandfather was a lifelong chain smoker and still lived to the ripe old age of 90. So don’t tell me that smoking kills.
There is just one problem with this logic – it is dead wrong!
I don’t deny the authenticity of the claim but I can’t help but imagine how Charlie Munger would react to this line of thought. He would probably snort with laughter, as if he wanted to say – “Boy, you’re suffering from I-know-a-man syndrome.” A classic mistake of taking a specific instance and blindly generalizing it.
Cognitive dissonance, a mental model we discussed in the April issue of Value Investing Almanack, explains the self-deceptive behaviour of tobacco addicts. But irrational behaviour is rarely the result of a single behavioural bias. We have something else at play here.
What we’re going to explore today is a bias which was first identified by Amos Tversky and Daniel Kahneman. It’s called the Law of Small Numbers, also known as ‘insensitivity to sample size’.
In fact, this ‘I-know-a-man syndrome’ is an extreme case of the tendency to generalize from small amounts of data. Most people, including many experts (involved in serious empirical research), don’t appreciate how research based on small numbers or small populations can often generate extreme observations. As a result, people tend to believe that a relatively small number of observations will closely reflect the general population.
Consider this argument – “I know a man who jumped from the 5th floor and survived the fall, which implies that it’s safe for anyone to repeat the act.”
“No it’s not!” you might say, intuitively dismissing the argument. “It’s an aberration, an exception that can’t be relied on.”
The point I am trying to make here is that sample size matters when you’re trying to establish a pattern. Jana Vembunarayanan has written a very nice post explaining the idea using elementary mathematics.
Fooled by Sample Size
Next time you watch a commercial on TV, pay attention and see if you spot claims like these – “Seven out of ten housewives prefer this washing powder.” Or, “Four out of five dentists recommend this toothpaste.”
Ring a bell? There is no end to such compelling claims. It’s a common trick used by marketing people to win your trust.
Truth be told, these claims don’t tell us anything unless we know how many dentists or housewives were surveyed. Maybe they surveyed just ten of them – an observation that can’t be extrapolated to all dentists or housewives.
Anytime you see such statistics thrown at you, don’t forget to question the sample size. With a small sample, the odds are pretty high that the conclusions can’t be trusted.
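To put a rough number on this intuition, here is a small Python sketch. It uses the standard textbook approximation for the margin of error of a sample proportion; the survey figures are made up for illustration, not taken from any real ad campaign.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# "Seven out of ten housewives prefer this washing powder."
# If only 10 people were surveyed, the claim is almost meaningless:
small = margin_of_error(0.7, 10)      # roughly +/- 28 percentage points
# With 1,000 respondents, the same 70% becomes a far tighter estimate:
large = margin_of_error(0.7, 1000)    # roughly +/- 3 percentage points

print(f"n=10:   70% +/- {small:.0%}")
print(f"n=1000: 70% +/- {large:.0%}")
```

In other words, the same headline percentage can hide anything from a near coin-flip to a genuinely strong preference – the sample size is what separates the two.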
Peter Bevelin, in his must-read book Seeking Wisdom, explains –
A small sample size has no predictive value. The smaller the sample is, the more statistical fluctuations and the more likely it is that we find chance events. We need a representative comparison group, large enough sample size, and long enough periods of time.
Small samples can cause us to believe a risk is lower or higher than reality. Why? A small sample increases the chance that we won’t find a particular relationship where it exists or find one where it doesn’t exist.
Charlie Munger gives an example of the importance of getting representative data – even if it’s approximate –
The water system of California was designed looking at a fairly short period of weather history. If they’d been willing to take less perfect records and look an extra hundred years back, they’d have seen that they weren’t designing it right to handle drought conditions which were entirely likely.
Many a time, people fall for confirmation bias and try to back-fit the data to their theory or model. If something doesn’t fit their hypothesis, they discard it and cherry-pick the smaller set of observations that conform to their assumptions. Munger adds –
You see that again and again – that people have some information they can count well and they have other information much harder to count. So they make the decision based only on what they can count well. And they ignore much more important information because its quality in terms of numeracy is less – even though it’s very important in terms of reaching the right cognitive result. All I can tell you is that around Wesco and Berkshire, we try not to be like that. We have Lord Keynes’ attitude, which Warren quotes all the time: “We’d rather be roughly right than precisely wrong.” In other words, if something is terribly important, we’ll guess at it rather than just make our judgment based on what happens to be easily countable.
In Thinking, Fast and Slow, Daniel Kahneman writes –
Extreme outcomes (both high and low) are more likely to be found in small than in large samples.
So why do we fall for this bias? One reason is our love for stories. Remember the Storytelling mental model we discussed a few weeks back? Instead of questioning the accuracy and uncertainty associated with the sample, we focus on the story those numbers are telling us.
Kahneman further explains –
The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
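Kahneman’s claim about extreme outcomes is easy to check with a toy simulation. The sketch below (plain Python, illustrative thresholds only) flips a fair coin in samples of different sizes and counts how often a sample looks “extreme” – 80% heads or more:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def extreme_rate(sample_size, trials=10_000):
    """Fraction of simulated samples whose share of heads is 'extreme' (>= 80%)."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= 0.8:
            extreme += 1
    return extreme / trials

# Small samples throw up "extreme" results all the time;
# large samples from the very same coin almost never do.
print(extreme_rate(5))    # close to the exact value of 6/32 ≈ 0.19
print(extreme_rate(100))  # essentially zero
```

The coin never changes – only the sample size does. That is exactly the “I-know-a-man” grandfather: one chain-smoking 90-year-old is a sample of five flips, not a hundred.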
Luck and Small Numbers
It’s important to understand the context or circumstance where this fallacy becomes more pronounced. Let me use an example to explain my point.
If you were watching Michael Phelps, the swimming legend, compete against a few amateur swimmers, even a few observations would be enough to make a generalization about future outcomes of such contests, i.e., Michael Phelps will trounce each one of them every single time. Here you don’t need to worry about the law of small numbers. Why?
The answer lies in understanding the role of skill and luck in any activity.
The magnitude of the fallacy grows as the luck-to-skill ratio rises. Be it sports or investing, a lot in our lives is governed by luck. Certain games (cricket or poker) have a higher element of luck, and some are almost completely devoid of it (chess or swimming).
So an amateur player can win a game of poker a few times just because of luck, but over a longer period of time i.e., over a large number of hands played, the luck evens out and skill prevails.
A corollary to our law of small numbers would be – over short periods of time, luck is more important than skill. The more luck contributes to the outcome, the larger the sample you’ll need to distinguish between someone’s skill and pure chance.
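You can see this corollary in action with another toy simulation. Here a hypothetical player wins each game with 55% probability – a real but modest edge – and we check how often that skill actually shows up as a winning record over short and long careers (the numbers are illustrative, not a model of any actual game):

```python
import random

random.seed(7)  # fixed seed for reproducibility

def winning_record_rate(win_prob, n_games, trials=2000):
    """Fraction of simulated careers in which the player wins more than half the games."""
    winning = 0
    for _ in range(trials):
        wins = sum(random.random() < win_prob for _ in range(n_games))
        if wins > n_games / 2:
            winning += 1
    return winning / trials

# Over 10 games, a 55% player ends up with a winning record only about
# half the time -- his edge is invisible in a small sample.
print(winning_record_rate(0.55, 10))
# Over 1,000 games, the same modest edge shows up almost every time.
print(winning_record_rate(0.55, 1000))
```

The player’s skill is identical in both runs; only the sample size changes. Luck dominates the short run, skill dominates the long run.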
For that matter, how do you determine if an activity is ruled by luck or pure skill?
There’s a simple and elegant test proposed by Michael Mauboussin in his book, The Success Equation, which is to ask whether you can lose on purpose. If you can’t lose on purpose, or if it’s really hard, luck likely dominates that activity. If it’s easy to lose on purpose, skill is more important.
This has huge implications not only in sports but in investing too. David Einhorn, billionaire hedge fund manager and author of Fooling Some of the People All of the Time, explains –
People ask me “Is poker luck?” and “Is investing luck?”
The answer is, not at all. But sample sizes matter. On any given day a good investor or a good poker player can lose money. Any stock investment can turn out to be a loser no matter how large the edge appears. Same for a poker hand. One poker tournament isn’t very different from a coin-flipping contest and neither is six months of investment results.
On that basis luck plays a role. But over time – over thousands of hands against a variety of players and over hundreds of investments in a variety of market environments – skill wins out.
Law of Small Numbers in Business and Investing
Business and investing are fields where this fallacy is rampant and that’s because luck plays a significant role, especially in the short term.
As a result, unscrupulous finance professionals (salesmen) mis-sell useless financial products by decorating them with flawed performance statistics. Similarly, shrewd managements use the same tricks to hide their poor performance.
If a mutual fund manager has had three above-average years in a row, many people will conclude that the fund manager is better than average, even though this conclusion does not follow from such a small amount of data. A prudent way to assess the real performance of a fund manager is to observe his returns and actions over a longer period of time.
Recently, my relationship manager-cum-stockbroker offered me a new fund scheme. He claimed, “Our fund has generated a 60% CAGR in the last one year.” It sounded not only misleading but outright hilarious.
CAGR stands for compounded annual growth rate, and the idea of compounding isn’t very meaningful over a period as short as one year – over one year, the CAGR is simply the plain return. I am sure he will be peddling a different “one-year-60-percent-CAGR” product to another gullible investor next year.
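For the curious, here is the CAGR formula in a few lines of Python (the fund values are made up for illustration). Notice how a splashy year gets diluted once you measure over a longer stretch:

```python
def cagr(start_value, end_value, years):
    """Compounded annual growth rate over a holding period."""
    return (end_value / start_value) ** (1 / years) - 1

# Over one year, "CAGR" is just the plain return -- nothing has compounded yet:
print(f"{cagr(100, 160, 1):.0%}")  # 60%

# The same splashy year followed by four mediocre ones (5% each) looks far
# less impressive once measured over the full five-year stretch:
end_value = 160 * 1.05 ** 4
print(f"{cagr(100, end_value, 5):.1%}")  # a modest annual rate
```

This is why a single year’s “CAGR” is just the law of small numbers wearing a suit.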
Following the same line of inquiry, it’s obvious that while evaluating businesses, you should look at long-term performance numbers – preferably the past ten years. Doing this not only gives you an idea of the resilience of the business during downturns, but also shrinks the possibility of extreme numbers (very poor performance in one quarter or extraordinary performance in another) skewing your decision-making process.
I am guessing that this law is equally applicable to human relationships. You can’t judge somebody’s character based on a couple of interactions. Character is revealed by observing a person’s behaviour under a diverse set of situations and over extended periods of time.
So the central insight is that in activities where luck plays some role, relying on a small set of observations can lead you to faulty conclusions. Many decision makers do not understand this fallacy and are often fooled by the high degree of randomness inherent in small numbers.
The solution is to develop a knack for placing an activity on the skill-luck continuum and to maintain a healthy skepticism of patterns observed in small samples.
I wish I could go back in time and tell my smoke buddy that his grandfather represented an extremely small sample size, and perhaps luck was heavily on his side. Who knows, he might have lived past 100 had he stopped smoking. I am just speculating.
I am guessing some of the readers may already be well-versed with the ideas presented here and might have developed deeper insights on the topic.
I was recently exposed to an idea called “public thinking”. Simply put, it’s a tribe of like-minded people coming together and participating in an enriching discussion. Safal Niveshak is one such platform which can be used for “public thinking” and can incubate refreshing insights on a subject.
So dear tribe members, I invite you to participate in this latticework “public thinking” forum by sharing your insights and experiences in the Comments section of this post.
As always, let me confess that my primary purpose in compiling the latticework series is to deepen my own thinking about these mental models and writing about them has accelerated my learning. I hope that you’re also deriving some value out of this experiment.
Once again, the best way to learn an idea is to teach it, so grab hold of one of your buddies (if nobody wants to listen to you, sit in front of the mirror and assume you’re talking to your sibling) and share your knowledge.
Take care and keep learning.
Disclosure: Safal Niveshak participates in the Amazon Associates Program, which simply means that if you purchase a book on Amazon from a link on this page, we receive a small commission. The book does not cost you any extra. We give away 100% of the commission for the betterment of the under-privileged.
Good read, Anshul. While reading this, so many small-sample-size incidents went through my mind.
Anshul Khare says
It is important to choose a sample size carefully before making a conclusion or generalization. But a larger sample size doesn’t always help us do a better analysis. We need to add situations or scenarios along with the sample size, and then make it larger. For example, trading strategies that work very well in a bull market don’t work in a bear market or a range-bound market. In such cases a larger sample size may mislead us, and even a smaller sample size would be better. Anshul Sir has explained it very well in the above article: “You can’t judge somebody’s character based on your interaction with him or her a couple of times. Character is revealed by observing a person’s behavior under diverse set of situations and over extended periods of time.” And if we add more observers (“judges”) for a smaller period, it may give us a better result.
Anshul Khare says
My knowledge about trading strategies is approximately less than zero 🙂
Nevertheless, quite an interesting perspective Soumen!
Vishal Khandelwal says
Hey Anshul, great to know that you know that you don’t know about trading. 😉
Great insight on skill and luck based on the law of small numbers. When markets are in a rampant bull phase, analysts start predicting and assigning the market a PE of more than 20-22-25 as it moves upward in the short term. They rationalize its move on a short-term basis. But if you average the market over the long term, it is around a 16-17 PE. The same goes for the downward bear phase too. So all the so-called analysts you watch on TV work on the basis of this mental model only.
Anshul Khare says
Interesting observation Prashant!
Thanks for sharing.
Abhishek Bhattacharya says
Excellent article. In fact, all the articles in the series are an enjoyable read and make me want to read Latticework On Mental Models soon. Many thanks for writing them. It’s a sheer pleasure to read them.
I have a suggestion regarding the mobile website of Safal Niveshak. There is a small vertical bar which appears with all social media sharing buttons. This bar doesn’t hide itself and floats along the web page. The problem is that it hides the content. It is very irritating and one has to scroll to read the content. In fact, at this moment, I can’t read first two words of top two sentences in this comment because it is getting blocked by the bar. I am using iPad mini but I suppose this problem occurs in Android too.
Anshul Khare says
We’ll look into the issue that you pointed out. It would help if you could take a screenshot and share it with us.
Excellent article… Very well written. I am eager to read all your previous posts on lattice work as well…
I have one thought on this article – when it comes to investing, I tend to believe that this law could work in two ways. One, as you rightly pointed out, it could result in wrong conclusions about funds, with investors falling prey to them. The second, I think, is that the real opportunity for small investors is when only a few (read, a small number of) people have discovered a stock. If a large number of people are onto a stock, it will more likely be overpriced/overvalued, with very few exceptions like maybe Page.
Would love to hear your perspective
Thank you for sharing your knowledge!
Anshul Khare says
I am glad you found the article informative.
Thanks for bringing up your question. Let me make an attempt to add some more dimensions to your line of thought.
One distinction that I can think of is facts vs. opinions. So when it comes to sample size insensitivity, we are essentially talking about sample observations of facts (financials being one of them). When we talk about the number of investors talking about a stock, we move into the area of observing opinions (which may or may not be based on facts). Running statistics on opinions isn’t something I’m familiar with, but my speculation is that it’s not as useful as statistics based on facts.
Now as far as opinions are concerned, there is another interesting distinction where two different mental models point in different directions. Social Proof, a behavioural bias, says that people tend to jump off the cliff (like lemmings) so following the crowd could be hazardous. Another mental model, wisdom of crowds, says that collectively people make better decisions. I will try to write about these subtleties in future in Latticework series.
I hope I have given you some more food for thought.
Interesting thought… Opinions are a function of facts and opinions. So, what you are saying is if opinions are based on fewer facts, it could be dangerous vs. opinions based on a large number of facts. Investing should be based on facts and not opinions, so your thought makes sense to me.
It is interesting to see that statistics won’t always hold true when it comes to opinions due to inherent biases. I wonder how many statistical models built on survey data could have been wrong for a survey is nothing but an opinion unless the questionnaire is built around questions on facts.
Thanks for your response!
Vijay Mariappan says
Thanks for the article on the “Law of Small Numbers”. Just wondering how this fits into the perspective of investing?
Not all models work well with the assumption that more data is better. I understand the “law of large numbers” tends towards the “expected value” with a larger sample size. But what about models where large input data leads to the “statistical overfitting” problem (the model even captures the noise in the data)?
Is a large sample always good? Doesn’t it depend on what we are modelling?
Anshul Khare says
Just like any other mental model, even the law of small numbers has its limitations. Knowing the pitfalls of small sample sizes protects us from arriving at faulty conclusions, but, as you pointed out, it doesn’t necessarily mean that increasing the sample size will solve the problem.
Keeping other factors constant (including noise, which affects small samples more severely), the patterns observed in a large sample have a higher probability of representing the general population than patterns observed in small samples.
I guess the statement “since small sample sizes produce faulty conclusions, large sample sizes should produce accurate conclusions” wouldn’t always hold true.
So the “law of small numbers” is a mental model which can be used to challenge a proposed theory or conclusion. Its use in proving something is much more limited.
I hope that helps!
Thanks for chipping in and raising a question!