
Latticework Of Mental Models: Hedgehog Vs Fox

On June 16, 2015, Donald Trump announced his candidacy for President of the United States. Most political forecasters and pundits brushed the news aside as another of Trump’s attention-seeking gimmicks for creating sensational headlines.

Sixteen months later, as November 2016 approached, it became frighteningly clear that Trump was very close to winning the election.

However, while the experts were shaking their heads in disbelief and talking about all the things that were wrong with Trump, there was a cartoonist in San Francisco who had been blogging all through 2015 and 2016, claiming that Trump would win the election in a landslide. He received a lot of flak (even threats), but on November 8, 2016, Scott Adams, the creator of Dilbert, was proved right.

Not entirely right, because Trump’s victory wasn’t exactly a landslide, but he did win the election. And Adams was way ahead of the experts, who were sweating over predicting the precise numbers by which Trump would lose.

Charlie Munger likes to say, “It’s better to be approximately right than precisely wrong.”

[Read more…]

The Most Powerful Mental Model for Identifying Stocks

For starters, we had our Value Investing Workshop in Chennai yesterday, and here are some moments from it…

Safal Niveshak's Value Investing Workshop in Chennai - Feb. 2017

The next workshops are in Mumbai (19th Feb), Delhi (25th Feb), and Hyderabad (5th March). In case you wish to join any of these, please click here to register.

Anyway, let’s start with today’s post.



“It’s a funny thing about life; if you refuse to accept anything but the best, you very often get it.” ~ W. Somerset Maugham – English dramatist & novelist (1874-1965)

As I’ve seen in my past 14+ years of investing in the stock market, Maugham’s thought holds great relevance when it comes to picking businesses for investment.

Pick a business with good economics and a good margin of safety, and the probability of making money in the long run is high. Pick a business with poor economics at any margin of safety, and the probability of losing your shirt, and your entire wardrobe, in the long run is very high.

Understanding a business also adds significantly to your margin of safety, which is a great tool to protect yourself against losing a lot of money.

[Read more…]

Latticework Of Mental Models: Planning Fallacy


When I planned my first road trip from Bangalore to Goa, I calculated that the distance (about 560 km) should take a little more than 9 hours. Factoring in stopovers and a few unexpected events like a flat tyre or traffic, I assumed that 12-15 hours would be sufficient for the road trip. It took 15 hours.

“Good job, Anshul!” I patted myself on the back. Not a bad estimate.

Now, based on this, if I had to forecast the time it would take to cover a distance of, say, 5,000 km, a road trip covering the major cities of India, I might be tempted to extrapolate from the Bangalore-Goa trip. I would probably calculate that since 560 km took one day, 5,000 km should take 10 days, plus 2-3 more as a buffer.
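The back-of-the-envelope extrapolation above can be sketched in a few lines of Python. All the numbers are illustrative, taken from the road trips described above:

```python
# Naive linear extrapolation of road-trip time -- the trap described above.
# All numbers are illustrative, taken from the Bangalore-Goa example.
short_trip_km = 560
short_trip_days = 1            # the 560 km trip took one driving day

long_trip_km = 5000
buffer_days = 3                # ad-hoc allowance for "unexpected" events

# Linear scaling assumes trouble grows in proportion to distance; in
# reality a longer trip piles up disproportionately more ways to go wrong.
naive_estimate_days = long_trip_km / short_trip_km * short_trip_days + buffer_days
print(round(naive_estimate_days, 1))  # → 11.9
```

The neat-looking number is exactly what makes the estimate dangerous: the arithmetic is correct, but the linear model underneath it is not.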

Am I being reasonable in my estimation?

What I am forgetting here is that the second road trip is not only longer but also more complex and subject to many more unforeseen events. My estimate is fraught with over-optimism. And I am not alone in making this kind of mistake.

There are many ways a plan can fail, and most of them are too improbable to be anticipated. The likelihood that something will go wrong, especially in a big project, is high. Overly optimistic forecasts of project outcomes are found everywhere.

In fact, how often are you able to complete everything on your to-do list by the end of the day? This shows how absurdly ambitious we are in our planning.

This bias, in which predictions of how much time a future task will need display an optimism bias (we underestimate the time required), is called the Planning Fallacy. The term was coined by Nobel Laureate Daniel Kahneman and his colleague Amos Tversky.

[Read more…]

Latticework Of Mental Models: Lucretius Problem

It was Friday, March 11, 2011, when a massive earthquake of magnitude 9 hit off the coast of Japan at 2:46 pm local time. The epicenter of the quake was 70 kilometers east of the Oshika Peninsula of Tōhoku.

The earthquake triggered powerful tsunami waves that reached heights of up to 40 meters. It took 50 minutes for the largest wave of the tsunami to arrive at the shores of Fukushima. What followed was totally unimaginable and unexpected for those who take pride in taming Mother Nature.

The Fukushima Daiichi nuclear power plant had six separate boiling water reactors, protected by a 10-meter-high seawall to prevent sea waves from entering the plant.

When the tsunami struck the Fukushima coastline, the gigantic waves easily overtopped the plant’s seawall. It took only seconds for the water to flood the basements of the turbine buildings, disabling the emergency diesel generators. Soon the backup generator building was flooded as well. The result was an explosion, leakage of radioactive material into the seawater, and a huge nuclear hazard.

Why would the engineers and designers of the Fukushima nuclear power plant build a wall only 10 meters high? What made them believe that the waves couldn’t breach that height? [Read more…]

Latticework Of Mental Models: Chauffeur Knowledge

Charlie Munger, in one of his talks, tells the story of the famous scientist Max Planck:

I frequently tell the apocryphal story about how Max Planck, after he won the Nobel Prize, went around Germany giving the same standard lecture on the new quantum mechanics. Over time, his chauffeur memorized the lecture and said, “Would you mind, Professor Planck, because it’s so boring to stay in our routine, if I gave the lecture in Munich and you just sat in front wearing my chauffeur’s hat?” Planck said, “Why not?” And the chauffeur got up and gave this long lecture on quantum mechanics. After which a physics professor stood up and asked a perfectly ghastly question. The speaker said, “Well, I’m surprised that in an advanced city like Munich I get such an elementary question. I’m going to ask my chauffeur to reply.”

Well, the reason I tell that story is not to celebrate the quick wittedness of the protagonist. In this world I think we have two kinds of knowledge: One is Planck knowledge, that of the people who really know. They’ve paid the dues, they have the aptitude. Then we’ve got chauffeur knowledge. They have learned to prattle the talk. They may have a big head of hair. They often have fine timbre in their voices. They make a big impression. But in the end what they’ve got is chauffeur knowledge masquerading as real knowledge. I think I’ve just described practically every politician in the United States. You’re going to have the problem in your life of getting as much responsibility as you can into the people with the Planck knowledge and away from the people who have the chauffeur knowledge.

On a lighter note, the chauffeur had some Planck knowledge of his own, being clever enough to turn that question around!

But in the real world, it is critical to distinguish when someone is “Max Planck,” and when he’s just the “Chauffeur.”

Building Planck knowledge takes deep commitment and a large amount of time and effort. Chauffeur knowledge comes from people who have learned to put on a show. Their talks sound impressive and entertaining, they have good voices and may even ooze great charisma, but their knowledge is not their own.

[Read more…]

Latticework Of Mental Models: Hyperbolic Discounting

Last year I decided to get a brand new laptop for myself. This got me started on the herculean task of selecting from thousands of choices available on numerous e-commerce websites.

After weeks of an excruciating process of comparing, shortlisting, and researching, I finally zeroed in on my final choice. Then started the wait for online discounts.

Very soon, the so-called online sale season arrived, offering a ‘whopping’ Rs. 100 discount on my selection. So much for my patience! But for a self-proclaimed prudent consumer, it still was a good deal.

What happened next is pretty much the story of every online shopper. Just when I was about to place the order, I saw the option of same-day delivery for an additional Rs. 100. Guess what I did? Yours truly didn’t hesitate for a moment to take the offer.

Ironically, I waited for a week for a small discount but when the time came for buying I couldn’t wait another day and forked out extra money just to get my toy immediately. What happened to my admirable qualities of patience and prudence?

A little research on Google revealed that the introduction of a get-it-now temptation into the deal had caused me to behave irrationally. The symptom of this behavioural bias goes something like this.

When I have agreed to wait for six days, I don’t mind waiting for one more day. Well, if I can wait for six, goes the rationale, waiting for seven shouldn’t be a big deal!

But when I am told that I can get something today instead of tomorrow, my temptation refuses to wait for another day.
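Behavioural economists often model this preference reversal with a hyperbolic discount function: the perceived value of a reward of size A delayed by D days falls as A / (1 + kD). Here is a minimal sketch; the discount rate k is purely an illustrative assumption, not a measured value:

```python
# Hyperbolic discounting: the perceived value of a reward shrinks as
# amount / (1 + k * delay). The rate k below is an illustrative assumption.
def hyperbolic_value(amount, delay_days, k=0.5):
    return amount / (1 + k * delay_days)

# Waiting from day 6 to day 7: the relative loss in perceived value is small.
drop_far = hyperbolic_value(100, 6) / hyperbolic_value(100, 7)

# Waiting from today to tomorrow: the relative loss is much larger.
drop_near = hyperbolic_value(100, 0) / hyperbolic_value(100, 1)

print(drop_near > drop_far)  # → True: near-term delays feel far costlier
```

This is exactly the asymmetry described above: a seventh day of waiting costs little, but a single day between “now” and “tomorrow” feels unbearable.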

[Read more…]

Latticework of Mental Models: Illusion Of Control

Every day, shortly before nine o’clock in the morning, a man with a red hat stands at a busy traffic light and begins to wave his cap frantically. After five minutes he disappears.

One day, a policeman comes up to him and asks: “Sir! May I ask what you are doing?”

“I’m keeping the giraffes away,” replies the man.

The puzzled policeman looks around and tells him, “But there aren’t any giraffes here.”

“Well, I must be doing a good job, then,” says the man proudly.

You would conclude that the man with the red hat wasn’t quite in the pink of mental health. But is it just a case of confusing correlation with causation? Actually, there is more to it.

The man’s belief that the absence of evidence (giraffes) is proof of his prowess in controlling giraffe traffic is the result of a behavioural bias called the Illusion of Control. It’s the tendency to believe that we can influence something over which we have absolutely no sway.

So where does this behavioural quirk come from? Over millions of years of evolution, Mother Nature has hard-wired this cognitive bias into the human brain to increase the chances of survival in a hunter-gatherer environment. It’s nature’s way of dealing with uncertainty.

[Read more…]

Latticework of Mental Models: Lucifer Effect

Let me ask you a disturbing question.

“Would you electrocute a stranger?”

Most of you would respond with an emphatic no. But perhaps you are underestimating yourself.

I am neither doubting your character nor your sense of morality, but empirical evidence suggests that human beings underestimate their own vulnerability to the forces that can turn them evil. To support my claim, let me take you through a fascinating experiment.

In 1971, Philip Zimbardo, a young psychologist at Stanford University, wanted to study the psychology of imprisonment, i.e., the roles people play in prison situations. So he invited students to participate in an experiment and randomly assigned them the roles of ‘prisoners’ and ‘guards’. Tests showed that all the students were normal, physically and mentally healthy people. A simulated prison environment was created to mimic real-life prison conditions, where they lived for several days. As part of the role-playing, the ‘guards’ behaved aggressively and the ‘prisoners’ behaved helplessly. [Read more…]