Let’s start with a catchy song by a Grammy nominee:
You think you know it all
But your mental models are flawed
You're missing out on the truth
That can help you grow and evolve
You're always stuck in your bubble
But you don't have any clue
You need to update your models
To see the world from a new view
(c) Bing
Another round, a few more mental models. “I can do it all day,” a hero once said. Little did we know what he meant…
Parkinson's law
This one is clear and ruthless: Work expands to fill the time available for its completion.
The folks at consuunt.com describe the gist eloquently in a graphic:
Facts
Quoting: “Several corollaries exist. One is that expenses expand to fill an income. Same for expectations and success. In IT, data can expand to fill a given storage level.”
Actionable advice
Let’s listen to Barbara Oakley in Learning How to Learn: set a time limit for learning/work, i.e. a time of day when you’ll set everything aside and rest.
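In code terms, that advice is a hard timebox. A minimal sketch in Python (the `timebox` helper and its “task returns True when done” protocol are my own illustration, not anything from the course):

```python
import time

def timebox(task, minutes):
    """Run `task` in small steps until it reports it's done,
    or until the time budget runs out -- then stop, no matter what."""
    deadline = time.monotonic() + minutes * 60
    while time.monotonic() < deadline:
        if task():                # task() returns True once finished
            return "done early"
    return "time's up, rest"      # Parkinson's law, contained
```

The point is that the deadline is fixed before the work starts, so the work can’t expand to fill an open-ended evening.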
Wiio's laws
A couple of humorous laws by the Finnish communication scholar and politician Osmo Wiio. They summarize the observation that differences in perception, thoughts, beliefs, and expectations usually multiply to the point where communication becomes difficult:
- Communication usually fails, except by accident.
  - Even communication that seems foolproof can still fail, apparently going by Murphy’s law.
- If a message can be interpreted in several ways, it will be interpreted in a manner that maximizes the damage.
- There is always someone who knows better than you what you meant with your message.
- The more we communicate, the worse communication succeeds.
- In mass communication, the important thing is not how things are but how they seem to be.
  - An important and truthful item IMO; the halo/horns effect is an example of this kind of bias.
- The importance of a news item is inversely proportional to the square of the distance.
- The more important the situation is, the more probable it is that you’ve forgotten an essential thing you remembered a moment ago.
Actionable advice
Keeping glossaries to synchronize on high-level, abstracted concepts? Asking for feedback? Adapting to the other side and the stakeholders? Ain’t that some bullshit.
Sayre's law
In a dispute, emotions are inversely related to what's at stake.
This clashes a bit with Taleb’s Skin in the Game maxim that people should have something to lose or risk when they make decisions or give advice. But come on: there’s Gibson’s law, which we overviewed just recently, stating exactly that for every ohmygod-im-so-smart-and-philosophic ahem ahoy there’s an equally smart persona with an opposite opinion.
Actionable advice
Keep your balance. Don’t rave over things you’re emotionally invested in, for starters.
Stigler's law
No scientific discovery is named after its original discoverer.
This may indeed happen for several reasons:
Our world is inherently complex, and we’re not as mysterious in our thinking as the Dude Above presumably is in his ways. Sometimes people just arrive at similar, or the same, conclusions or solutions.
You can discover a free-energy engine, but with one Twitter subscriber and no skill in presenting your work, it leads to nothing. In a world where CapturedAttention / Time means everything, you’ve just got to grow your audience.
Actionable advice
- Don’t be that default “aaaaah, that’s my thought/discovery!” guy/gal.
- Learn to capture attention and tell stories; here you can see an example of what a failure to do so can lead to. Yeah, and a love for overly complex sentences.
Mill mistake
Assuming the familiar is optimal.
It’s not so much a mistake as a bias, I’d say. Bla bla, the brain tries to conserve energy and optimize; blah blah, neurons that fire together wire together.
Actionable advice
Checklists to minimize, let’s say, System 1 thinking, in the words of Mr. Kahneman?
One or another form of the tenth man rule, maybe? Just something or someone to disagree, or to ask: “What if this is not the optimum?”
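A tenth-man check can even be mechanized. A toy sketch (the `review` function and its wording are purely illustrative, not an established pattern):

```python
def review(decision, objections):
    """Refuse to accept a familiar default until at least one
    recorded objection has challenged it (a toy tenth-man rule)."""
    if not objections:
        raise ValueError(
            f"Nobody challenged {decision!r}. What if this is not the optimum?")
    return decision
```

The design choice is simply to make disagreement a precondition: an empty objections list blocks the decision from passing unexamined.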
Hickam's dictum
Problems in complex systems rarely have one cause.
We must face it: in an interconnected system, causes and effects are interconnected too. Hence diagnosing such a system in search of the one cause is not always prudent.
I’d reckon the best approximation is, again, a vector of values, i.e. {Cause1: Importance1, Cause2: Importance2, …, CauseN: ImportanceN}. But I may be mistaken. Of course, you can’t be: you’re transomniscient, an omniscient in the body of a hallucinating schizophrenic.
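That cause/importance vector is easy to make concrete. A minimal Python sketch, with made-up cause names and weights:

```python
def rank_causes(weights):
    """Turn {cause: raw importance} into shares that sum to 1,
    most important cause first."""
    total = sum(weights.values())
    shares = {cause: w / total for cause, w in weights.items()}
    return dict(sorted(shares.items(), key=lambda kv: -kv[1]))

# Hypothetical outage: several contributing causes, no single culprit.
incident = {"config drift": 5, "missing alert": 3, "stale runbook": 2}
ranked = rank_causes(incident)
# ranked == {"config drift": 0.5, "missing alert": 0.3, "stale runbook": 0.2}
```

Nothing here is 0% or 100%: every cause keeps a share, which is exactly Hickam’s point against hunting for the one root cause.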
Welcome to YAWN/Boi Diaries❣️