“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness”. – Alfred Korzybski 1
If you spend much time on LinkedIn, you’ll frequently encounter posts seeking to “debunk” various models, concepts, frameworks, etc.
The predilection for debunking is particularly prevalent when the models, concepts, or frameworks in question, and their creators, enjoy a certain degree of recognition.
However, as statistician George Box famously pointed out: all models are wrong, but some models are useful.2
And since all models are wrong — i.e. they can never capture every aspect of the reality they represent — it’s always possible to find fault with any framework, map, or model that you want to “debunk”.
You’ve probably used Google Maps and its Street View option. If so, you’ve likely found it useful for most everyday purposes.
However, when the old Victorian lead water main feeding our house needed replacing with a new MDPE pipe last year, the contractors who dug up the street needed to know where to find the water main, and how to avoid damaging other buried services such as electricity, gas, broadband, and waste water.
For that purpose, Google Maps is useless.
Google could of course update Maps to be more comprehensive by showing these underground services — and maybe even the street’s geological substrata.
But whilst doing so would make the maps more accurate, for most use cases those extra details would clutter them and therefore make them less useful.
I therefore think Box’s aphorism is more helpful with a coda: all models are wrong, but some models are useful, sometimes, under some circumstances, for some purposes.
This coda is also helpful in illuminating where most “debunking” efforts go awry, as I’d like to explore in the remainder of this article.
The Myers-Briggs Type Indicator
One of the most popular targets for the attentions of the LinkedIn debunkerati is the Myers-Briggs Type Indicator (MBTI).
On average, I see probably an article a month attacking MBTI, often using arguments along the lines of its famous 2013 “debunking” by Wharton academic Adam Grant.3
Despite these attacks, MBTI remains one of the most widely applied psychological instruments. Every year it gets inflicted on millions of people, in 29 languages, and its vendor proudly boasts that it is used by 88 of the Fortune 100 companies.4
There are plenty of horror stories of how it gets misused. A little knowledge being, as ever, a dangerous thing.
But it’s also used in useful ways by thoughtful practitioners.5
I’m not an MBTI jockey myself, but have seen how it can be useful, especially with individuals who’ve given little or no consideration to how others might perceive things differently.
So, although the MBTI model is, like all models, inevitably wrong, it is useful — at least as a way to help individuals become aware, perhaps for the first time, that other competent well-intentioned colleagues may have different ways of seeing and being.
That can then be a starting point for them to see that diversity of perspectives is not only to be expected, but to be enthusiastically encouraged if an organisation is to create a future-fit culture of innovation, agility, and adaptiveness.6
The problem, as with all models, concepts, and frameworks, comes when MBTI is used beyond the boundaries of its usefulness — for example, in attempting to predict how someone will perform in a real-world organisational context.
MBTI isn’t of much use in that situation because it fails to take into account powerful systemic factors that have much greater influence on an individual’s attitudes and behaviours than the one-of-sixteen buckets they happened to fall into the last time they completed an MBTI assessment.7
But, to be fair, just as it would be foolish to blame a chainsaw for its failure to cut through concrete, it’s not the fault of the wrong-but-useful MBTI model if someone uses it in ways that aren’t useful, or indeed may be positively detrimental.8
That may sound like I’m championing the MBTI, especially if you side with its debunkers. So let me clarify — I think the alternative of asking people which animal they would like to be, and why, provides similar benefits to an MBTI assessment.9
Conversely, if you’re a fan of MBTI, my equating it to “My Beast Type Is” will likely seem insulting, especially if you’ve shelled out $3,000 to become a Certified MBTI jockey.
Paying a significant fee for certification in any framework, map, or model is very likely to increase the buyer’s belief in the benefit of the badge, not least because the alternative is to admit — or at least suspect — that you’ve been duped.
Scientific Models are also Wrong
I’ll return shortly to look at a couple more wrong-but-useful organisational models that tend to attract debunkers like moths to halogen headlights.
But first I’d like to make the case that “scientific” models are also always wrong — starting with my own experience of studying physics.
At both school and university in the 1970s, my fellow physics students and I were taught Newtonian mechanics — even though relativity theory and quantum theory had both “debunked” Newton’s models more than half a century earlier.
So why were respected educational institutions still teaching “wrong” science?
Well, in practice, the limitations of Newton's theories only come into play at extremes of space, time, and speed.
In most circumstances encountered in everyday life, Newton’s models are wrong-but-useful.
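To put a rough number on “most circumstances” (a standard special-relativity calculation, offered purely as illustration rather than anything from the original syllabus): the size of the relativistic correction to Newtonian mechanics is set by the Lorentz factor, which at everyday speeds barely differs from 1.

```latex
% Lorentz factor for an object moving at speed v (c = speed of light)
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

% At motorway speed, v \approx 30\ \text{m/s}, so v/c \approx 10^{-7}, and
\gamma - 1 \approx \tfrac{1}{2}\,(v/c)^{2} \approx 5 \times 10^{-15}
```

A correction of roughly five parts in 10¹⁵ is utterly negligible for cars, bridges, and cricket balls, which is why Newton’s “wrong” models remain the right tool for almost all everyday engineering.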
If you’re not one yourself, you may not be aware that we engineers happily switch between different wrong-but-useful models all the time, (hopefully) without mistaking them for the reality they represent.
When I used to design electronics, for example, various models of how transistors work proved useful, including Ebers-Moll, hybrid-π, and Gummel-Poon.
All these models are different and all of them are wrong, but each is more useful than the others in particular contexts.
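As a concrete illustration (using standard textbook forms of these models, not anything specific to my old designs): Ebers-Moll describes the transistor’s large-signal, exponential DC behaviour, while the hybrid-π model linearises that behaviour around a chosen operating point for small-signal AC analysis.

```latex
% Ebers-Moll, forward-active approximation: large-signal collector current
% (I_S = saturation current; V_T = kT/q, roughly 26 mV at room temperature)
I_C \approx I_S \left( e^{V_{BE}/V_T} - 1 \right)

% Hybrid-pi: linearisation around a bias point; the transconductance g_m
% relates small input-voltage wiggles to small output-current wiggles
g_m = \frac{\partial I_C}{\partial V_{BE}} = \frac{I_C}{V_T},
\qquad i_c \approx g_m \, v_{be}
```

The exponential model is the useful one when setting a DC bias point; the linear one is the useful one when calculating an amplifier’s gain. Neither is “true”, and each would mislead if applied in the other’s context.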
If you’ve read Thomas Kuhn’s seminal book The Structure of Scientific Revolutions, you’ll know that science periodically runs into trouble when successful, well-established models become so widely accepted that they are mistaken for the underlying reality.
When this happens, even leading scientists often demonstrate an inability to let go of the old “truth” when a better one comes along.
Max Planck, founding father of quantum theory, expressed this phenomenon as follows:
“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”10
This is often paraphrased more pithily as “science progresses one funeral at a time”.
So let’s now return to the organisational realm and look at two other wrong models I’ve personally found quite useful, and which often prove red rags to debunker bulls: the NTL Learning Pyramid and Carol Dweck’s Fixed and Growth Mindsets.
The NTL Learning Pyramid
The NTL Pyramid originated from the National Training Laboratories in Bethel, Maine, USA in the early 1960s.
It claims that people retain:
5% of what they learn from classroom chalk and talk,
10% of what they learn from reading,
20% of what they learn from audio-visual presentations,
30% of what they learn from demonstrations,
50% of what they learn from group discussions,
75% of what they learn from applying lessons “on the job”, and
90% of what they learn from teaching others.
Despite the suspiciously precise percentages, I found the above hierarchy intuitively appealing when I first encountered it back in 2002.
It resonated with my then 15 years of experience helping organisations create future-fit cultures of innovation, agility, and adaptiveness, and I was curious to know more about its origins.
So I emailed NTL, who by then positioned themselves as “NTL Institute for Applied Behavioral Science” based in Virginia, USA.
Here’s the essence of the message I sent to NTL in August 2002:
“Dear NTL — I am interested in finding out more about the research behind the figures quoted in The Learning Pyramid. I’d like to know specifically: when the research was conducted; what kind of people were researched; what type of information was being 'retained' and how the 'retention' rates were measured. I imagine that many people have asked this question before and that you probably have an article or URL that you can email me”.
Their reply was as amusing as it was illuminating:
“Thanks for your inquiry of NTL Institute. We are happy to respond to your inquiry about The Learning Pyramid. Yes, it was developed and used by NTL Institute at our Bethel, Maine campus in the early sixties when we were still part of the National Education Association's Adult Education Division. Yes, we believe it to be accurate - but no, we do not any longer have - nor can we find - the original research that supports the numbers. We get many inquiries every month about this - and many, many people have searched for the original research and have come up empty handed. We know that in 1954 a similar pyramid with slightly different numbers appeared on p. 43 of a book called Audio-Visual Methods in Teaching, published by the Edgar Dale Dryden Press in New York. Yet the Learning Pyramid as such seems to have been modified and always has been attributed to NTL Institute.”
So much for learning retention… 🤣
But, despite NTL’s inability to retain and retrieve their own research — and to be fair I’d also have a hard time finding research reports I wrote when I came to Cambridge 40 years ago — I still find the model wrong-but-useful.
Do I accept the quoted percentages as accurate? Not for a second.
Does its hierarchical sequence reflect my own experience? Absolutely.
I’ve repeatedly seen over the past 40 years that people who actually apply models on the job, and who help others learn to apply them in practice, develop much deeper practical, embodied knowledge, skill, and experience than those who simply sit in classrooms being chalked and talked at by presenters espousing academic theories.
Carol Dweck — Fixed and Growth Mindsets
I wrote a detailed article about Dweck’s Fixed Mindset and Growth Mindset concepts a couple of years ago, so won’t repeat that content here.
It’s worth noting though that when Satya Nadella became Microsoft CEO in 2014, he applied Dweck’s mindset concepts to help Microsoft shift from what he referred to as a “know-it-all” culture to a “learn-it-all” culture.
When I wrote about this back in 2022, Microsoft’s market capitalisation had grown from $318Bn in 2014 to more than $2,000Bn. By September 2024 it had risen to a staggering $3,200Bn — a ten-fold increase over the ten years of Nadella’s tenure.
That ought to be enough to make anyone take a seriously open-minded look at Dweck’s ideas.11
Debunking debunking
The bottom line is this: the inherent complexity of life in the real world can never be completely squished into any model, concept, map, schema, or framework.
But in a world prone to scientism, model makers frequently fool themselves and others into mistaking their maps for the territory.
The recognition that all models, maps, and frameworks are wrong isn’t new.
As Shakespeare had Hamlet astutely observe: “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy”.12
Any concept, map, scheme, or framework can inevitably be shown to be wanting.
In short, they can all be “debunked” simply by focusing on what’s wrong with them.
It seems a shame to do that when we could focus instead on what’s useful in frameworks, models, and maps.
Doing that might inspire more of us to lace up our boots, leave our comfort zones, and explore for ourselves the territory they point us towards out in the real world.
That would be much more useful — especially if we came back and shared with others what we discovered and learned through our own applied, real-world experience.
Questions for Reflection
Which maps, models, concepts, or frameworks do you consider “debunked”?
Which ones do you prefer instead — and why?
What do you think of my models, maps, and frameworks…? 🤣
1. Science and Sanity, 5th edition, p. 58, Institute of General Semantics, 1994 (1st edition 1933). Alfred Habdank Skarbek Korzybski (1879–1950) was a Polish-American independent scholar who developed the field of general semantics.
3. Adam Grant’s article “Say Goodbye to MBTI, the Fad That Won't Die” on LinkedIn.
4. “MBTI Facts & Common Criticisms” by MBTI vendor The Myers-Briggs Company is worth a read.
5. I don’t know Dirk Verburg personally, but he makes a sound case for why, and how, he still uses MBTI in 2024 here.
6. See the previous article Moving beyond “us” and “them”.
7. For more on these systemic influences see the previous article The seven channels of culture.
8. Hopefully the MBTI certification process makes practitioners aware of the MBTI’s limitations.
9. You couldn’t attend any kind of teambuilding workshop a couple of decades ago without encountering this rigorous psychological assessment exercise…
10. Scientific Autobiography and Other Papers (1949), translated by F. Gaynor, pp. 33–34, 97.
11. My previous article titled Growth mindset - crucial but insufficient addresses how future-fit cultures need to take a further step — from Fixed, to Growth, to 2D3D mindsets. It also describes how John Vervaeke, Professor of Cognitive Science at the University of Toronto, resolves the main scientific criticisms of Dweck’s work.