The narrative fallacy and education
I recently read this article on the Ludicity blog about the narrative fallacy, and it got me thinking about how the concept applies to education.
Just to make sure we’re all on the same page: the narrative fallacy is a term Nassim Nicholas Taleb coined in his book The Black Swan to describe people’s proclivity to create (and prefer) straightforward causal stories to explain events, even when reality is complex and random. Taleb’s notion of the narrative fallacy draws on Kahneman and Tversky’s work on heuristics and biases (also check out Kahneman’s book, Thinking, Fast and Slow), which catalogs the systematic ways people misjudge uncertain situations.
Basically, the idea behind the narrative fallacy is that people prefer stories, and one particularly compelling type of story is a cause-and-effect story, so we tend to tell cause-and-effect stories to help make sense of the world around us. This is especially true when we’re explicitly trying to teach a lesson – think of all the little moralistic cause-and-effect stories we tell young children, like The Three Little Pigs or The Gingerbread Man. Sometimes these stories are benign or even beneficial. If The Gingerbread Man’s misadventure with the fox teaches my kids not to get in a stranger’s car, that’s a win.
The problem is that, in the real world, we often construct these stories after the fact. We try a new workout plan and, if it doesn’t work, we construct a story explaining why it failed. We apply for a job and, if we don’t get it, we construct a story explaining why. Our school’s pass rate on the state’s standardized 3rd grade reading assessment drops by 3 percentage points, some higher-up asks us to explain why, and so we construct a story.
But the real world is messy, and even well-funded, carefully planned studies run by very smart experts in research design can struggle to detect true causality once they leave the confines of the laboratory and move out into the real world. So why do we think the typical person can go back and retrospectively isolate the true cause of some event? You might tell yourself you didn’t get a job because you lacked a certain credential, but it could just as easily be true that the hiring manager was tired and didn’t read your resume carefully, or that the company had an internal applicant they were always going to hire anyway, or that another applicant simply had more experience than you.
Likewise, if a principal is asked why their 3rd grade reading test scores decreased, they might say it was because they had 2 new 3rd grade teachers this year. This sounds like a compelling story, and maybe it’s partially true, but the decrease could also be because a different cohort of students tested this year, or because students were tested on different standards, or because one of the passages kids had to read and respond to was about some boring-ass topic like soap, or because the testing room was too cold. Or all of these things!
Or maybe it was literally just random chance. Like I wrote about last week, all test scores contain some error, and although it’s not terribly likely, it’s possible for this error, aggregated across an entire cohort of students, to produce a lower pass rate.
But that’s not a compelling story.
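To make the random-chance explanation concrete, here’s a quick simulation sketch in Python. Every number in it is made up – the cohort size, the score distribution, the cut score, and the size of the measurement error are all assumptions for illustration, not real assessment data. It holds one cohort’s “true” proficiency fixed and re-administers the test thousands of times with fresh random error, then counts how often the observed pass rate comes in 3 or more points below average even though nothing about the students changed.

```python
import random

random.seed(42)

N_STUDENTS = 60     # hypothetical 3rd grade cohort size (made up)
N_SIMS = 10_000     # number of simulated test administrations
CUT_SCORE = 300     # hypothetical passing cut score (made up)
ERROR_SD = 15       # made-up standard error of measurement

# One fixed cohort of "true" proficiency scores -- the students never change.
true_scores = [random.gauss(310, 40) for _ in range(N_STUDENTS)]

def observed_pass_rate():
    """Pass rate for one administration: same students, fresh random error."""
    passes = sum(
        1 for t in true_scores
        if t + random.gauss(0, ERROR_SD) >= CUT_SCORE
    )
    return passes / N_STUDENTS

rates = [observed_pass_rate() for _ in range(N_SIMS)]
baseline = sum(rates) / N_SIMS

# How often does measurement error alone produce a 3+ point drop?
big_drops = sum(1 for r in rates if baseline - r >= 0.03)
print(f"average observed pass rate: {baseline:.1%}")
print(f"administrations with a 3+ point drop: {big_drops / N_SIMS:.1%}")
```

With parameters in this ballpark, a handful of students sit close enough to the cut score that error alone can flip them, and a 3-point swing in the pass rate shows up in a nontrivial share of simulated administrations. Unlikely in any given year, sure, but far from impossible.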
To some extent, I think root cause analysis (RCA) is a widespread practice that institutionalizes the narrative fallacy. To greatly oversimplify the process, RCA asks a team of people to identify issues – usually by reviewing data – and then ask a series of questions encouraging deeper exploration until they arrive at the root of the problem. The RCA process emphasizes getting perspectives from a diverse set of people, with the idea that the team can examine themes or commonalities across these perspectives to arrive at the root cause. Although this feels more robust than a single person identifying a root cause, I’m still skeptical that even a “diverse team” is going to be great at retrospectively identifying a true root cause. Because remember, the whole premise of Taleb’s narrative fallacy and Kahneman and Tversky’s heuristics and biases is that humans, on the whole, make systematic misjudgments in uncertain situations. So if people tend to be systematically wrong, it kinda doesn’t matter if we have 10 different opinions that are all wrong in similar ways.
This also doesn’t get into any of the interpersonal power dynamics that go into group decision-making, but that’s beyond the scope of this post.
So what’s the solution? To put on a black turtleneck and smoke an unfiltered clove cigarette and nihilistically throw up our hands in defeat? Probably not.
Trying to identify a root cause, and then a solution for it, is obviously better than trying nothing at all, both for organizational improvement and for our own psychological wellbeing.
I suppose my point is that, when we do come up with some post-hoc cause-and-effect story that attempts to diagnose the root cause of an issue, we need to approach it with the acceptance that it’s probably wrong – or at the very least, oversimplified. Accepting that our diagnosis is likely wrong or incomplete can be powerful. It can make us more willing to actually test it (rather than merely gather data that we hope will prove a point we “know” is true) and to adapt or abandon our solution if it’s not working. It makes our ideas less dear to us. And it probably makes us more likely to actually improve in the long run.
If you’re enjoying reading these weekly posts, please consider subscribing to the newsletter by entering your email in the box below. It’s free, and you’ll get new posts to your email every Friday morning.