The Cognitive Architecture of Religion
A tour through the Cognitive Science of Religion in 13 ideas
You know what’s super weird?
That of all the myriad cultures that have existed throughout human history, none of them, so far as we know, started out atheist. Given that atheism looks to many of us like the more obvious, more ontologically economical view of the world, you’d expect someone to have landed on it – but nope, all of them started out religious.
Now, if all of these different cultures are arriving at religion independently, then the most obvious conclusion to draw is that there must be certain features of the human mind that are consistently giving rise to religious belief and behaviour. Cognitive Science of Religion (CSR), in a nutshell, is the study of these features.
Back when I was doing research for my piece on the Science of Belief, I came across multiple mentions of CSR. These mentions were all very complimentary, citing the field’s empirical successes and methodological maturity, yet up until then, I'd never heard of it. I’m interested in Cognitive Science and I'm interested in religion – so I started digging.
As it turns out, CSR is every bit as interesting as its name suggests. It’s an interdisciplinary field spanning cognitive science, religious studies, anthropology, developmental psychology, philosophy, and more. Unlike earlier approaches to the study of religion, CSR makes no attempt to understand Religion as a singular entity; instead, it studies the recurrent features of religion that occur across cultures, e.g. things like rituals, supernatural beliefs, morality, etc. (More on this shortly.)
Not only is CSR interesting – it’s also enlightening. There are still lots of live debates and topics of contention within the field, but many of its best-supported ideas have helped me clarify and deepen my thinking on other topics that I'm interested in too: things like meaning in life, the nature of reality, etc.
So, with all of that said, this piece is intended as an intro to CSR. I’m going to start out by giving a general overview of the field, its approach and methods, and then I'm going to run through what I take to be the most interesting – and most valuable – ideas that have come out of CSR so far.
Here goes.
What even is religion? A bottom-up, fractionating approach
One of the main goals of CSR is to understand the cognitive architecture that gives rise to religion…but what actually is religion? This might seem like quite an academic question – we all know what religion is, damn it! – but CSR takes a specific stance on this one, so it’s worth spelling out.
Past attempts to define religion mostly fall under one of three categories:
Substantive definitions – you try to capture all of the myriad manifestations of religion in terms of some feature(s) or quality(ies) that they share. For example, the classical anthropologist Edward Tylor defined religion as ‘belief in spiritual beings’; William James defined it as ‘the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider divine.’
Functionalist definitions – you try to define religion based on the function it serves either in society or in the lives of individuals. For example, Emile Durkheim defined religion as a system of beliefs and practices that unites people into ‘one single moral community called a Church.’
Deflationary definitions – you say that religion is a western category, that most cultures don’t even have a word for religion, that any attempt to shoehorn the pullulating diversity of religious practice into our own definitions is destined to fail, ergo let’s throw in the towel and do away with definitions altogether.
As varied as these approaches may be, they all have one thing in common: they look at religion from on high and try to make broad claims about what it is or isn’t. CSR takes a different approach. Rather than saying ‘Religion is X’ or ‘Religion isn’t Y’, CSR simply sets aside the whole problem of defining religion and instead focusses on trying to understand and explain the recurrent features of religion across the world. So, for example, rather than trying to study all cultural practices that foster community – in line with Durkheim’s ‘community making’ definition – CSR simply takes a single recurrent feature of religion – something like belief in supernatural agents or ritual – and tries to understand the cognitive architecture that gives rise to this feature cross-culturally.
The idea is that while religious beliefs and practices vary widely across the world, there are certain structural commonalities that these beliefs and practices share. By identifying those commonalities – the recurrent features of religion – and attempting to understand the cognitive mechanisms that give rise to them, we gain a far deeper understanding of religious practice in general.
In some sense, this means that CSR adopts a piecemeal approach: it studies each feature of religion in isolation, and it makes no real attempt, as far as I can see, to provide any kind of overarching theory. From one vantage point, this is slightly unsatisfying – who doesn’t want a grand, unifying theory!? – but from another, it’s hard not to admire the modesty and rigour of the approach: you take each feature of religion on its own terms and you use various different methods to understand and explain that feature in isolation. Sometimes these explanations support and complement one another – often they don’t.
The CSR Paradigm
Before diving into some of the main ideas and findings from CSR, it’s worth dedicating a quick section to the paradigm (as I'm calling it) that undergirds CSR research. As far as I've been able to work out, most of this research sits on top of a dual-systems conception of the mind. Dual systems theory is already pretty well known thanks to Daniel Kahneman and his book Thinking, Fast and Slow, but for those of you who aren’t familiar with it, here’s a quick tl;dr of dual systems theory and how it relates to CSR.
Dual systems theory states that the mind is made up of two systems:
System 1 - processes that are fast, effortless, unconscious, and intuitive. For example, recognizing a familiar face, understanding simple sentences, reacting to a sudden loud noise, etc.
System 2 - processes that are slow, effortful, conscious, and reflective. For example, solving a complex maths problem, learning a new language, weighing the pros and cons of a major decision, etc.
CSR posits two distinct kinds of cognitions that sit on top of this dichotomy: intuitive ideas and reflective beliefs. Intuitive ideas are part of our system 1 processing – they’re automatic and unreflective, they’re inherited rather than learned, and they’re shared by all human beings. A good example of an intuitive idea is teleological reasoning:
When there’s a natural disaster, people often search for the reason behind the disaster – why did it happen? Notice, though, that they’re not usually looking for a causal explanation, e.g. two tectonic plates converged, causing the Earth's crust to fracture and release seismic energy. Instead, they’re looking for a teleological explanation: they want to know the purpose behind the disaster. CSR argues (quite convincingly, I would say) that human beings are endowed with the assumption that things – outcomes, artifacts, behaviours – are best explained in terms of their purpose. In certain contexts, when system 2 processes are in play, we’re able to override this default assumption, but in others, perhaps when our system 2 processes become confounded by emotion, we fall back on automatic system 1 assumptions. The assumption that things are best explained in terms of the purpose for which they were intended would be an example of an intuitive idea. (More on this later.)
Just to draw your attention to a potential point of confusion here: intuitive ideas aren’t really ideas in the usual sense of the word. They’re not truth-apt propositions or anything like that. Instead, they’re more like default inclinations to see the world in certain ways and to believe certain things, e.g. an inclination to seek out teleological explanations and find them particularly compelling.
Reflective beliefs, within this framework, are the opposite of intuitive ideas: they’re controlled by system 2 processing, and they are consciously held, articulable beliefs. The official doctrines of Catholicism are a good example of reflective beliefs: they have to be actively learned and they can be communicated from one person to another.
A fairly good metaphor for all of this is a palimpsest: back in the old days when parchment was scarce, scribes would often scrape off an existing text and write over the top of it. This meant that there was a base layer of ink, which was faded but still just about legible, and there was a surface layer of ink, which was much clearer. CSR views religious thinking in a similar way: the base layer of ink is made up of our system 1 intuitive ideas, which have been baked into us by evolution. The surface layer of ink is our system 2 reflective beliefs, which are learned from experience and which are conscious and readily accessible. Much of CSR is basically about trying to identify and understand intuitive ideas while explaining how they contribute to our reflective beliefs and religious behaviours.
But as this metaphor emphasises, system 1 processes can’t be observed directly, so CSR researchers are forced to try and triangulate on the facts by using a range of research methodologies from a range of disciplines. They might look at the development of religious attitudes in young children across different cultures to try and understand the intuitive ideas young children share and how these ideas are influenced by culture. Or they might ask Christians to answer quickfire questions about their religious convictions to see whether (and how) people’s snap answers diverge from their considered religious beliefs.
As I mentioned earlier, CSR is an intensely interdisciplinary field. This is one of its big strengths. Most of its strongest ideas are supported by a range of different data and explanation types, and I think this is probably a big reason for the field’s fruitfulness so far.
A tour through CSR in 13 ideas/findings/debates
OK, so, now that I've laid the foundations, I'm going to talk through thirteen of the field’s most interesting findings/ideas/debates. This will hopefully give you a good lay of the CSR land, while also painting a convincing picture of the features of cognition that give rise to religion.
Note: as you might expect, most of what follows isn’t completely settled science (what is!?). Debate still rages. Some of these findings are more accepted than others, and I’ve definitely done a bit of cherry picking based on what I find most interesting and most compelling. I’ll do my best to signpost my biases throughout the rest of this piece, but worth bearing this in mind before we get started!
1. Promiscuous teleology
I touched on this idea briefly in the ‘CSR paradigm’ section above, but it’s worth going into some more detail because it’s a really important idea – to my mind, at least – wrt the topic of meaning in life.
Have you ever noticed that under certain conditions, people make weird metaphysical claims that have almost no connection with their considered beliefs about the world? There are loads of examples of this, but the most common – and most telling – is the saying ‘everything happens for a reason.’ In one breath, a person will tell you that they can’t believe in God or the soul or free will because these religious ideas are irreconcilable with our best scientific theories. In the next – usually as a consequence of some existentially unsettling event, e.g. a medical diagnosis – they’ll tell you that everything happens for a reason.
I’ve always found this sort of thing both weird and fascinating. Turns out CSR has quite a fleshed out answer as to what’s happening here.
First, recall the stuff I said earlier about dual process theory: namely, that we have intuitive ideas that are associated with fast, automatic system 1 processing, and we have reflective beliefs that are associated with slow, conscious system 2 processing. CSR claims that human beings are naturally inclined towards teleological reasoning: we are constantly attempting to infer the purpose for which things are created. This kind of teleological reasoning is a fast, system 1 process. It can be consciously overridden – as it often is when we’re looking for causal explanations of phenomena – but it quickly reasserts itself when system 2 processes are taken offline or compromised for some reason, e.g. when we’re under cognitive load, emotional stress, existential anxiety, etc.
Evolutionarily speaking, the tendency to reason teleologically probably conferred certain advantages upon our ancestors. If we could look at an artifact or a tool and infer its purpose – what it’s for – then we could begin using that artifact to further our own ends. Likewise, if we could infer the intentions behind the actions of our tribesmen – or even, I guess, our enemies – then we could navigate our social world far more effectively.
But as will probably be obvious, there are lots of contexts in which teleological reasoning simply isn’t appropriate. If you want to understand why an apple just fell from a tree, you’d probably be making a mistake if you tried to explain this in terms of, e.g. the intentions of the tree or the purposes of the wind. A better explanation – one that sheds light on the phenomenon and how to control it in the future – would probably relate to the ripeness of the fruit, the time of the season, the strength of the wind, etc.
The natural tendency to seek out teleological explanations in all domains of life – including those where it isn’t appropriate – is referred to in CSR as promiscuous teleology. Here’s a quick run through of the type of evidence used to argue that promiscuous teleology is our natural, system 1 default way of approaching the world:
Children show a strong bias for teleological reasoning but adults are more judicious about where they seek out teleological explanations - e.g. if you ask children and adults questions about different objects, children are much more likely to try and explain the properties of both living and non-living things in terms of teleological explanations, e.g. ‘Why are stones pointy?’ ‘So that animals won’t sit on them and smash them.’
When adults are given time to reflect on the answer to questions about what things are ‘for’, they’ll often reject teleological explanations when it’s appropriate to do so. However, if you put them under time pressure, they tend to revert to teleological explanations.
Likewise, scientists are particularly good at rejecting teleological explanations in response to questions about what things are for, but if you put them under time pressure, they give you as many teleological explanations as anyone else, e.g. they’ll say that ‘trees produce oxygen so that animals can breathe’ is true.
The tendency to reason teleologically occurs across cultures and isn’t just a western, Abrahamic thing, e.g. adults and children from countries with a lower degree of religiosity, like Britain, China, and Finland, all show a tendency to apply teleological reasoning to both living and non-living things.
The tendency to infer intentions and mental states emerges extremely early in children.
Promiscuous teleology begins to show up at around the same age that children start to understand artifact design, suggesting the two things may be underwritten by the same basic ability.
Everything I've said so far about promiscuous teleology has been disconnected from the subject of religion, so let me try and tie this back in. Teleological reasoning, when used effectively, allows us to understand the purposes for which specific artifacts were created, but what happens when we apply teleological reasoning to things like an earthquake or even the entire universe?
In such cases, we naturally try to infer the purpose for which these things were brought about. But notice: there’s no such thing as an intention without an intender. What kind of being could create something like the world or an earthquake? It would have to be something pretty powerful – something supernatural. So the suggestion here, as I understand it, is that while promiscuous teleology doesn’t necessitate belief in supernatural beings, it does lead us to ask the kinds of questions, and engage in the kinds of reasoning, that presuppose the existence of such entities.
2. Hyperactive agency detection device
Human beings have a tendency to project human-like attributes onto nonhuman beings, e.g. plants, animals, artifacts, etc. This is referred to as anthropomorphism, and it’s been known and talked about since the days of Hume, Spinoza, Nietzsche, etc. CSR scholars push this idea one step further by positing what they call a Hyperactive Agency Detection Device (HADD).
HADD refers to our automatic tendency to attribute intentional agency to the ambiguous stimuli that we encounter in our environment. For example, imagine I'm a prehistoric man walking through the forest and I suddenly hear a crashing sound to my right. My automatic reaction is to assume that some kind of agentic being – whether a tiger or an enemy clan member – is responsible for the crashing sound.
Or, if you want a modern example: imagine I'm home alone one evening and I hear the sound of a window opening. Naturally, I assume this sound signifies the ingress of an axe-wielding murderer into my home. When ten minutes pass and I'm still alive, I go to investigate and discover that the window was pushed open by the wind outside.
CSR scholars make the case that HADD is evolutionarily advantageous. If I assume that the crashing in the bush was caused by a predator but it was actually caused by the wind (false positive), then there’s no real downside (other than the odd heart attack). If I assume that the crashing in the bush was caused by the wind but it was actually caused by a ravenous tiger (false negative), then I'm toast.
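To make this cost asymmetry concrete, here's a toy expected-cost calculation. The numbers are invented and this is just my own illustration of the standard error-management argument, not a model taken from the CSR literature:

```python
# Toy error-management calculation with made-up costs - purely illustrative.
def expected_cost(p_predator, assume_agent, cost_false_alarm=1, cost_missed_predator=1000):
    """Expected cost of a single ambiguous 'crash in the bushes' event."""
    if assume_agent:
        # Always treating the crash as an agent: we only ever pay the small
        # startle/flee cost, and only when there was no predator after all.
        return (1 - p_predator) * cost_false_alarm
    # Always treating the crash as wind: we pay the catastrophic cost
    # whenever there really was a predator.
    return p_predator * cost_missed_predator

for p in (0.001, 0.01, 0.05):
    print(f"p(predator)={p}: assume agent -> {expected_cost(p, True):.2f}, "
          f"assume wind -> {expected_cost(p, False):.2f}")
```

Even at very low predator probabilities, the over-detecting policy comes out cheaper on average, which is the evolutionary logic usually offered for why agency detection would be tuned to err on the side of false positives.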
HADD has been used to explain why people see faces in clouds and Jesus Christ on a piece of toast: we read agency in ambiguity.
From my perspective, what I find particularly interesting is the potential interaction between HADD and promiscuous teleology. If I am naturally inclined to assume that everything was created with a purpose, and if I am constantly meeting with the intuition that the ambiguous stimuli in my environment are being caused by an intentional agent, then I'm really only one small step away from positing the existence of something supernatural. Notice, though: it’s not that these intuitive ideas entail belief in the supernatural; they simply create the psychological conditions within which beliefs of this kind become particularly plausible.
3. Psychological essentialism
There’s converging evidence that human beings have a natural tendency to attribute unchanging essences to things in the world. So, for example, what makes a tiger a tiger? Is it its stripes? Or its whiskers? Or its claws? Psychological essentialism is the implicit belief that members of a category, e.g. tigers, share deep commonalities, i.e. essences, that determine their membership in that category. In other words, what makes a tiger a tiger isn’t its superficial attributes – its stripes, its whiskers, its claws – but rather the fact that it possesses the essence of a tiger.
At first glance, this may all sound slightly dubious, so let me share an example to sharpen up your intuitions:
The ship of the Greek hero Theseus was preserved in Athens as a memorial of his life and acts. Every year or so, a plank of wood from the ship would rot and have to be replaced by a new one. After many years, the entire ship – from the hull to the oars to the sails – had been replaced. Is this still Theseus’s ship?
As the first planks of wood are replaced, most of us would say that, yes, it’s still Theseus’s ship. We might even cling to this intuition until there’s only a single plank of wood left from the original ship. But once the entire ship has been replaced, suddenly this position becomes hard to defend.
Psychological essentialism is the idea that we attribute an unchanging essence to Theseus’s ship. This unchanging essence is what makes Theseus’s ship Theseus’s ship (in our minds, at least). You can swap out a few planks here and there, but the essence of the ship remains unchanged. At a certain arbitrary point, however, all of us will suddenly intuit that the ship in question is no longer Theseus’s ship. It’s as if the essence – which apparently wasn’t tied to any feature in particular – suddenly dissipates at the removal of a specific feature.
So, yeah, the upshot of psychological essentialism is that when we encounter things in the world – trees, squirrels, pens – we seem to attribute essences to these things. These essences are what persists over time as outward features change, and they are ultimately what confers the status of tree-ness or squirrel-ness or pen-ness on those things.
Now, you might be asking, what does this have to do with religion?
Well, one of the suggestions within CSR is that psychological essentialism gives religious theories, e.g. creationism, a cognitive advantage over scientific theories, e.g. evolution. After all, creationism is perfectly compatible with the idea that animals and plants possess unchanging essences – presumably essences conferred by God – but evolution is not. Evolution tells us that all life forms evolved from a single ancestor; that all species are constantly changing and evolving; that there can be considerable diversity within species; and that there are no immutable essences that ultimately confer species status on any animal.
This line of reasoning – as well as lots of the other ideas presented in this piece – is used to argue for the naturalness-of-religion thesis, i.e. that religion is cognitively natural, that science is not, and that we almost have to go against the grain of our cognitive predispositions in order to absorb and integrate scientific theories into our models of the world.
From my perspective, psychological essentialism is also interesting because it bears on a number of debates within philosophy too.
To give one example: when metaphysicians try to investigate the ‘ontological structure of concrete particulars’, most of them end up in one of two places: bundle theory or substratum theory. Bundle theorists say that concrete particulars are just made up of bundles of properties. So, for example, they’d say that my pen is made up of the property of being metal and the property of being black and the property of weighing 50 grams. If you change any of these properties, then the pen ceases to be the same pen. Substratum theorists claim that concrete particulars are made up of bundles of properties PLUS a bare substratum that holds all of these properties together. Under this view, the substratum is what gives the pen its identity; you could change literally every property and the pen would continue to be the same pen because the substratum stays the same.
Now, from my (probably ignorant) perspective, I think this whole debate is completely misbegotten. I don’t think these philosophers are arguing about the ontological structure of concrete particulars at all; I think they are arguing about the structure of their cognitions – about essences. The substratum theorist invents an incomprehensible metaphysical entity to defend some of the intuitions provided by psychological essentialism; the bundle theorist rejects these intuitions altogether (usually in order to defend other commitments that are equally dubious).
I think that this debate only makes sense to a brain that naturally attributes essences to objects in the world. Is this pen the same pen if it loses one of its attributes? The word ‘same’ here is so ridiculously vague – the same in what respect? Materially the same? Functionally the same? – and yet I think people take it seriously because they intuitively know which type of sameness we’re referring to: essentially the same.
Note: it’s not that I think the acknowledgement of psychological essentialism solves these debates. I just think it casts them in an interesting light – a light that might help explain why certain intuitions arise and why we think these debates are even worth having in the first place. This makes me wonder if we not only need a Cognitive Science of Religion but also a Cognitive Science of Philosophy.
Other topics in philosophy that psychological essentialism seems to bear upon:
The problem of universals
Personal identity theory
Parmenides vs. Heraclitus
(Reading this back, I worry the last few paragraphs might seem a little bit like axe-grinding. Apologies if this is the case!)
4. Immanent justice reasoning
You know earlier on I was talking about the cognitive dissonances that emerge between our intuitive ideas and our reflective beliefs? Here’s another example:
‘What goes around comes around.’
This kind of claim – the assumption that good things happen to good people – is mostly baseless if you think about it. It flies in the face of our best scientific theories – there are no naturalistic causal links that could bring these effects about – and we have many examples of this assumption being violated.
In CSR, this kind of thinking is referred to as immanent justice reasoning, and there’s growing evidence that it’s a product of both cognition and culture. On the cognition side of things, we know that young children engage in immanent justice reasoning, which suggests it’s something that manifests before enculturation has taken effect. We also know that immanent justice reasoning shows up in a wide variety of different cultures, suggesting it’s probably the expression of our shared cognitive machinery rather than contingent cultural factors.
On the culture side of things, there’s evidence that adults engage in more immanent justice reasoning than children, suggesting this tendency is probably strengthened by culture. There’s also evidence that people who watch lots of fictional television (which typically contains lots of ‘just world’ narratives) employ more immanent justice reasoning than those who don’t.
We have some reason to believe that immanent justice reasoning might be derived from – or maybe built on top of – our evolved sense of fairness. One characteristic study supporting this claim involved giving participants action-outcome scenarios – a man insults a beggar (action) and then trips over his own shoelace (outcome) – and asking them to explain why the outcome occurred. When the character’s action, e.g. insulting the beggar, was proportional to the outcome, e.g. tripping over their shoelace, people tended to say that the character tripped over their shoelace because they insulted the beggar. In contrast, when the character’s action, e.g. insulting the beggar, was disproportional to the outcome, e.g. being hit by a car and killed, people tended to say that the character’s death was not caused by the action. So, in other words, participants seemed to view misfortune as a means of dispensing justice, but only when the misfortune was proportional to the offence, i.e. when it was fair.
If immanent justice reasoning is built on top of our sense of fairness, then it becomes quite easy to develop an evolutionary account of this tendency. The survival of human beings was heavily dependent on our ability to cooperate with one another. Individuals who abided by and enforced the laws of fairness were more likely to be cooperated with. Ergo, we evolved a sensitivity to violations of fairness.
Much as with promiscuous teleology, immanent justice reasoning seems to look a bit like a category error: teleological reasoning is legit when we are looking at intentional actions and artifacts, but it’s inappropriate when we use it to try and explain the world. Immanent justice reasoning is legit when we are looking at social relations – e.g. where we often see moral infractions met by proportional retribution – but it’s inappropriate when we try to explain the wider world.
Again, returning to philosophy: when I look at immanent justice reasoning and the research surrounding it, it feels like I'm looking at the cognitive foundations of moral realism. This would explain why it feels more cognitively natural to believe in some kind of moral order, and why so many philosophers are unwilling to take the leap into something like moral antirealism.
5. Coexistence reasoning
One of the patterns that runs through everything I've written about so far is the distinction between intuitive system 1 ideas and reflective system 2 beliefs. A question that naturally emerges from this distinction is whether the slow system 2 beliefs eventually come to displace the innate system 1 ideas, or whether the two coexist in some way. Based on some of the stuff I've said already – e.g. the palimpsest metaphor at the start – you might not be surprised to hear that the coexistence account is the far more compelling of the two. Some of the research supporting this position is fascinating.
A famous example comes from the work of the anthropologist E.E. Evans-Pritchard, who spent a ton of time doing ethnographic fieldwork with the Azande people in North Central Africa. Every now and then, one of the Azande’s granaries – wooden structures used to house grain supplies – would collapse. Sometimes this would happen and nobody would be harmed. At other times, a granary would collapse on members of the tribe and they would be injured as a result.
The Azande were not ignorant of the natural causes of the collapse. They knew it was a result of termite damage and decay. Here’s the interesting thing, though: when the granary collapsed and nobody was harmed, the tribe mostly relied on naturalistic explanations to account for the event. In contrast, when the granary collapsed and people were injured, the tribe cited magical explanations to account for what had happened, e.g. they’d say that witchcraft had caused the granary to collapse at that particular moment, when those particular tribe members were underneath it.
So, in other words, the tribe was aware of both types of explanation, and the specifics of the situation – whether or not it caused direct physical harm – determined the kind of explanation they would lean upon most heavily.
A similar finding comes from the work of Cristine Legare, who investigated the way that South African adults and children reason about AIDS. In line with Evans-Pritchard, she found that these people used a combination of natural and supernatural explanations to account for the illness. Interestingly, she also showed that people used supernatural explanations, e.g. witchcraft, to account for the ‘why’ question of the illness – why did this happen? – and that they used natural explanations, e.g. biology, to account for the ‘what’ of the illness – what happened to cause this?
To me, this last insight is fascinating. Notice two things: first, that in line with the teleological reasoning stuff discussed above, ‘why’ questions seek out a reason – but reasons belong to intentional agents. Trouble is, it's often hard, maybe impossible, to imagine a natural intentional agent that could have brought about such an effect through natural means, so these ‘why’ questions force us to either posit a supernatural agent or else a supernatural mechanism, e.g. witchcraft.
Second, notice that people generally only seek out answers to the ‘why’ of an event when that event has human significance. For example, if a volcano erupted on a distant island and didn’t harm, or even affect, any humans, we’d probably be happy to chalk it up to natural forces and leave it at that. But if a volcano erupted and killed my entire family, you better believe I'm going to be looking to the sky asking why this happened. Incidentally, this insight might also account for the Azande’s selective application of supernatural explanations: when the granary collapse didn’t harm any tribe members, its human significance was limited, so naturalistic explanations were sufficient; when the granary collapse did harm tribe members, human significance was great, so teleological explanations were required.
Another interesting finding re: coexistence reasoning is that people are sensitive to context when producing explanations. In one typical study, researchers provided participants with a narrative that either highlighted the biological or the spiritual aspects of death. Turns out that if you highlight the biological aspects of death, people tend to give clean-cut naturalistic explanations – e.g. bodily functions have ceased – whereas if you highlight the spiritual aspects of death, they are more likely to assert that the ‘living functions’ continue in some way.
So, in summary, it seems that human beings are capable of holding both natural and supernatural explanations at the same time, which actually ties in quite neatly with the fragmented theory of belief storage I wrote about in my last piece. It might well be the case that the framing of the situation determines the specific fragments that are activated, which in turn determines the kind of explanation that’s generated.
6. Why do people believe in an afterlife?
Afterlife beliefs show up everywhere: where there are humans, there are afterlife beliefs. CSR has lots of plausible ideas about why this might be the case, but most of the ones I've come across feel a bit partial and disconnected. I’m going to pull three of them together to try and offer a convincing picture as to why afterlife beliefs emerge so consistently.
First, one that we’ve already talked about: psychological essentialism. It looks like we automatically attribute unchanging essences to objects in the world. We can swap out half of the planks on Theseus’s ship, yet still maintain that it is the same ship. Now think about human beings: a human being undergoes immense physical and mental change throughout their life, yet we have no difficulty representing this as a single person that persists across these changes.
When asked questions about the continuity of the self after death, most people say that the mind is more likely to continue than the physical body, but that the soul is more likely to continue than either. Here’s my question: if the soul is neither the mind nor the body, then what the hell is it? To me it looks suspiciously like an unchanging essence.
Next up, Theory of Mind (ToM). ToM is a hypothesized cognitive ability that allows us to represent other people’s thoughts, desires, intentions, goals, etc. This ability involves making inferences about other people’s unobservable mental states, which in turn allows us to better predict and understand their behaviour. Pascal Boyer and others argue (quite plausibly, I think) that when a person dies, the theory of mind that we have for them continues to exist. This makes it almost impossible to stop thinking about the deceased as psychologically active in some way.
Finally, offline social reasoning. When a person leaves a room, do they cease to exist? Of course not – and our minds are easily able to represent this fact. This is called offline social reasoning: the ability to represent others who are not physically present as continuing to exist.
Bringing these three ideas together:
Psychological essentialism makes it easy for us to represent a person as continuing to exist even when they undergo radical physical and mental alterations. This might mean that it’s not particularly difficult for us to imagine someone as continuing to exist in some way even after they have died.
Theory of mind makes it extremely difficult for us to stop representing dead people as being psychologically active in some way.
Offline social reasoning makes it easy for us to imagine deceased people as continuing to exist somewhere else.
Along the same lines as other ideas presented so far, I’m not saying that these cognitive propensities entail afterlife beliefs; I'm just saying that it looks like they create the mental conditions within which afterlife beliefs become possible, if not probable.
7. Minimally counterintuitive concepts
Another question CSR attempts to answer is ‘why do some religious concepts survive and propagate, while others don’t?’
One of the most famous answers that the field has been able to come up with is the idea of minimally counterintuitive concepts, which was formalised by Justin Barrett into MCI theory. MCI theory states that a concept will achieve transmissive success if it violates our intuitive expectations by a minimal amount. Here, when they say ‘minimal amount’, they mean something like ‘along one dimension.’ So, for example, a rock that can reproduce would violate our intuitive expectation of a rock along one dimension; a rock that can reproduce and fly and read minds would violate our intuitive expectations along three dimensions.
MCI states that if a concept lines up perfectly with our intuitive expectations, it won’t be memorable and therefore won’t achieve transmissive success. Likewise, if a concept deviates too far from our intuitive expectations, possibly along too many dimensions, it becomes less memorable and therefore won’t achieve transmissive success.
An example that is sometimes given of a minimally counterintuitive concept is ghosts. Ghosts fit our intuitive concepts of people in the sense that ghosts mostly look like people, they have thoughts and intentions, they (kind of) act like people (given their ghostly thoughts and intentions), etc. But there’s one primary difference between ghosts and people: ghosts are incorporeal – they can pass through physical matter. The fact that ghosts violate our intuitive expectations along one, and only one, dimension is given as the reason for their huge transmissive success.
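To make the claimed relationship a bit more concrete, here's a toy sketch. The numbers are made up and the inverted-U shape is simply MCI theory's claim hard-coded, not something derived from data:

```python
# Toy scoring of concepts by how many intuitive expectations they violate.
concepts = {
    "an ordinary rock": 0,                               # fully intuitive
    "a ghost (a person who passes through walls)": 1,    # minimally counterintuitive
    "a rock that reproduces, flies and reads minds": 3,  # wildly counterintuitive
}

def toy_memorability(violations):
    # Hard-codes MCI theory's claimed inverted-U: memorability peaks at
    # exactly one violation and falls away on either side.
    return max(0, 10 - 8 * abs(violations - 1))

for concept, v in concepts.items():
    print(f"{concept}: {v} violation(s) -> memorability score {toy_memorability(v)}")
# The fully intuitive rock scores 2, the ghost scores 10, the triple-violation
# rock scores 0 - only the minimally counterintuitive concept does well.
```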
As I say, MCI theory is quite a famous idea within CSR, so I felt like I had to include it here, but it doesn’t really strike me as particularly interesting or enlightening. In fact, I feel like MCI theory’s transmissive success could well be chalked up to the minimal counterintuitiveness of the theory itself: it doesn’t really go very far beyond what most of us would pretheoretically assume anyway.
Anyhow, it turns out that support for MCI theory is quite patchy. Some research finds that children show a cognitive bias for minimally counterintuitive ideas, but other research has failed to bear out MCI theory’s predictions. Also, some researchers have found it tricky to actually categorise minimally counterintuitive concepts with any real rigour, e.g. does a ghost really only violate our concept of ‘human’ along one dimension? Ghosts are usually thought to be beings that exist in some kind of liminal state between life and the afterlife proper, so surely that’s another dimension along which they violate our intuitive concepts, isn’t it?
8. Do we need religion for morality?
One of the better supported findings within CSR is that both young children and non-human animals are capable of a kind of proto-morality. For example,
Paul Bloom found that infants as young as six months old showed a preference for helpful puppets that assisted in pushing a ball up a hill vs. unhelpful puppets that tried to push the ball down the hill.
When Bloom exposed 18-month olds to the same scenario, he found that they explicitly labelled the helpful puppets as ‘the good guys’, and that they chose to give treats to the helpful puppets over the unhelpful ones.
Frans de Waal housed two capuchin monkeys in side-by-side self-contained cages. Both monkeys were trained to give a token to a researcher in exchange for a cucumber. In one trial, the researcher gave one of the monkeys a grape instead. (Grapes = much more desirable.) This new protocol was then repeated, and the monkey that received the cucumber became progressively more and more agitated, eventually throwing the cucumber back at the researcher in anger. This feels a bit like a monkey form of immanent justice reasoning, although notice that the monkey who received the grape (as far as I know) didn’t seem to kick up too much of a fuss.
This kind of research offers a pretty convincing rebuttal of the idea that morality wouldn’t exist without religion. Having said that, I don't think it allows us to go all the way to saying that religion plays no role in morality. We may come with certain inbuilt moral inclinations, but it seems to me like we still need a way of representing, formalising, and justifying these inclinations to ourselves. It’s like we have the system 1 ideas in place, but we need justifiable system 2 beliefs to map onto those ideas and help us make sense of them. That’s where something like religion is helpful and possibly conducive to prosociality.
(As a broader point – one that I think most of these researchers would get behind – it's clear that culture and religion play a fairly large role in determining how our inbuilt premoral sentiments eventually come to express themselves.)
If I'm right on this point, then you’d expect religious people to be more moral than nonreligious people, but the evidence for this claim isn’t great. Some surveys show that religious people tend to donate more to charity, but lab experiments on prosociality fail to bear this out. One explanation is that religious people care more about being perceived as moral, so they give survey responses that are aimed at creating that impression. Another explanation is that religious people truly are more charitable, but that this charity is directed to people within their in-group, so it doesn’t show up in lab experiments. This final point definitely vibes with the idea that religion fosters in-group solidarity.
Interestingly, there’s loads of research showing that when religious people are primed with supernatural agent concepts (e.g. God), they act more charitably in economic games. One quite plausible explanation for this is that religious priming reminds religious people of a monitoring supernatural agent, and that it is the awareness of being monitored that brings about the prosocial behaviour. This chimes with the fact that non-believers do not act more prosocially when primed with God concepts but that they do when primed with concepts about social institutions like the law, police, etc.
Side point: if it’s all just about monitoring rather than anything else, then wouldn’t it be reasonable to expect religiosity to correlate with prosociality? After all, the more religious I am, the more frequently I'm going to be self-priming with God concepts.
9. Morality-as-cooperation theory
Morality-as-cooperation theory is a popular theory within CSR and evolutionary psychology about the purpose of morality. The idea is that human beings are ultra-social animals; that our survival both as individuals and as groups is highly dependent on our ability to cooperate; and that morality is something like the glue that binds us together and facilitates cooperation. Under this view, morality – and the cognitive architecture that supports it – evolved to foster cooperation and harmony within groups.
Philosophically speaking, morality-as-cooperation is pretty interesting on a few levels. First, because it seems to imply a functionalist definition of morality: rather than defining morality in terms of some feature it possesses, it suggests that morality is best defined by its function, which is to foster cooperation. In some sense, this then almost takes us to a rule consequentialist account of morality: moral sentiments, which are like proto-rules, are determined based on the extent to which they foster cooperation. This means that rather than maximise utility or self-interest or some other central value, morality’s evolutionary purpose is to maximise cooperation.
Now, if you were trying to put a normative spin on all of this, you might say that when we evaluate different rules and actions, we should really evaluate them based on the extent to which they foster cooperation. On the one hand, I find this idea quite compelling: rather than pluck values and rules out of the sky, why not dance to the beat of our evolutionary programming?
On the other hand, I'm not sure that the general rule of trying to derive normative claims from evolutionary facts is always sound. Here I'm thinking of the kinds of people who choose to have fifty children under the justification that this is their evolutionary purpose and they are giving their genes what they want. Just because our genes want something doesn’t necessarily mean we should give it to them.
10. Big Gods theory
Consider the following observations:
For most of human history, people belonged to relatively small tribes.
A shift happened during the last 12,000 years, resulting in most people now living and cooperating within anonymous, large-scale societies.
There’s evidence that for most of human history prior to the last 12,000 years, supernatural agents tended to have limited powers and be unconcerned with morality. These supernatural agents required propitiation via rituals and rites, but they weren’t, for the most part, overly bothered about how people treated each other.
During the last 12,000 years, a number of omnipotent, moralizing supernatural agents – or ‘Big Gods’ – have burst onto the scene and taken the civilised world by storm. The vast majority of the world’s religious believers now belong to ‘Big God’ religions. These gods definitely are bothered about how we treat one another, possibly above all else.
Big Gods theory takes these four observations and draws the conclusion that the rise of Big, moralizing, prosocial Gods is what made the cooperation necessary for large-scale societies possible.
Aside from the prima facie, correlational plausibility of the idea, there are a few other bits of evidence that Big Gods proponents offer up to support their case. For one, a number of naturalistic and lab experiments point to the fact that people tend to be more prosocial when they’re being monitored and when they expect to be held to account for their actions. Since Big Gods are pretty much the ultimate form of monitoring and accountability, it’s reasonable to think they’re probably bringing about some prosocial effects via these mechanisms.
Another line of evidence used to support Big Gods comes from the fact that the more moralizing a society’s Gods, the more cooperative that society tends to be. From my perspective, it also just seems reasonable to think that if I believe that a system of morality is endorsed – and apparently enforced – by an omnipotent being, then the likelihood that I will accept and internalise that system is going to be far higher than if it was not endorsed by such a being.
As always, the Scandinavian countries offer a refractory data point: if moralizing Big Gods are required for large-scale cooperation, then how come the secular Scandinavians are so damn cooperative? One answer might be that Big Gods were important when making the leap from small tribes to large-scale societies – that they allowed a shared system of morality to form and take root – but that once this morality is well established and instantiated in culture and law, it’s possible, under certain conditions, to enjoy its benefits while sloughing off its religiosity. Something like that.
11. Modes of religiosity theory
So far, I’ve yet to say anything much about ritual. Let’s fix that.
When we look at religious rituals around the world, we see a huge amount of diversity. There’s firewalking, ritual piercing, tattooing, hunts, sacrifices, feasts, incantations, liturgies, prayers, hymns. Trying to organise this immense diversity into meaningful categories may seem like a futile effort, but that’s exactly what social anthropologist Harvey Whitehouse attempted to do with his Modes of Religiosity (MOR) theory.
Back in the 80s when Whitehouse was doing ethnographic fieldwork in Papua New Guinea, he noticed a pattern: for the large, mainstream religious group he was observing (the Pomio Kivung), rituals tended to be highly repetitive and involve lots of public speaking (much like, say, Catholic mass). Every now and then, however, a smaller religious group would break away from the Pomio Kivung, and when this happened, the rituals of these breakaway groups tended to be emotionally intense and involve ordeals of some kind or another.
From these observations plus a whole load of other research, Whitehouse suggested that rituals fall into two broad patterns, which he called modes of religiosity:
The doctrinal mode – characterised by rituals that are highly repetitive with low sensory pageantry and little emotional arousal. The rituals of modern Christianity would fall into this category.
The imagistic mode – characterised by rituals that are infrequent but come with high sensory pageantry and emotionally intense experiences. Intense rites of passage would fall into this category.
An important feature of Whitehouse’s MOR theory is that each mode of religiosity corresponds with a specific form of sociopolitical organisation: the doctrinal mode tends to occur in larger mainstream religious groups like the Pomio Kivung, whereas the imagistic mode tends to occur in smaller groups like the Pomio Kivung breakaways.
To explain all of this in cognitive scientific terms, Whitehouse borrows some ideas from the memory research that was coming out when he was first developing his theory. When we experience something emotionally intense, the memory of that experience comes to form part of our autobiographical self. When we share an emotionally intense experience with someone else – such that the memories forming our autobiographical selves overlap – we come to be fused with that person in some way. (Think soldiers in a warzone.) Imagistic rituals are essentially a means of manufacturing this kind of emotional experience, allowing bonds between members to be formed and reinforced.
Trouble is, imagistic rituals don’t really scale. It’s hard to have thousands or millions of people go through the same highly intense emotional experience together, so doctrinal rituals are a way of overcoming this obstacle. Doctrinal rituals are highly systematised, meaning they can be easily transplanted from one context to another. What’s more, through repetition, these ritual behaviours can help push points of doctrine from explicit to implicit memory, where they become firmly rooted in long-term memory. This is comparable to learning a poem: you repeat the poem enough times and it eventually comes to be stored as a procedure – a sequence of actions that follow one after the other – rather than as a set of semantic units, say.
(It’s hard not to think of the illusory truth effect here: the more we hear a statement, the more likely we are to perceive that statement as being true. If we’re regurgitating dogma week after week for years on end, you have to assume that this is going to influence the strength of our beliefs.)
So the idea, as I understand it, is that the imagistic mode allows intense bonds to form between members of small religious groups, but that the doctrinal mode produces cohesion when the group gets too large to support imagistic rituals. Since its inception in the 80s, a lot of empirical support has accumulated for Modes of Religiosity theory, so it’s still very much considered to be a live theory within CSR. The one substantial change is that the two modes of religiosity are now widely taken to be poles on a continuum rather than a binary.
Interestingly, Whitehouse and others use archaeological data from around the world to argue that doctrinal rituals played a central role in the rise of agriculture and the formation of large-scale civilisations. They also use this evidence to argue that big moralising gods – à la Big Gods theory – follow, rather than precede, increases in social complexity.
12. What’s the point of ritual?
Rituals are weird. They’re behaviours that lack a physical-causal mechanism: people often understand the intended result of the ritual but not how the ritual is supposed to bring that result about. Rituals are highly specific and involve actions that appear redundant: people will perform specific actions in specific sequences, and the exact number and order is often crucial to the outcome. Rituals are frequently performed to produce a specific result, but when the result fails to materialise, people continue to perform the ritual in the future.
(Incidentally, these last two points seem to be linked: the more specific and elaborate a ritual, the easier it is to attribute its inefficacy to a poor execution rather than to the ritual itself.)
So, on the surface, rituals appear to be highly irrational, but there’s actually lots of good evidence that they serve some important functions. These functions break down into two broad types: those that affect individuals and those that affect groups. I’ll start with the former.
The basic finding – which is extremely well-supported at this point – is that rituals reduce anxiety caused by uncertainty. The earliest evidence came from Bronislaw Malinowski’s work with Melanesian islanders. He found that the extent to which Melanesian fishermen engaged in ritual behaviour was determined by the unpredictability of the fishing conditions: when the men fished in the calm waters of the lagoon, they didn’t bother performing any rituals at all; when they fished in choppy, shark-infested waters, where outcomes were uncertain, they performed all kinds of rituals.
Since then, evidence supporting this anxiety reduction account has come from all over, e.g.
Richard Sosis studied religious women in Israel during the Lebanon war. He found that psalm recitation was associated with lower rates of anxiety in women who stayed in the north during the war, but that there was no relationship between psalm recitation and anxiety in women who left the warzone. Interestingly, Sosis makes the case that the psalm recitation helped these women deal with the uncontrollable conditions of war, but that it made no difference in controlling the more controllable stressors of everyday life.
Participating in ritualised behaviour after writing about the death of a loved one increased feelings of control and reduced feelings of grief when compared to a control group that sat in silence after the writing task.
75 participants from the Marathi Hindu community in Mauritius were exposed to anxiety through public speaking. The control group were simply asked to sit and relax, while the ritual group were asked to perform a habitual ritual in the local temple. Those in the ritual condition showed both lower perceived and lower physiological anxiety.
Research with animals and kids suggests that when you introduce unpredictability into an environment, ritual behaviour begins to emerge, e.g. B.F. Skinner found that if you present food to pigeons at random intervals rather than on a predictable schedule, they exhibit weird proto-ritualistic behaviour like side-to-side hopping and twirling around.
So, yeah, seems like the evidence for the anxiety reduction account is pretty strong. Next question: how does this work? The most convincing account I've come across is that by focusing exclusively on a ritual behaviour, a person can swamp their working memory with the task at hand and thereby crowd out any anxiety-inducing thoughts. This account might also offer some insight into things like OCD: if highly specific, ritualistic behaviours reduce anxiety, then it doesn’t seem like a stretch to suggest that our brains might learn to reach for those behaviours whenever anxiety emerges. The more we do this, the more compulsive the behaviour becomes, until the relationship between anxiety and the quasi-ritual is almost automatic.
One final point before I move onto the function of ritual for groups: how are rituals formed?
I’ve not seen any talk about this while I've been doing my research – it probably exists somewhere – so I wanted to put forward a theory based on some of the ideas in my dopamine piece. Dopamine is in large part about stamping in a relationship between a behaviour, a context, and a reward. So, for example, when I step into a pub (context), I walk to the bar and order a pint (behaviour) and a beer appears (reward). When I drink the beer, it triggers a dopamine spike, which stamps in the relationship between these three components. That means the next time I walk into the pub, the contextual cues result in a dopamine spike that activates the behaviour and that, presumably, results in the reward being obtained.
But now, what happens when I have no control over the rewards in my environment – when, say, I walk over to the bar and order a beer but nothing appears, and then, ten minutes later, a beer spontaneously materialises for no apparent reason?
Maybe my brain erroneously stamps in a relationship between the context, the reward, and whatever behaviour I was performing directly prior to receiving the reward. For example, I might have been scratching my chin when the beer suddenly appeared at the bar, so my brain stamps in an association between the beer, the pub context, and the behaviour of scratching my chin. The next time the desire for another beer arises, I scratch my chin to try and pull the requisite causal lever – but it doesn’t work. So then I start asking myself: where was I standing when I scratched my chin? What was I wearing? What was I doing directly before the chin scratch? I begin trying to recreate the conditions perfectly, and at some point during my re-enactment, a beer appears! Dopamine stamps in the relationship once again. Now I have strong evidence that this ritual is causing the beers to appear – I just need to make sure I perform it exactly as I did the first time. If the beer fails to appear, it’s because the ritual wasn’t performed correctly.
(This is a poor, slightly tongue-in-cheek example – in this instance, I would obviously just assume that the beer took ten minutes to materialise because the service was slow, rather than because the causal lever I’d pulled was defective – but if you swap out the beer for some kind of uncontrollable reward like rain, the point carries across.)
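To make my hand-wavy theory a little more concrete, here’s a minimal toy simulation of the idea. Everything in it – the behaviour names, the numbers, the credit-assignment rule – is my own invention for illustration, not anything drawn from the CSR literature. Rewards arrive at random, completely independent of what the agent does, but ‘credit’ gets stamped onto whatever behaviour immediately preceded each reward, and the agent then favours its strongest association:

```python
import random
from collections import Counter

random.seed(0)

BEHAVIOURS = ["scratch_chin", "tap_bar", "hum_tune", "check_phone"]

def simulate(trials=2000, reward_prob=0.05, explore=0.2):
    """Rewards arrive at random, independent of behaviour; credit gets stamped
    onto whatever behaviour immediately preceded each reward."""
    strength = {b: 0 for b in BEHAVIOURS}   # stamped-in behaviour -> reward associations
    performed = Counter()

    for _ in range(trials):
        # Mostly repeat the behaviour with the strongest association; sometimes explore.
        if random.random() < explore or max(strength.values()) == 0:
            behaviour = random.choice(BEHAVIOURS)
        else:
            behaviour = max(strength, key=strength.get)
        performed[behaviour] += 1

        # The 'beer' appears (or doesn't) regardless of what the agent just did...
        if random.random() < reward_prob:
            # ...but credit is stamped onto the immediately preceding behaviour.
            strength[behaviour] += 1

    return strength, performed

strength, performed = simulate()
print("association strength:", strength)
print("times performed:     ", dict(performed))
# Because credit only accumulates and the agent keeps repeating its strongest
# association, whichever behaviour happened to precede the first reward tends to
# hoover up credit for most later rewards too: an arbitrary 'ritual' locks in.
```

The point isn’t that brains literally run this loop; it’s just a sketch of how a rich-get-richer credit-assignment process can lock an agent onto an arbitrary ‘ritual’ when the rewards in its environment are uncontrollable.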
Anyhow, moving on to the function of ritual with respect to the group. The recurring theme across a range of slightly different theories is that ritual improves group cohesion and fosters cooperation. This is extremely important because, all things being equal, the more cooperative a group, the more likely it is to survive.
This role of ritual as an aid to cooperation isn’t particularly surprising, but some of the theories as to how ritual brings about these effects are pretty interesting. I’ll run through a couple of the more influential ones.
Fusion – we’ve already talked about this in the MOR section, but as a quick recap: rituals that involve emotionally intense experiences help fuse individuals together via overlapping autobiographical memories.
Costly signalling theory – many rituals involve costly practices, e.g. praying for hours each day, going for days without food, penance, etc. Costly signalling theory argues that these practices may serve as a hard-to-fake signal that the communicator is a genuine cooperator who can be trusted. Think of it like this: how do you separate people who genuinely believe in a religion’s dogma from those who don’t? You create rituals that are sufficiently expensive – in terms of time, calories, etc. – that they would deter an imposter from engaging in them. (There’s a toy sketch of this screening logic just after this list.)
Credibility enhancing displays (CREDs) – the idea of CREDs (as I understand it) is basically an extension of costly signalling theory: if ritual behaviours are costly and help signal true belief, then this has two effects. First, it enhances the credibility of the individual performing the ritual; second, by demonstrating commitment to the group’s beliefs, it transmits this commitment to others. Interestingly, one prediction of this theory is that when prominent figures within a religion fail to act in a way that signals belief, this actually undermines religious commitment in others. There’s some evidence to support this idea, e.g. the US Catholic abuse scandal resulted in a significant and long-lasting decline in religious participation in the zip codes where abuse cases came to light.
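To make the ‘sufficiently expensive’ logic from the costly signalling bullet concrete, here’s a minimal sketch. The utility framing and all of the numbers are mine, purely for illustration, not taken from the literature. The idea is that a genuine believer gets intrinsic value out of the ritual on top of the benefits of group membership, while an imposter only gets the group benefits, so there’s a band of ritual costs that only genuine believers will pay:

```python
def will_participate(group_benefit: float, intrinsic_value: float, ritual_cost: float) -> bool:
    """An agent joins in iff the total payoff of the ritual exceeds its cost."""
    return group_benefit + intrinsic_value > ritual_cost

# Toy numbers (illustrative only): both types value group membership equally,
# but only the genuine believer gets intrinsic value from the ritual itself.
GROUP_BENEFIT = 10.0
BELIEVER_INTRINSIC = 8.0
IMPOSTER_INTRINSIC = 0.0

for cost in [5.0, 12.0, 20.0]:
    believer = will_participate(GROUP_BENEFIT, BELIEVER_INTRINSIC, cost)
    imposter = will_participate(GROUP_BENEFIT, IMPOSTER_INTRINSIC, cost)
    print(f"cost={cost:>4}: believer participates={believer}, imposter participates={imposter}")

# cost=5.0  -> both participate: the ritual is too cheap to screen anyone out.
# cost=12.0 -> only the believer participates: participation now signals genuine belief.
# cost=20.0 -> nobody participates: the ritual is too costly even for believers.
```

If the group can keep the ritual cost inside that middle band, participation becomes a reliable signal of commitment precisely because it’s too expensive for an imposter to fake.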
13. Religion as adaptation or by-product?
To tie this section off, I’m going to run through one of the big, framing debates within CSR: is religion an advantageous adaptation or simply a by-product of other advantageous adaptations?
Adaptationists argue that at least some aspects of religion evolved because they confer competitive benefits on individuals or groups. There’s definitely some good evidence to support this. For example, religious commitment and ritual participation are associated with improved physical wellbeing. Religion promotes moral behaviour. Religion fosters cooperation within communities. Etc.
Those on the by-product side of the debate argue that religion is just a by-product of certain cultural and cognitive predispositions that evolved for reasons that have nothing to do with religion. As one example of many, consider HADD again: CSR scholars sometimes make the claim that HADD, though it evolved for completely unrelated purposes, is actually supportive of religious belief. This doesn’t seem like a stretch. If we’re the kinds of animals that automatically perceive agency in ambiguous stimuli, then we are, in some sense, psychologically prepared for certain kinds of religious belief, e.g. the belief that God is omnipresent and that he constantly exerts an influence on the world. HADD doesn’t necessarily take us the full way to religious belief, but it certainly tilts our credence-allocations in this direction.
Anyhow, the way I've presented this debate so far is a bit simplistic. In reality, I think most scholars within the field would accept that the truth lies somewhere in the middle. Certain cognitive propensities evolved for reasons that have nothing to do with religion. Religion was then a by-product of these propensities – but a by-product that itself conferred all kinds of evolutionary advantages too.
An easy analogy is feathers. So far as we know, the earliest feathers belonged to dinosaurs that weren’t capable of flight. That means they – the feathers – must have evolved for some other reason, e.g. to keep the animal warm or to help it attract a mate. Fast forward tens of millions of years and feathers have become essential to birds’ flight, despite this function having nothing to do with their original raison d’être. CSR scholars suggest that something similar has probably happened with religion.
Final thoughts: Does CSR discredit religion?
If you’ve made it this far, then you should hopefully now have a good high-level understanding of CSR. That’s great – but if you’re anything like me, you might feel like all of this research raises one massive question:
Do these findings discredit religion?
CSR seems to be a bit of a Rorschach test here. Religious people take a look at the findings from this field and exclaim ‘See! Proof! God literally planted the seeds of faith and religious belief into our cognitive architecture. If this doesn’t convince you, nothing will!’ Atheists take the opposite stance: ‘See! Proof! Not only are many religious beliefs demonstrably false – we now also have evidence explaining why we are cognitively inclined to believe these mistaken things too!’2
While I’ve been working my way through this literature, my first impulse has always been to come down on the atheist side of the debate. We now have evidence that much of our cognitive architecture is biasing us in favour of religious beliefs. Here’s a clumsy analogy:
Imagine you and I are arguing about the colour of the sky. I’m adamant that the sky is blue, but you’ve read scientific theories stating that it is actually green. Now imagine that new findings within the science of perception come out telling us that the human mind has a weird tendency to apply a blue pigment to everything. This doesn’t mean that everything I once thought was blue isn’t actually blue, but it does mean that my perceptual machinery has been systematically increasing the likelihood that I assign blue status to different objects in the world. So now when I look at the sky and I see that it is blue, my confidence in its noumenal blueness should be reduced, because I know that my perceptual machinery has been artificially tilting my colour-allocations in this direction. The sky might still be blue – but I should be a bit less confident that it is.
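Here’s that analogy in Bayesian terms, with completely made-up numbers purely for illustration. Discovering the blue tint amounts to learning that P(I see blue | the sky isn’t blue) is much higher than I’d assumed, which drags down P(the sky is blue | I see blue) even though my experience of looking at the sky hasn’t changed at all:

```python
def posterior_blue(prior_blue: float, p_see_blue_if_blue: float, p_see_blue_if_not: float) -> float:
    """Bayes' rule: P(sky is blue | I see blue)."""
    evidence = prior_blue * p_see_blue_if_blue + (1 - prior_blue) * p_see_blue_if_not
    return prior_blue * p_see_blue_if_blue / evidence

PRIOR = 0.5  # suppose I start out genuinely unsure whether the sky is blue or green

# Before the discovery: I assume my perception is basically trustworthy,
# so seeing blue when the sky isn't actually blue should be rare.
print(posterior_blue(PRIOR, p_see_blue_if_blue=0.95, p_see_blue_if_not=0.05))  # ~0.95

# After the discovery: my mind tints everything blue, so I'd often see blue
# even if the sky were green.
print(posterior_blue(PRIOR, p_see_blue_if_blue=0.95, p_see_blue_if_not=0.60))  # ~0.61
```

Same prior, same experience – only my estimate of how often my perception produces a false ‘blue’ has changed, and my confidence drops accordingly.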
Initially, I felt like the findings from CSR were a bit like this fictional discovery that the mind applies a blue tint to everything – only imagine that the tint is really, really strong. But here’s the thing: if you’re someone who endorses, say, theistic evolution – i.e. if you think that evolution is real but that it was caused by God in some way – then you might see the fact that we’ve evolved the propensity to believe in God as evidence that God was probably the architect of this whole thing. After all, doesn’t this seem prima facie more plausible than the idea that the random, undirected forces of the universe would implant us with a propensity to believe in God?
(Now that I write this out, I find it hard to see how someone who believes in both God and evolution could arrive at any other conclusion. This interpretation does feel like the more plausible of the two if you’re looking at these findings through this particular lens.)
So, yeah, in summary: although this research definitely nudged my credences in a particular direction, I actually think, on reflection, that CSR’s findings are completely ambiguous on this point. They can be interpreted in a number of ways, and the specific colouring of your interpretation will probably depend upon your background beliefs about the existence of God, the source of the universe, evolution, etc.
(An aside: arguably the most obvious conclusion to draw from all of this is that ‘they can’t all be wrong!’ The problem with this is that most of these religions disagree with each other, so even if, best case scenario, one of them is right, it still means that almost all of them are wrong.)
Interestingly, researchers working within the field seem to be adamant that CSR has no bearing on the credibility of religious ideas one way or the other. My first impulse here was to assume that this kind of stated agnosticism is just an agreed-upon front so that religious people will keep letting them into their communities. Now I’m not so sure.