A New and Better Theory of Supernatural Beliefs
A paper forthcoming in Human Nature implicitly supports Jordan Peterson's thesis from his first book, Maps of Meaning.
Since at least the 1980s, the cognitive science of religion has labored under the assumption that religious beliefs are nothing but superstitious bullshit. It is, of course, never put quite so bluntly in the textbooks and journal articles, but this is the tacit assumption that has driven much research and theorizing. This assumption has never sat well with me, although I am not a traditionally religious person. My objection to it is articulated nicely by Jordan Peterson in his first book, Maps of Meaning:
How is it that complex and admirable ancient civilizations could have developed and flourished, initially, if they were predicated upon nonsense? If a culture survives, and grows, does that not indicate in some profound way that the ideas it is based upon are valid?… Is it actually sensible to argue that persistently successful traditions are based on ideas that are simply wrong, regardless of their utility? Is it not more likely that we just do not know how it could be that traditional notions are right, given their appearance of extreme irrationality? Is it not likely that this indicates modern philosophical ignorance, rather than ancestral philosophical error? (Peterson, 1999, p. 19)
Indeed. It seems unlikely that ancient civilizations could have survived and proliferated so successfully if the stories they used to organize their social lives — stories that consumed a large portion of their attention and energy — were nothing but superstitious bullshit. This is not to say that those stories must be literally true (they clearly aren’t), but it is to say that we should at least consider the possibility that they have some functional utility, and therefore some kind of validity.
A new paper accepted at Human Nature (though not yet officially published there) finally puts the pieces together and articulates what is, by my estimation, a much better view of the cognitive and evolutionary origins of religious beliefs than the current byproduct theories (which tend to depict these beliefs as the misfiring of a hyperactive agency detector or something like that). Aaron Lightner and Ed Hagen's new paper is entitled "All models are wrong, and some are religious: Supernatural explanations as abstract and useful falsehoods about complex realities." Their paper puts forward a very different explanation from the byproduct theories: for Lightner and Hagen, supernatural beliefs are useful falsehoods that help us reason and communicate about complex, noisy phenomena.
Here is a summary: Complex systems are systems with many interacting parts. Many problems in the world require us to deal with complex systems, including predicting the behavior of other animals, other people, the weather, social trends, politics, warfare, and so on. Virtually any problem that requires us to think about the medium to long term involves complex systems. The trouble with complex systems is that they are inherently hard to predict. We cannot build detailed models of them; there are simply too many moving parts. And we cannot forecast their behavior over the medium to long term, because small perturbations can send the system down a massively different trajectory. How, then, do we deal with complex systems? For the most part, we simplify them with heuristics. I don't need detailed knowledge of weather patterns to know that if it's dark and cloudy outside, I should grab an umbrella. I don't need detailed knowledge of other people to know that if people suddenly start acting distant around me, I have probably violated some social norm or another. I don't need detailed knowledge of my car to know that the gas is running low.
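To make that unpredictability concrete, here is a minimal sketch (my own illustration, not from Lightner and Hagen's paper) of sensitive dependence on initial conditions in the logistic map, one of the simplest systems known to behave chaotically:

```python
# Illustrative sketch (not from the paper): two trajectories of the
# chaotic logistic map x -> r * x * (1 - x), started a billionth apart.

def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1.0 - x))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-9, 60)

# For the first few steps the trajectories are indistinguishable, but the
# tiny initial difference grows roughly exponentially, so within a few
# dozen steps the two trajectories bear no resemblance to each other.
early_gap = abs(a[3] - b[3])
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
```

Measuring the initial state to nine decimal places is far more precise than any real-world measurement of weather or society, and it still buys only a few dozen steps of predictability. This is the sense in which detailed long-range models of complex systems are unattainable.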
As Lightner and Hagen point out, one useful way of simplifying a complex system is to treat it as an intentional system, with beliefs, values, goals, and so on. In other words, you can usefully simplify a complex system by personifying it. This makes it easier not only to reason about the system but also to communicate about it. Personification, then, is not necessarily the result of a "hyperactive agency detector," as many of the byproduct theorists would have it. It is, rather, a cognitive strategy that we sometimes use to reason and communicate about complex systems. Supernatural explanations that personify aspects of the natural world are useful fictions that (when they work correctly) allow us to predict, control, and communicate about the world better than the alternative explanations available to us. As Lightner and Hagen put it:
…supernatural explanations are the ordinary and abstract output of our intuitive theories, which can assume a variety of increasingly abstract stances, or levels of explanation. Our capacity for an intuitive psychology, which generates anthropomorphic explanations, is especially well-suited for modeling unobservable, uncertain, and complex processes in terms of high-level concepts, such as intentions[…] We therefore propose that the utility of supernatural explanations is that, although they invoke entities that do not exist, they can usefully map onto parts of the abstract structure of the world. (p. 21)
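The predictive power of an intentional stance can itself be sketched in code. The toy below is my own hypothetical illustration, not from the paper: a device with messy hidden internals (a hysteresis band, a noisy sensor) is predicted remarkably well by the simple intentional model "it wants the room at 21 degrees," which knows nothing about those internals:

```python
import random

# Hypothetical toy example (mine, not from Lightner and Hagen): a device
# with hidden internals, predicted by an intentional-stance model that
# ignores those internals entirely.

class HiddenThermostat:
    """A thermostat with internals an observer cannot see:
    a hysteresis band plus a noisy temperature sensor."""

    def __init__(self, goal=21.0, band=0.5, seed=1):
        self.goal, self.band = goal, band
        self.heating = False
        self.rng = random.Random(seed)

    def step(self, temp):
        reading = temp + self.rng.gauss(0, 0.1)  # noisy sensor
        if reading < self.goal - self.band:
            self.heating = True
        elif reading > self.goal + self.band:
            self.heating = False
        # inside the band: keep doing whatever it was already doing
        return self.heating

def intentional_stance(temp, goal=21.0):
    """'It wants the room at 21 degrees': predict heating iff below goal."""
    return temp < goal

rng = random.Random(2)
temps = [15.0 + 12.0 * rng.random() for _ in range(1000)]
thermo = HiddenThermostat()
hits = sum(thermo.step(t) == intentional_stance(t) for t in temps)
accuracy = hits / len(temps)
```

The intentional model is false in its details (there is no "goal" variable inside the device it can see, no band, no noise), yet it predicts the device's behavior the vast majority of the time. That is the pragmatic sense in which a personifying model can be a useful falsehood.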
This thesis sheds new light on Jordan Peterson's 1999 book Maps of Meaning. In that book, he argued that mythological narratives implicitly portray the process by which individuals and societies update themselves in the face of anomalous information. The characters in these narratives, he argued, are personified representations of different aspects of this process (e.g., order, chaos, and the process that mediates between them). I have argued elsewhere (in a commentary currently under peer review) that the process portrayed in Maps of Meaning has the structure of a phase transition in a complex system. I made that argument before I knew anything about Lightner and Hagen's paper. This matters because, if Lightner and Hagen are right, we should expect the most fundamental mythological narratives to reflect the characteristic behavior of complex systems (i.e., phase transitions). I believe that this idea is implicit in Maps of Meaning.
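For readers unfamiliar with the term: a phase transition is a sudden qualitative change in a system's global behavior as some parameter crosses a threshold. A classic minimal example (my illustration, not from the commentary or the paper) is the Erdős–Rényi random graph, where adding edges one by one causes a giant connected component to emerge abruptly once the average degree passes 1:

```python
import random

# Minimal illustration (mine, not from the sources discussed): the abrupt
# emergence of a giant connected component in an Erdos-Renyi random graph,
# a textbook phase transition in a complex system.

def largest_component(n, m, seed=0):
    """Size of the largest connected component of a graph on n nodes
    after adding m uniformly random edges (union-find)."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a          # union by size
            size[a] += size[b]
    return max(size[find(i)] for i in range(n))

n = 20_000
subcritical = largest_component(n, int(0.3 * n))     # avg degree ~0.6
supercritical = largest_component(n, int(0.75 * n))  # avg degree ~1.5
```

Below an average degree of 1, the largest cluster stays tiny relative to the graph; just above it, a single component abruptly swallows a large fraction of all 20,000 nodes. Nothing about any individual edge changed; the qualitative behavior of the whole system did.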
In a future post I will go into more detail about this relation between Maps of Meaning and the structure of phase transitions. For now I simply want to point out that, at the time of writing Maps of Meaning, Jordan Peterson was way out of step with mainstream thought in the cognitive science of religion, which, as I pointed out at the beginning of this post, generally regarded mythological narratives as nothing but superstitious bullshit. Lightner and Hagen's paper, however, changes things. It is a high-profile publication in a prestigious journal putting forward a thesis that is not only compatible with the main ideas of Maps of Meaning but potentially supportive of them.