If you read this passage from Peterson’s Maps of Meaning through the hemispheric lens it takes on a significance that I think of practically weekly. Perhaps the censor is his right hemisphere, connected to the Tao and to myth, while the left is the analytical, articulate intellect: abstracted and confabulating. I believe that battle still rages within him, and you can hear it in the two different ways he speaks.
He’s writing about a breakdown in his earlier life:
“Something odd was happening to my ability to converse. I had always enjoyed engaging in arguments, regardless of topic. I regarded them as a sort of game (not that this is in any way unique). Suddenly, however, I couldn’t talk - more accurately, I couldn’t stand listening to myself talk. I started to hear a “voice” inside my head, commenting on my opinions. Every time I said something, it said something - something critical. The voice employed a standard refrain, delivered in a somewhat bored and matter-of-fact tone:
You don’t believe that.
That isn’t true.
You don’t believe that.
That isn’t true.
The “voice” applied such comments to almost every phrase I spoke. I couldn’t understand what to make of this. I knew the source of the commentary was part of me, but this knowledge only increased my confusion. Which part, precisely, was me - the talking part or the criticizing part? If it was the talking part, then what was the criticizing part? If it was the criticizing part - well, then: how could virtually everything I said be untrue? In my ignorance and confusion, I decided to experiment. I tried only to say things that my internal reviewer would pass unchallenged.
This meant that I really had to listen to what I was saying, that I spoke much less often, and that I would frequently stop, midway through a sentence, feel embarrassed, and reformulate my thoughts. I soon noticed that I felt much less agitated and more confident when I only said things that the “voice” did not object to. This came as a definite relief. My experiment had been a success; I was the criticizing part. Nonetheless, it took me a long time to reconcile myself to the idea that almost all my thoughts weren’t real, weren’t true - or, at least, weren’t mine.
All the things I “believed” were things I thought sounded good, admirable, respectable, courageous. They weren’t my things, however - I had stolen them. Most of them I had taken from books. Having “understood” them, abstractly, I presumed I had a right to them - presumed that I could adopt them, as if they were mine: presumed that they were me. My head was stuffed full of the ideas of others; stuffed full of arguments I could not logically refute. I did not know then that an irrefutable argument is not necessarily true, nor that the right to identify with certain ideas had to be earned.”
Brett,
Wonderful synthesis as always. In my own thinking I've been wrestling a lot with the " ... and thus" of these arguments, having spent a year or three in the well of incredible work that the authors you mention (yourself included) have been doing. Certainly John's "After Socrates" which I've been enjoying immensely is a very individual "... and thus." But to me the most interesting set of idea-meets-reality on this topic has been in AI. I would *love* to get your thoughts on how transformer-based LLMs have kind of reverse-engineered a bit of this -- quite literally improving their performance by learning how to tell what's important.
To my deeply ignorant and uneducated perspective (English major here), this seems to be right down the middle of the discussion. Everyone seems to be experiencing first hand the emergent properties of complex systems ... would love any connections your smarty-pants group of followers may be able to surface.
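The "learning how to tell what's important" mentioned above refers to the attention mechanism in transformers. As a rough illustration only (real models use learned projection matrices over many heads, not hand-set vectors), here is a pure-Python toy sketch of scaled dot-product attention for a single query, with all numbers made up for the example:

```python
import math

def softmax(xs):
    """Numerically stable softmax: exponentiate and normalize to sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Scores each key by its (scaled) dot product with the query,
    converts scores to weights with softmax, and returns the
    weighted average of the value vectors: 'relevance' as weighting.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return weights, out

# Toy example: the query resembles the first key, so the first value
# dominates the weighted average -- the model "attends" to what matches.
weights, out = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
```

The point of the sketch is only that the weighting is computed from the input itself: the model learns (via the projections omitted here) which parts of the context deserve weight, which is the "telling what's important" the comment gestures at.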
I'm currently reading an incredible piece on how the US government has turned the counter-disinformation complex against its own people (https://www.tabletmag.com/sections/news/articles/guide-understanding-hoax-century-thirteen-ways-looking-disinformation) and it's already taking up my time! And now you release your article with a title like "Relevance Realization, Cerebral Hemispheres, and the Reconciliation of Science and Mythology"?!
Thank you! I'm excited!
Wonderful article Brett, thanks for sharing your thoughts! A few things come to my mind:
1) There is overlap with Phil Tetlock's work on superforecasters. Phil uses the analogy of the hedgehog and the fox: the fox knows many things, the hedgehog just one big thing. He uses this metaphor to describe the fact that superforecasters are like foxes; they are able to change their mind in the face of contrary evidence. Hedgehogs, on the other hand, have an ideology and desperately try to fit the data to that ideology (Procrustes-like), which is how the left hemisphere functions (i.e., as an echo chamber).
2) The work of Gerd Gigerenzer and others shows that heuristics (like stereotypes) are often better for predictive purposes in radically uncertain environments. This fits nicely into your story. Hence, where Kahneman and Tversky want us to be like the left hemisphere, Gigerenzer balances that with his heuristics and thus with the right-hemisphere narrative.
3) Would it be fair to say that the two hemispheres are in an opponent processing relationship? Hence, they are at the edge of order and chaos, just like any complex system.
4) Finally, one thought. Our big-data world tries to use more and more data, assuming that, in doing so, we will make better decisions. I think this is highly questionable given the evidence you have presented here.
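Gigerenzer's claim in point 2 can be made concrete with his "take-the-best" heuristic: instead of weighing all available data, you walk through cues in order of validity and decide on the first cue that discriminates, ignoring everything else. A minimal sketch, with a hypothetical city-size comparison and entirely made-up cue data:

```python
def take_the_best(a, b, cues):
    """Gigerenzer-style 'take-the-best' decision rule.

    Walk through cues in order of validity and decide on the FIRST
    cue that discriminates between the two options, discarding all
    remaining information (fast and frugal, not exhaustive).
    """
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va > vb else b
    return None  # no cue discriminates; guess or defer

# Hypothetical example: which of two cities is larger, judged from
# binary recognition cues ordered by validity? (Data invented here.)
cities = {
    "A": {"has_team": 1, "is_capital": 1},
    "B": {"has_team": 1, "is_capital": 0},
}
cues = [
    lambda c: cities[c]["has_team"],    # most valid cue, checked first
    lambda c: cities[c]["is_capital"],  # consulted only if the first ties
]
guess = take_the_best("A", "B", cues)  # first cue ties, second decides
```

The connection to point 4 is direct: the rule deliberately throws away most of the data, yet in radically uncertain environments Gigerenzer's studies found such frugal rules can match or beat models that integrate everything, which is why "more data means better decisions" is questionable.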
Thanks Auke. I looked up Gigerenzer and it looks like I have some reading to do :)
Yes, I think they are in an opponent processing relationship. Maybe I should have used that language a little bit instead of just talking about tradeoffs.
Is it in any way tangential to Julian Jaynes's hypothesis of the breakdown of the bicameral mind?
As regards the left-right dichotomy, here is a likely silly example: I think I encountered it when trying to generalise my view of the world into a coherent system, which in turn angered Redditors... because my maps generalise and are subjective rather than precise.
I have seen Vervaeke's name in conjunction with AI research lately.
I'm glad you reminded me of that. Yeah I think Julian Jaynes was basically correct, or at least wasn't too far off from the truth.
This is a masterpiece Brett.
You have tied so many things together in a way that is incredibly instructive.
I have one question on a topic that I am still trying to reconcile. Do you think that Daniel Kahneman's Thinking Fast 'System 1' lines up with a particular hemisphere?
In a recent podcast Iain McGilchrist said something to the effect of 'everyone thinks the right hemisphere is the 'thinking fast' one, but it is the opposite, the right hemisphere is thinking slow'.
I wonder though if this is just a false distinction. Maybe Systems 1 & 2 don't fit neatly into a hemisphere? Something 'feeling wrong intuitively' is a right-brain mode, and that is extremely fast, so it doesn't reconcile for me. Using the wrong mental model is also fast, and that is left-brain behaviour. Kahneman describes System 2 as deliberate, logical, and analytical thinking, which sounds like very left-brain behaviour. Specifically, System 2 sounds to me like left-brain behaviour when it *builds* models, while System 1 sounds like left-brain behaviour when it *uses* existing models, and the cognitive errors come precisely because we use the wrong model. In other words, I see both System 1 and System 2 occurring in the left hemisphere. Perhaps both occur in both hemispheres?
We have a left vs. right brain duality, and so we may have a tendency to fit every other duality (such as System 1&2, Fast and Slow) into that, but perhaps that would lead us astray in this example.
Thank you Matt. I'm not a fan of System 1 vs. System 2. IMO, what we call 'intuition' or System 1 is not one thing that corresponds to one system, but many different systems in the brain that work unconsciously and intuitively. These may have little if anything to do with each other, either in their location in the brain or in how they work. E.g., you have 'moral intuitions' that make you automatically find some things morally repulsive, but also 'intuitions' that make you automatically find someone attractive, and social intuitions about how to behave appropriately, and there's no reason to think these occur in the same location in the brain or really have much to do with each other, except that they all work quickly and unconsciously. So I don't think there's any real relation to hemispheric differences there. Hopefully that quickly typed response was coherent.
Matt and I were talking about this last night. But McGilchrist specifically takes apart Kahneman in The Matter with Things.
“Evidence suggests that things are likely to be more complex than Kahneman’s two-system model suggests. Contrary to his assumption, reasoning based on beliefs, assumed to be automatic, can, it turns out, be effortful; while reasoning on the logical structure of an argument can be accomplished fairly automatically...
...As far as ‘fast’ and ‘slow’ thinking goes, jumping to conclusions (LH) is fast, but so is flawless intuition, as in the case of Franck Mourier (RH); following algorithms is slow (LH), but so, at least relatively speaking, is acting as devil’s advocate (RH)."
Good God, this is too important to get lost amid the thicket of internet information. I wish I were capable of re-expressing it in a way that would bring it to a much wider audience. In the meantime, I have to think about it.
If you are new to Brett, watch his riveting backstory here https://youtu.be/qNoYNRSnKEA
Have you heard of Lucy Suchman's work, Plans and Situated Actions?
Some highlights here https://twitter.com/mostlynotworkin/status/1408903455905705987
Ty for this piece. Listened to it via Substack audio. They just need the AI to take your interview voice and auto read :-)
Reminds me of PVK's frequent citing of the spirit of finesse (RH?) vs. the spirit of geometry (LH?)