I really enjoyed this essay. I'm pretty skeptical (perhaps too skeptical) of a lot of behavioral research, and, at the same time, I appreciate the time and effort you've devoted to exploring empirical support for the different theories at play here.
With respect to the section on Affirmations, my personal experience leads me to believe that the basic idea here is valid. Over the last few years, I've put a lot of time and effort into (what I sometimes think of as) reprogramming myself. At a very high level, this consists of reading books and listening to podcasts to seek out better thought patterns, and then implementing practices related to them (i.e., habits and behaviors). Most of these involve, in part, repeated invocations of the desired thought patterns.
Even just in the last couple months, I've noticed what feel like pretty substantial changes in my default perspectives on various parts of my life (e.g., relationships with friends, family, and coworkers), with the new defaults reflecting the new, better thought patterns, and indirectly reflecting the gradual fading of old, less functional/useful thought patterns.
So, N = 1 and all that. This has not at all been a scientific effort on my part, even if it has been pretty systematic, so take this with an appropriately-sized grain of salt.
Ah thanks Noah. If you don't mind, could you share what sort of stuff you've been doing to bring about these changes? You've piqued my interest :)
Happy to!
One of the key things has been meditation, which facilitates awareness of the stories I tell myself and the beliefs I have. It also helps me not identify too strongly with these stories and beliefs, which helps reduce the intensity of the emotional reactions associated with them (and which seems like one possible avenue for making one's identity small).
Another has been journaling, which I use, in part, to come up with and play around with words or short phrases to use as reminders for new, better thought patterns. So, for example, I use the phrase "simple description" as a reminder of the idea that events in the world don't have any inherent meaning, that my interpretations and judgments construct meaning. So, when I first started using this phrase, I would practice describing things that had happened using very simple, plain language to draw my attention to the bare facts of the matter and the distinction between these facts and any interpretation I was adding to them. Eventually, this became more automatized, so now I don't usually need to do any explicit describing.
I have a set of six or seven phrases that I use these days. In my journaling each morning, I reflect on how well I did with them the previous day, and I set my intention to practice them again today.
My Substack gets into some of this stuff. It is, broadly speaking, about my efforts to reprogram myself, focusing on how I have used a couple video games in this process.
Interesting.
One important dimension of this problem complex that I don't believe you addressed directly is the relationship between belief and knowledge.
Since Gettier, we know we can't glibly answer that knowledge is justified true belief. But if it isn't that, then what is it?
This is surely important to your thinking, because if you are proposing that we adopt beliefs more judiciously in the service of your better-life goal, what exactly does that mean in practice? Is there a general answer?
This leads us into questions of epistemology and the philosophy of science.
I believe there is ample evidence that, for all the failings of our rationality (see, for example, Mercier & Sperber's 'The Enigma of Reason'), science does 'work'. This remarkable social enterprise, whilst often flawed in its exercise, regularly uncovers remarkable, non-intuitively-obvious truths about reality.
For all the frailties of individual beliefs, humankind continues to make epistemic progress thanks to science.
The intriguing question is not whether science as a social enterprise makes such progress in refining the quality of its beliefs - it does.
The question is whether there is any difference between those who have and those who have not been exposed to and trained in epistemically rigorous disciplines. I don't doubt one can create experiments to show that, in various contexts, we are all prey to cognitive shortcomings. But do such people regularly attach greater weight to data and argument over a broader range of topics, and is this difference sufficient to argue that there is a material difference in cognitive operation with respect to belief formation?
I believe that there is.
It's funny you say this, because I actually just the other day wrote an essay on Mercier and Sperber's argumentative theory of reason. I was coming at it from the angle of 'why are scientists able to arrive at convergence of belief but philosophers aren't?' Borrowing a few ideas from Thomas Kuhn (and simplifying quite a bit), I basically made the case that scientists are able to use empirical data to disconfirm their beliefs, whereas philosophers can't. In the absence of this capability, it's extremely difficult to use reason alone to arrive at agreement. I'm going to polish it up but will be sharing it on here some time soon :)
Btw I don't know if you already subscribe to his content but the work of Dan Williams is excellent. He is a good example of a young contemporary philosopher who constantly accesses the empirical data available in various sciences to make his arguments. If you haven't yet subscribed I recommend him highly.
I certainly agree that reason without data is toothless.
I have posted little on substack but you might find what I have posted provocative. It does bear on your interests. I have just come to a rather disturbing conclusion. ))
The wonderful thing about science is that consensus emerges from discord and is always provisional.
On a point of detail, there are topics on which philosophers can converge on belief, e.g. it's presently hard to find anyone arguing that Gettier was wrong.
I understand the broad assertion that scientists can argue from data whereas philosophers can't, but I suggest that that too is overly broad. Philosophers can and do use empirical data, certainly in some fields more than others. Philosophers like Dennett or Paul and Patricia Churchland had constant recourse to empirical data, and the whole approach of synthetic philosophy has recourse to such data.
Not convinced by everything here. In particular I think narrative and emotion play a much more central role, and I also think there's likely different types of belief. But it's a great write-up!
It might also be worth looking into how experts in a domain form beliefs as opposed to undergrads in a lab. The Johnson and Seifert finding isn't at all surprising for undergrads, but it would shock me if it held true for people who work in fire insurance claims. Experts are better at integrating information across "fragments." It's also worth looking into Gary Klein's criticisms of Confirmation Bias. Often what's dismissed as Confirmation Bias is adaptive in real-life situations.
https://www.google.com/amp/s/www.psychologytoday.com/us/blog/seeing-what-others-dont/201905/the-curious-case-of-confirmation-bias%3famp
Thanks for the thoughtful reply (as always) Jared. When you say narrative has a bigger role, what sort of thing are you thinking of?
The main reason that I find the Johnson and Seifert experiment compelling is that I sometimes engage in this sort of reasoning myself - making inferences based on beliefs that I know, on reflection, are wrong. Do you have anything you can point me to re: how experts integrate information across fragments? Might be useful for something else I'm working on.
Also, the Gary Klein article is super interesting. I think I'm going to have to do some more digging here. I had assumed confirmation bias was a well-established thing, but maybe it's not as solid as I'd thought!
I've been doing a ton of research around the argumentative theory of reason, which I'm going to write up soon, and one of the main pieces of evidence they use to support their theory is that they're able to explain confirmation bias as a feature rather than a bug of reason.
The Enigma of Reason is IMHO brilliant.
The key takeaway is not the merits of the finer points of their argumentative theory of reason but the question they ask.
If reason is so wonderful, so powerful, then why is reason so flawed, and why on earth would evolution have served up a version of reason so flawed!?
Brilliant question.
This integration of philosophy and social science with the theory of evolution is an incredibly powerful insight.
Once grasped, this insight colors everything.
I'm thinking of Gary Klein's famous story where a firefighter walks into a burning house, but then gets a bad feeling and tells his team to get out. Moments later, the house collapses. Years later, the firefighter tells Gary that he thought it was ESP. The firefighter couldn't explain his own decision to evacuate except in mystical terms.
Gary interviews the firefighter and discovers that the firefighter had noticed some anomalies that day. The firefighter had believed the fire was in the kitchen, but then why was the living room so hot? And why wasn't the fire louder when he entered?
The anomalies seemed to be enough to trigger the firefighter to lose confidence in his understanding of the situation, and that loss of confidence is what triggered the decision to leave. No ESP required.
Basically, the firefighter had been telling himself a story that the fire was originating in the kitchen, and it was the anomalies inconsistent with that story which triggered the loss of confidence in it. I think a lot of beliefs are like this. Coherence with a story is the key thing we look for in beliefs. (see research into how juries make decisions)
Additionally, the firefighter's loss of confidence was not experienced quantitatively, but emotionally. He *felt* something was off, and it made him anxious enough to tell everyone to get out. There seems to be an emotional quality to confidence (and maybe that is what we are quantifying when we form priors). Gigerenzer talks about emotion as a "stopping rule."
"emotions can also function as heuristic principles for guiding and stopping information search. For instance, falling in love can be seen as a powerful stopping rule that ends the current search for a partner (at least temporarily) and strengthens commitment to the loved one. Similarly, feelings of parental love, triggered by one's infant's presence or smile, can be described as a means of preventing cost-benefit computations with respect to proximal goals, so that the question of whether it is worthwhile to endure all the sleepless nights and other challenges associated with baby care simply never arises."
I don't love the "stopping rule" framing, so here is perhaps an even worse analogy. Emotion often seems to act like a p-value. Intense feelings happen not just because you are confident that something is true, but as a sort of phenomenological p-value, signaling you have reached enough confidence to act as if the proposition were true.
"I don't know for sure that there is not someone who is a better match for me than my girlfriend, but I am willing to take that chance and commit to her anyways." -> "I love her."
"I don't know what is actually happening inside this burning building, but my fading confidence is sufficient to change my plan of action." -> "I have a bad feeling about this."
"I don't know for sure there is a god, but even a low probability of something that significant is enough to cause me to act" -> "I have faith" (In my experience, believers treat faith as more emotional than propositional. Personally, 20% confidence in an intelligent God would be (and is) more than enough to send me into existential and life-changing shock. My threshold for action is quite low when it comes to things which could be significant.)
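To make the analogy concrete, here's a toy sketch of the pattern running through all three examples: act once confidence in a proposition crosses a threshold, where the threshold shrinks as the stakes grow. This is purely illustrative of the analogy - the function and all the numbers are invented, not anything from the research discussed above.

```python
# Toy model of "emotion as a phenomenological p-value":
# an agent acts on a proposition once its confidence crosses a
# threshold, and higher stakes lower the bar for action.
# All numbers here are made up for illustration.

def action_threshold(stakes: float, base: float = 0.95) -> float:
    """Confidence required before acting; scales inversely with stakes."""
    return min(1.0, base / stakes)

def should_act(confidence: float, stakes: float) -> bool:
    """The 'felt' signal: True once confidence clears the stakes-adjusted bar."""
    return confidence >= action_threshold(stakes)

# Low-stakes claim: near-certainty is required before acting.
print(should_act(confidence=0.6, stakes=1.0))    # False
# Existentially significant claim: even 20% confidence clears the bar.
print(should_act(confidence=0.2, stakes=10.0))   # True
```

The design choice is the inverse scaling: it encodes the "my threshold for action is quite low when it comes to things which could be significant" intuition directly, though the actual functional form people use (if any) is an open empirical question.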
As for the idea that experts integrate fragments better, I had never heard the term "fragment" before so don't know specific research on that, and can't think of particular studies. But it is sort of just a known thing in Naturalistic Decision-Making.
We were interviewing an expert in Cyber Defense yesterday, and he talked about how the biggest difference between novices and experts is that experts are better able to keep track of and integrate all the relevant information. It's not just knowing the information, but being able to integrate it in the moment during an attack that sets experts apart.
I've also seen this on my own when I work with novice Behavioral Scientists. They often have difficulty designing solutions which meet all the business constraints and goals. They are focused on the various "effects" they learned in school, and are just trying to find ways to implement them, but they do so in ham-fisted ways that are not sensitive to the surrounding context. I imagine there has to be some research into how students generalize what they learned into new settings which talks about this.
(apologies for the length)
Thanks Jared. That's all super interesting. I'm gonna have to give one of Gary Klein's books a read one of these days.
Also, your p-value framing of emotion is connecting some dots in my brain. I'll have to think on it some more. Super interesting.