
(Scroll down for topic intro)

THE VENUE: Caffè Nero

It's winter so we will meet indoors for the next few months.

When we meet indoors, we run the same event in two locations, Caffè Nero and Starbucks, to provide capacity for as many people as would like to attend without overwhelming any one venue. There will therefore be two events published, and you can choose which one to attend. Please don't sign up for both. This event is for the Nero location.

We meet upstairs at Caffè Nero. An organiser will be present from 10.45. We are not charged for use of the space so it would be good if everyone bought at least one drink.

An attendee limit has been set so as not to overwhelm the venue.

Etiquette
Our discussions are friendly and open. We are a discussion group, not a for-and-against debating society. But it helps if we try to stay on topic. And we should not talk over others, interrupt them, or try to dominate the conversation.

There is often a waiting list for places, so please cancel your attendance as soon as possible if you subsequently find you can't come.

WhatsApp groups
We have two WhatsApp groups. One is to notify events, including extra events such as meeting for a meal or a drink during the week which we don't normally put on the Meetup site. The other is for open discussion of whatever topics occur to people. If you would like to join either or both groups, please send a note of the phone number you would like to use to Richard Baron on: website.audible238@passmail.net. (This is an alias that can be discarded if it attracts spam, hence the odd words.)

THE TOPIC: Can - and should - we choose our beliefs?

This week's topic was inspired by a recent question in the Reddit r/askphilosophy sub. Thank you to Richard for providing some supplementary thoughts and helping me tie it all together.

Evidentialism holds that beliefs ought to be proportioned strictly to evidence: you believe p to the degree that the evidence supports p. But many philosophers (e.g., those influenced by William James's "The Will to Believe," or discussions of doxastic involuntarism by thinkers like Bernard Williams) argue that belief isn't under direct voluntary control. We can't just decide to believe something contrary to what seems overwhelmingly evident to us, any more than we can decide to feel pain or see red as green.

This creates an interesting puzzle when it comes to rational belief revision and accusations of fallacious reasoning:

  • If someone says, "I know the evidence points against my belief in X, but I can't help believing it anyway; it's psychologically fixed," is it fair to accuse them of irrationality or some kind of epistemic fallacy (like wishful thinking, motivated reasoning, or even a performative version of the appeal to emotion)?
  • Conversely, if we insist that rationality requires immediately dropping or suspending a belief the moment counter-evidence appears, are we committing a kind of "voluntaristic fallacy", pretending that beliefs are like choices or actions that we can toggle at will, when in reality they are more passive responses to appearances?

Three examples were given:

  1. A philosopher strongly believes in moral realism because of deep intuitive seeming. They encounter powerful error-theoretic arguments (e.g., Mackie-style queerness arguments) and admit the arguments are strong, but they still can't shake the intuition. Is it fallacious or vicious for them to continue believing moral realism while saying "I wish I could believe otherwise, but the belief won't budge"? Or is forcing themselves to choose disbelief the real epistemic error?
  2. Someone raised in a highly religious community believes in God due to lifelong immersion and emotional salience. As an adult, they study philosophy of religion and find naturalistic explanations more parsimonious and evidentially supported. Yet the belief persists as a "gut-level" conviction. When challenged ("Why don't you just drop it if the evidence is against it?"), they reply, "It's not a choice; it's how things seem to me." Is this an instance of illicitly excusing irrational belief via appeal to involuntarism, or is demanding voluntary disbelief itself a misunderstanding of how doxastic attitudes work?
  3. Imagine a person who, after a bad breakup, believes their ex is "fundamentally untrustworthy" despite new evidence of the ex's growth and honesty. The belief feels involuntary ("I can't just flip a switch and trust them again"). Critics might call this confirmation bias or sunk-cost reasoning. But if belief isn't voluntary, is the correct response therapeutic (e.g., exposure to disconfirming evidence over time) rather than logical accusation of fallacy?

Firstly, can we choose our beliefs?

These seem to be deeply held beliefs with a strong emotional basis. What about beliefs that we are not deeply invested in, or where there is no urgent need to decide either way? We might then rationally weigh the evidence for and against and eventually come to a decision. Or not.

There is an indirect route to choice. Say you have been convinced by Pascal (in his Wager) that it would be to your advantage to believe in God. You cannot just decide. But spend time with religious people, go to church, and so on, and you may come round to belief.

James's central argument in "The Will to Believe" hinges on the idea that access to the evidence for whether certain beliefs are true can depend crucially upon first adopting those beliefs without evidence. As an example, James argues that it can be rational to have unsupported faith in one's own ability to accomplish tasks that require confidence, and he points out that this holds even for pursuing scientific inquiry. He then argues that, like belief in one's own ability to accomplish a difficult task, religious faith can be rational even if one currently lacks evidence for the truth of one's religious beliefs.

And it seems that we do change our beliefs, but it happens slowly and sometimes painfully, as we become open to and absorb new evidence.

Should we choose our beliefs?

There is a clear argument that we should not. Belief is supposed to aim at truth; the direction of fit is supposed to be mind-to-world. So gather the evidence, reason from it, and put up with whatever conclusion you reach. Wishful thinking as a way of forming beliefs is wicked in itself, since it undermines the very notion of belief, as well as being a route to folly.

But what if a bit of wishful thinking, which is psychologically bearable because contrary evidence is not staring you in the face, will make life much better, whether for yourself or for others? If you overestimate your ability at your job and that makes you more ambitious, or you overestimate the merits of your spouse and thereby avoid rows, is that such a bad thing? Maybe you can only do this with beliefs like these, the criteria for the truth of which are so vague that testing them properly would be impractical.

Then there is the problem that if we could choose our beliefs, and knew that we could, the nature of belief would change in our eyes to the point where it would no longer count as belief - a bit like promises not counting as promises if we thought there was no ethical problem with breaking them.

We look forward to seeing you on Sunday, if you believe there is sufficient evidence that the event will in fact take place.

Some background reading:
https://en.wikipedia.org/wiki/Doxastic_voluntarism
https://en.wikipedia.org/wiki/The_Will_to_Believe
https://iep.utm.edu/doxastic-voluntarism/
https://plato.stanford.edu/entries/doxastic-voluntarism/
