On May 2, 2026, our group of four continued exploring Steven Pinker’s Rationality, delving into probability. We can calculate the chances of an event occurring. Suppose two black and two white marbles are mixed in an urn, and two marbles are blindly drawn one at a time, with the first marble returned to the urn before the second draw. There are four equally likely outcomes: A. black then black, B. black then white, C. white then black, and D. white then white. Each outcome has a 25% chance of occurring. If the first marble drawn is black, this eliminates outcomes C and D, leaving a 50% chance for each of the remaining outcomes A and B. If either the first or the second marble drawn were black, this would eliminate only outcome D, leaving outcomes A, B, and C, each with a ⅓ chance.
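The conditioning above can be checked by simply counting outcomes. Here is a minimal sketch in Python (the `prob` helper and the `B`/`W` labels are our own illustration, not from the book), assuming the first marble is returned before the second draw so each draw is an even black-or-white chance:

```python
from fractions import Fraction
from itertools import product

# With the marble replaced between draws, each draw is black (B) or
# white (W) with equal chance, so the four ordered pairs are equally likely.
outcomes = list(product("BW", repeat=2))  # BB, BW, WB, WW

def prob(event, given=lambda o: True):
    """P(event | given), computed by counting equally likely outcomes."""
    kept = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in kept if event(o)), len(kept))

# Unconditionally, each ordered pair has probability 1/4.
print(prob(lambda o: o == ("B", "B")))                          # 1/4
# Given the first marble is black (eliminating C and D): 1/2.
print(prob(lambda o: o == ("B", "B"), lambda o: o[0] == "B"))   # 1/2
# Given at least one marble is black (eliminating only D): 1/3.
print(prob(lambda o: o == ("B", "B"), lambda o: "B" in o))      # 1/3
```

Counting the surviving outcomes after each piece of evidence is exactly what the conditional probabilities in the marble example amount to.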
Probabilities can also be calculated for any set of events, such as being attacked by a great white shark or struck by lightning. We can increase the odds of these happening by changing the circumstances, such as swimming in shark-infested waters or playing golf during a lightning storm. Would choosing to swim in shark-infested waters or to play golf during a lightning storm be a rational decision?
We can further refine the odds using Bayes' theorem, a formula for calculating conditional probabilities: it updates the probability of a hypothesis based on prior knowledge and new evidence related to that hypothesis. For example, what is the chance that a patient has cancer if he gets a positive result from a test with a 90% True Positive Rate, finding cancer in 9 out of 10 cancer patients, but also a 10% False Positive Rate, and if this cancer occurs at a rate of 1% in the population?
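In symbols, Bayes' theorem can be written as follows, where H is the hypothesis (the patient has cancer) and E is the evidence (a positive test) — the H/E notation is ours, not the book's:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

The numerator is the True Positives; the denominator is everyone who tests positive, whether they have the condition or not.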
Let’s see how Bayes’ Theorem works out using a population of 1,000 people. With a Population Base Rate of 1%, 10 people have cancer, leaving 990 people who do not. Since the test gives a True Positive result for 90% of people with cancer, 0.90 times 10 gives 9 people with a positive result. Out of the 990 people without cancer, the test mistakenly gives a False Positive to 10%, and 990 times 0.10 gives 99 healthy people a scary test result. Adding the 9 True Positives to the 99 False Positives gives a total of 108 people with a positive test. The probability that a patient with a positive test result has cancer is found by dividing 9, the number of True Positives, by 108, everyone who tested positive, giving the patient only an 8.3% chance of having cancer.
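The same head count can be run as a short calculation. This is just the worked example above in Python; the variable names are our own:

```python
# Numbers from the worked example: 1% base rate, 90% true positive rate,
# 10% false positive rate, population of 1,000.
population = 1000
base_rate = 0.01         # P(cancer)
true_pos_rate = 0.90     # P(positive | cancer)
false_pos_rate = 0.10    # P(positive | no cancer)

with_cancer = population * base_rate               # 10 people
without_cancer = population - with_cancer          # 990 people

true_positives = true_pos_rate * with_cancer       # 9 people
false_positives = false_pos_rate * without_cancer  # 99 people

# Of the 108 positive tests, only 9 belong to people who have cancer.
p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_cancer_given_positive:.3f}")  # 0.083, i.e. about an 8.3% chance
```

Note how the 99 False Positives from the large healthy majority swamp the 9 True Positives, which is exactly why neglecting the base rate is so misleading.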
Bayes’ Theorem updates the base rate of a hypothesis in light of new evidence. When this problem is given to doctors, many greatly overestimate the chance that the patient has cancer, increasing the likelihood of unnecessary surgery, radiation, and chemotherapy. What would be a prudent next step to take?
Auto and life insurance premiums are also examples of using Base Rate logic. Actuaries calculate insurance risks using demographics to set a starting price, which makes auto insurance expensive for teenagers, but their life insurance premiums are much cheaper. Is this fair? Would it be fairer to apply the same premium for everyone?
Probability theory goes back to the philosopher and mathematician Blaise Pascal, who proposed a wager as an argument, treating the existence of God as a probability problem. The wise decision, he argued, is to bet that God exists, since if God exists, you gain all; if God does not exist, you lose nothing. Is this the ultimate insurance policy? Is Pascal’s Wager less about the existence of God and more about Risk Management?
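Pascal's reasoning can be sketched as an expected-utility calculation. The payoffs below are illustrative assumptions of ours, not Pascal's own numbers: an infinite reward for correct belief and small finite stakes otherwise.

```python
import math

def expected_utility(payoff_if_god, payoff_if_no_god, p_god):
    """Probability-weighted average of the two possible payoffs."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_no_god

p_god = 0.001  # in Pascal's argument, any nonzero probability will do

# Believing carries a modest assumed cost of 1 if God does not exist;
# not believing forfeits the infinite reward if God does exist.
believe = expected_utility(math.inf, -1, p_god)
not_believe = expected_utility(-math.inf, 0, p_god)

print(believe > not_believe)  # True: belief wins for any p_god > 0
```

Because an infinite payoff multiplied by any nonzero probability is still infinite, the bet on belief dominates no matter how small p_god is — which is precisely what makes the wager feel more like risk management than theology.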
For that matter, is this a logical way to place a bet? Is this the way to make rational decisions? We invite you to find out more about Expected Utility in Steven Pinker’s Rationality: What It Is, Why It Seems Scarce, Why It Matters, BF441.P56 2021 on May 30, 2026, from 2:30 PM to 4:30 PM.