Group Discussion: Should Skeptics Defer to the Experts?


Details
We're currently hosting our discussions at Café Walnut, not too far from our summer meeting spot in Washington Square Park. The café is near the corner of 7th & Walnut in Olde City. The café's entrance is below street level, down some stairs, which can be confusing if it's your first time. Our group meets in the large room upstairs.
Since we're using the café's space, they ask that each person attending the meetup purchase at least a drink or snack. Please don't bring any food or drinks from outside. If you're hungry enough to eat a meal, they have more substantial fare such as salads, soups & sandwiches, which are pretty good, and their prices are reasonable.
The café is fairly easy to get to if you're using public transit. With SEPTA, take the Market-Frankford Line & get off at the 5th Street Station (corner of 5th & Market), and walk 2 blocks south on 5th and then turn right on Walnut Street and walk 1 block west. With PATCO, just get off at the 9th-10th & Locust stop and walk 3 blocks east. For those who are driving, parking in the neighborhood can be tough to find. If you can't find a spot on the street, I'd suggest parking in the Washington Square parking deck at 249 S 6th Street which is just a half block away.
----------------------------------------------------------
SHOULD SKEPTICS DEFER TO THE EXPERT CONSENSUS?
INTRODUCTION:
Those who've been involved in the skeptic movement and have seen debates between skeptics and those they consider "quacks" and "science denialists" probably know how often the "expert consensus" among scientists or scholars becomes a key issue. Skeptics tend to come down on the side of the expert consensus, while their opponents often argue the experts are paid off, engaging in groupthink, or otherwise compromised. This group discussion will focus on a series of short entries from RationalWiki (a wiki designed for the skeptic community), as well as a series of essays from well-known skeptic celebrities & up-and-coming skeptic bloggers, that deal with issues of expert consensus & contrarianism.
Although the "appeal to authority" and "appeal to majority" are normally considered informal logical fallacies that cannot prove a case definitively, we'll discuss cases where looking at what the majority of authorities on a subject believe can still give laymen an idea of what is more likely to be true. We'll also look at why disbelieving the expert consensus in non-scientific fields can be justified but is tricky, since it depends on a philosophical puzzle called the "demarcation problem". Lastly, we'll examine four debates in the skeptic community over whether laypeople are ever justified in disputing the expert consensus in empirical fields.
-----------------------------------------------------------------------------
DIRECTIONS ON HOW TO PREPARE FOR OUR DISCUSSION:
The outline for this discussion is a bit different than the usual ones. Instead of a series of videos, I've linked a series of articles. So that you don't have to read all of them, I've sketched out notes beneath each of them that summarize the articles' main points.
In terms of the discussion format, my general idea is that we'll address the topics in the order presented here. I figure we'll spend about 20 minutes each on Sections I-III, and then about 60 minutes on Section IV since it comprises 4 debates.
I. EXPERT CONSENSUS & LOGICAL FALLACIES: WHY ARE "ARGUMENT FROM AUTHORITY" & "ARGUMENT FROM MAJORITY" NOT ALWAYS LOGICAL FALLACIES? WHY SHOULD WE RELY ON EXPERT CONSENSUS RATHER THAN ON ONE OR TWO OF OUR FAVORITE EXPERTS? DOES THE FACT THAT SCIENTISTS HAVE BEEN WRONG BEFORE & THAT SCIENTISTS DON'T KNOW EVERYTHING MEAN THAT SCIENTIFIC CONSENSUS IS USELESS FOR ASCERTAINING WHAT'S TRUE?
- RationalWiki, "Argument from Authority"
http://rationalwiki.org/wiki/Argument_from_authority
RW clarifies that an argument from authority refers to two kinds of logical arguments:
(1) A logically valid argument from authority grounds a claim in the beliefs of one or more authoritative source(s), whose opinions are likely to be true on the relevant issue. Notably, this is a Bayesian statement -- i.e. per Bayesian probability, it is likely to be true, rather than necessarily true. As such, an argument from authority can only strongly suggest what is true -- not prove it.
(2) A logically fallacious argument from authority grounds a claim in the beliefs of a source that is not authoritative. Sources could be non-authoritative because of their personal bias, their disagreement with consensus on the issue, their non-expertise in the relevant issue, or a number of other issues. (Often, this is called an appeal to authority, rather than argument from authority.)
- RationalWiki, "Argumentum ad populum"
http://rationalwiki.org/wiki/Argumentum_ad_populum#Legitimate_use
In its entry for "argumentum ad populum" (argument from popularity), RW has a section on legitimate use that pertains to the scientific consensus:
What's the difference between "most people believe X" and a scientific consensus, which is, at the end of the day, "most scientists believe X"? Doesn't this make scientists out to be somehow superior to the rest of the population?
There are two significant differences:
(1) Scientific consensus doesn't claim to be true, it claims to be our best understanding currently held by trained professionals who study the matter in depth. Scientific claims for truth are always tentative rather than final, even if they are often very impressive tentative claims for truth.
(2) Scientific consensus is built upon a foundation of logic and systematic evidence - the scientific method - rather than popular prejudice. The consensus comes not from blindly agreeing with those in authority, but from having their claims thoroughly reviewed and criticized by their peers. (Note that even long-established scientific consensus can be overthrown by better logic and better evidence, typically preceded by anomalous research findings.)
- RationalWiki, "Internet Laws - Shank's Law" & Wikipedia, "Cherry Picking"
http://rationalwiki.org/wiki/Internet_law#Shank.27s_Law
https://en.wikipedia.org/wiki/Cherry_picking
The rationalist Scott Alexander coined the term "Lizardman's Constant" to refer to the small percentage (~4-5%) of people who will give crazy responses in polls for a variety of reasons, e.g. saying they believe the president is really a lizardman in disguise. RationalWiki notes that we can also find a similar percentage of credentialed academics who occasionally endorse irrational beliefs like creationism, homeopathy, ancient astronauts, or various outlandish conspiracy theories.
The realization that there is a small percentage of people with advanced academic degrees that can be found to endorse almost any irrational belief is known as "Shank's Law": "The imaginative powers of the human mind have yet to rise to the challenge of concocting a conspiracy theory so batshit insane that one cannot find at least one PhD-holding scientist to support it." Shank's Law is a good argument for favoring expert consensus rather than just allowing people to highlight only the experts that agree with their pet position.
Basing one's arguments for a position only on the experts that support one's preferred position is similar to the fallacy of "cherry picking", i.e. pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related cases or data that may contradict that position. In order to be robust, scientific theories should be based on the entire body of evidence, weighted according to the rigor of each study's methods.
- RationalWiki, "Science was wrong before" & "Science doesn't know everything"
https://rationalwiki.org/wiki/Science_was_wrong_before
https://rationalwiki.org/wiki/Science_doesn%27t_know_everything
A common objection to deferring to the scientific consensus is that "science has been wrong before". RW notes that this is an example of both the continuum fallacy (i.e. it misrepresents how science actually works by forcing it into a binary conception of "right" and "wrong" rather than assessing its accuracy on a continuum) and the nirvana fallacy (i.e. since science isn't perfect, the claim is that it's useless).
The "science was wrong" argument also conflates specific scientific theories with the entire methodology of science. That specific scientific theories can be proven "wrong" in the sense of falsification is a feature, not a bug, of the scientific method, as one of the differences between science and pseudoscience is that science is a self-correcting & cumulative enterprise, whereas pseudoscience holds onto claims despite evidence to the contrary.
The "science was wrong before" argument is also sometimes simply a red herring, in so far as it often has little to do with the specific subject at hand. For example, the fact that phlogiston theory was wrong has no bearing on whether or not evolution is correct.
A closely related argument is that "science doesn't know everything". There's usually one of two implications from this, both non sequiturs: (1) The implication that because science doesn't know everything, science knows nothing, and (2) The implication that, because science does not have an answer (or a sufficiently good answer) for a particular phenomenon already, any claim can take its place, even though it has no supporting evidence.
II. ARGUMENTS FOR DEFERRING TO THE EXPERT CONSENSUS & REMAINING AGNOSTIC IN THE ABSENCE OF EXPERT CONSENSUS:
- Chris Hallquist, "Trusting Expert Consensus"
http://lesswrong.com/lw/iu0/trusting_expert_consensus/
Hallquist suggests we should calibrate our confidence on empirical questions based on the strength of the expert consensus:
- When the data show an overwhelming consensus in favor of one view (say, if the number of dissenters is less than Scott Alexander's "Lizardman's Constant" (http://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/), i.e. the 4-5% or so of people who give crazy answers in polls), this almost always ought to swamp any other evidence a non-expert might think they have regarding the issue.
- When a strong but not overwhelming majority of experts favor one view, non-experts should take this as strong evidence in favor of that view, but there's a greater chance that evidence could be overcome by other evidence (even from a non-expert's point of view).
- When there is only barely a majority view among experts, or no agreement at all, this is much less informative than the previous two conditions. It may indicate agnosticism is the appropriate attitude, but in many cases non-experts needn't hesitate before forming their own opinion.
- Expert opinion should be discounted when their opinions could be predicted solely from information not relevant to the truth of the claims. This may be the only reliable, easy heuristic a non-expert can use to figure out that a particular group of experts should not be trusted.
- This last point means if a group of experts' opinions tend to all cluster into a standard political ideology, they should be discounted (i.e. we should reduce our credence, although not entirely to zero). This recalls Eliezer Yudkowsky's essay, "Policy Debates Should Not Appear One-Sided," which argues that we shouldn't expect to see a convergence of evidence around political policy issues because they usually deal with multi-factorial phenomena and so almost any course of action has both costs/risks & benefits, and these issues combine positive & normative elements. If experts in a field all prefer the same types of tradeoffs and have the same norms/values, this may make policy debates artificially appear one-sided. - http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
- Daniel Loxton, "What, If Anything, Can Skeptics Say About Science?"
http://www.skepticblog.org/2009/12/22/what-if-anything-can-skeptics-say-about-science/
Loxton, a skeptic who blogs over at Skepticblog, has an article very similar to Chris Hallquist's post on Less Wrong, and suggests a similar tiered set of responses for amateurs in light of expert consensus:
- Where both scientific domain expertise and expert consensus exist, skeptics are (at best) straight science journalists. We can report the consensus, communicate findings in their proper context — and that's it.
- Where scientific domain expertise exists, but not consensus, we can report that a controversy exists — but we cannot resolve it.
- Where scientific domain expertise and consensus exist, but also a denier movement or pseudoscientific fringe, skeptics can finally roll up their sleeves and get to work. This is where skeptics can sometimes do even better than scientists, due to their familiarity with popular science rhetoric (i.e. shorter "inferential distance" from the audience) and experience with debunking pseudoscience.
- Where a paranormal or pseudoscientific topic has enthusiasts but no legitimate experts, skeptics may perform original research, advance new theories, and publish in the skeptical press. In this shadowy, fringe realm, skeptics can indeed critique working scientists. There is no mainstream of consensus science on, say, ghosts; skeptics are the relevant domain experts.
- Daniel Loxton, "Due Diligence: Never Say Anything That Isn't Correct"
http://www.skepticblog.org/2010/02/16/due-diligence/
The title gives you the general gist, but he lays out a specific series of questions skeptics should ask themselves before making public pronouncements on scientific matters:
- Do I have the expertise to express an opinion about this?
- Have my facts been reviewed by anyone who knows what they're talking about?
- Would those I'm critiquing agree that I've described their position accurately?
- Have I given undue weight to fringe positions?
- Have I given enough weight to criticisms of my own position?
- Have I accurately described the uncertainties and assumptions of my position?
- Am I using arguments that science has already considered and debunked?
- Have I sought out the primary sources?
- Can I prove what I'm saying? (Really? Am I sure?)
- Michael Shermer, "Consilience and Consensus"
http://www.michaelshermer.com/2015/12/consilience-and-consensus/
Shermer explains the value of expert consensus, and argues that it's strongest when we see "consilience" - i.e. a convergence of evidence from various scientific fields that all points to the same conclusion. He explains that consilience matters more than a poll showing that a majority of experts favor a certain view: when multiple independent lines of inquiry all converge on a single conclusion, that conclusion has a very high probability of being correct.
Shermer explains that "AGW [anthropogenic global warming] doubters point to the occasional anomaly in a particular data set, as if one incongruity gainsays all the other lines of evidence. But that is not how consilience science works. For AGW skeptics to overturn the consensus, they would need to find flaws with all the lines of supportive evidence and show a consistent convergence of evidence toward a different theory that explains the data. (Creationists have the same problem overturning evolutionary theory.) This they have not done."
III. THE COURTIER'S REPLY, THE MYERS SHUFFLE, GISH GALLOPS & THE DEMARCATION PROBLEM: CAN YOU SAFELY IGNORE THE "EXPERT CONSENSUS" IN NON-SCIENTIFIC FIELDS? HOW DO WE DETERMINE WHICH FIELDS ARE NON-SCIENTIFIC? HOW MUCH READING DO YOU HAVE TO DO BEFORE CONCLUDING THAT AN ARGUMENT IS MOSTLY BUNK?
- RationalWiki, "Courtier's Reply"
http://rationalwiki.org/wiki/Courtier%27s_Reply
The Courtier's Reply is a term popularized by biologist/blogger PZ Myers to describe an informal logical fallacy that boils down to: "But you haven't read enough on it!" His answer to the fallacy is to say that telling a non-believer that he should study theology before he can properly discuss whether a god exists is like scolding the child in the fable "The Emperor's New Clothes" who cries out that the Emperor is naked. It's as if a courtier of the Emperor argued that anyone claiming the Emperor is naked must first study a library full of texts that describe the intricate details of the Emperor's clothes. Essentially, it's a particularly ham-handed argument from authority in which the position's proponent attempts to bury the opponent under a pile of detail that is largely irrelevant to the opponent's argument.
RW notes that denunciation of this particular fallacy is quite easy to misuse. Whenever someone is told to read more about a subject they disagree on, it is easy for them to accuse their contradictors of giving a "Courtier's Reply". The element of the Courtier's Reply that gets forgotten here is that it asks the questioner to "read more" from material that presupposes the very point in dispute (i.e. it begs the question). It therefore cannot be invoked, for instance, by people who simply dislike being told to read more about global warming when they deny it. In addition, it is not fallacious to tell, say, a creationist to read more on evolution if they clearly do not understand what they are talking about and are basing their arguments on false premises (such as believing that abiogenesis = evolution).
- Scott Alexander, "The Courtier's Reply and the Myers Shuffle"
http://squid314.livejournal.com/324594.html
For a fairly detailed critique of PZ Myers' use of the "Courtier's Reply" to shut down Christians who tell him he needs to read & understand the major tracts in theology, check out the above essay by Scott Alexander. Scott links to an essay by the Roman Catholic philosopher Edward Feser where he called the "Courtier's Reply" a rhetorical "pseudo-defense" employed as a "clever marketing tag" in order for members of the New Atheist movement to avoid criticism of their arguments. Feser terms PZ Myers' use of the Courtier's Reply "the Myers Shuffle".
Feser's "Myers Shuffle" criticism holds that the Courtier's Reply rhetoric is usually a bundle of logical fallacies and sophistry. He characterizes the assertion that someone's philosophical or theological ignorance is irrelevant when the existence of God is disputed as a case of the special pleading (https://en.wikipedia.org/wiki/Special_pleading) fallacy. Feser further claims that asserting that the "average believer" is not well informed about theology is a red herring (https://en.wikipedia.org/wiki/Red_herring), since whether something is true does not depend on how many people believe it.
Although he's an atheist, Scott thinks Feser is making a good point. Any version of the Courtier's Reply strong enough to shut down people who want to force you to spend the rest of your life studying pseudoscientific theories is also strong enough to shut down people who are correct and merely want you to have some idea what you're talking about before engaging in a debate about a legitimate scholarly field. Scott says the best solution he can think of is to read books in areas where one's opinion differs from the opinion of a bunch of other people whom one considers relatively smart and rational. This suggests the theist should read more books about Darwinism, since all those Nobel Prize winners and biology Ph.Ds believe it. The atheist should read more books about religion, since many people whom one would otherwise judge as smart and rational believe that too.
- RationalWiki, "Gish Gallop"
https://rationalwiki.org/wiki/Gish_Gallop#How_to_respond
The Courtier's Reply bears resemblance to another debate tactic employed by creationists, the "Gish Gallop." Also known as "proof by verbosity", the Gish Gallop is the tactic of drowning your opponent in a flood of individually weak arguments in order to prevent rebuttal of the whole collection without great effort. The Gish Gallop is a belt-fed version of the "on the spot fallacy" (https://rationalwiki.org/wiki/On_the_spot_fallacy), as it's unreasonable for anyone to have a well-composed answer immediately available for every argument presented in the Gallop. The tactic is named after creationist Duane Gish, who frequently employed it. Although it takes a trivial amount of effort on the Galloper's part to make each individual point before skipping on to the next (especially if they cite from a pre-concocted list of Gallop arguments), a refutation of the same Gallop will likely take much longer and require significantly more effort (per the basic principle that it's always easier to make a mess than to clean it up).
RW suggests several possible ways to rebut a Gish Gallop without trying to respond to every single point. The three best methods are probably the small-sample rebuttal (the first 10 arguments, or 10 arguments picked at random), the overriding-theme rebuttal, and the best-point rebuttal. This can be considered a practical way to apply Scott Alexander's recommendation to read more from our ideological opponents: we typically can't afford to read everything our opponents have ever written on the topic in question, but we can read a sample of it or ask them which books or articles they consider the most authoritative.
- RationalWiki, "Demarcation Problem"
http://rationalwiki.org/wiki/Demarcation_problem
So if the Courtier's Reply should only be directed at non-science & pseudoscience, how do we clearly distinguish those from real science? That's the "demarcation problem". It is one of the central topics in the philosophy of science, and it has never been fully resolved. In general, though, a scientific theory must be:
- falsifiable (http://rationalwiki.org/wiki/Falsifiability)
- parsimonious (http://rationalwiki.org/wiki/Occam%27s_Razor)
- logically consistent (https://en.wikipedia.org/wiki/Consistency)
- reproducible (http://rationalwiki.org/wiki/Reproducibility)
- For a more in-depth treatment of the demarcation problem within skepticism, check out Massimo Pigliucci & Maarten Boudry's "Philosophy of Pseudoscience: Reconsidering the Demarcation Problem".
http://rationallyspeaking.blogspot.com/2013/08/philosophy-of-pseudoscience.html
IV. FOUR DEBATES OVER WHETHER SKEPTICS SHOULD EVER DISPUTE THE EXPERT CONSENSUS IN A FIELD THAT'S EMPIRICAL:
- Julia Galef, "Should Non-Experts Shut Up? A Skeptic's Catch-22"
http://rationallyspeaking.blogspot.com/2010/07/should-non-experts-shut-up-skeptics_14.html
Julia Galef, co-founder of the Center for Applied Rationality (CFAR) and host of the "Rationally Speaking" podcast, thinks intelligent laypeople trained in logic can occasionally spot problems with the expert consensus, especially in the social sciences. The Catch-22 Julia describes is as follows: "The only people who are qualified to evaluate the validity of a complex field are the ones who have studied that field in depth — in other words, experts. Yet the experts are also the people who have the strongest incentives not to reject the foundational assumptions of the field, and the ones who have self-selected for believing those assumptions."
Julia suggests 2 possible ways around this Catch-22: (1) find people who are experts outside of the field in question but are experts in the particular methodology used by that field and see how they judge whether that field is applying the methodology soundly (e.g. have statisticians judge stat-based sociology research), and (2) see if the field in question makes testable predictions and has a good track record of correct predictions.
Julia has an interesting debate with Massimo Pigliucci, a philosophy professor at CUNY, and several others in the comments section. Julia thinks there may be a way for non-experts to judge whether the foundations of an academic field are solid, whereas Massimo doubts that non-experts can ever judge the consensus without spending several years mastering the basics of the field.
- NOTE: For a look at a more extended debate between Julia Galef & Massimo Pigliucci on this issue, check out Episode #16 of the Rationally Speaking podcast entitled "Deferring to Experts".
http://rationallyspeakingpodcast.org/show/rs16-deferring-to-experts.html
-----------------------------------------------------------------------
- Neuroskeptic, "I Just Don't Believe Those Results"
Neuroskeptic (NS) is an anonymous neuroscientist who blogs at Discover Magazine and is often critical of the findings in his field. NS points out that he occasionally judges the validity of studies by their results, and asks whether this sort of skepticism is justifiable: "If I'm free to decide that a result is just unbelievable, how am I any different from (say) a creationist who maintains that it's just too incredible that natural selection produced humans and other life? To put it another way, how can I call myself a scientist if I sometimes reject scientific evidence that conflicts with my intuitions?"
He notes that disbelieving strange results can be justified in terms of Bayesian probability: "We would say that my 'prior probability' of a [a particular result] is very low. If my prior is low, it is perfectly rational for me to remain unconvinced after seeing one study in favor of [a highly unexpected result] — it might take ten such studies to convince me."
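Neuroskeptic's point can be sketched numerically with odds-form Bayesian updating. The prior of 1% and the likelihood ratio of 3 per supportive study below are hypothetical numbers chosen for illustration, not values from the post:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | hypothesis true) / P(evidence | hypothesis false)
    """
    prior_odds = prior / (1.0 - prior)          # convert probability to odds
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)  # back to probability

# Hypothetical numbers: a very low prior (1%) and a modest
# likelihood ratio of 3 for each independent supportive study.
prior = 0.01

p = bayes_update(prior, 3.0)
print(f"after 1 study:    {p:.3f}")   # still under 3% -- rational to stay unconvinced

p = prior
for _ in range(10):                   # ten independent supportive studies
    p = bayes_update(p, 3.0)
print(f"after 10 studies: {p:.3f}")   # now above 99%
```

A single study barely moves a very low prior, while ten independent studies overwhelm it, which matches NS's "it might take ten such studies to convince me."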
- Patrick Watson, "Neuroskeptic: Science Hipster"
https://medium.com/@patrickdkwatson/neuroskeptic-science-hipster-50a9ff1c1dca#.8ms4ejgu8
Patrick Watson compares Neuroskeptic to a hipster who's always interested in the cutting edge and has such discerning taste that even what most of us would consider good still isn't good enough. He says he respects NS's skepticism on neuroscience, but points out that NS is a "trustworthy expert selling himself as an outsider. Like all experts, he sells himself as having special access to the truth. He doesn’t, but we should still trust his ideas (mostly), because he has proved reliable in the past, and has done a good job of explaining his reasoning. If he has concerns about peer-review and about science journalism we should listen. His predictions aren’t good because they’re doubtful. They’re good because his rat-smeller is finely honed."
(In terms of Bayesian probability, this means that Neuroskeptic's prior probability of a result being true or false is probably much closer to the underlying reality than the average layperson's — or at least his prior is closer to the current expert understanding of the underlying reality.)
However, Watson doesn't think this type of skepticism towards the results of scientific research can be respected when it comes from people without professional credentials like anti-vaxxers, climate-change deniers, and creationists — as he says, "These groups need less critical thinking, and more credulity".
----------------------------------------------------------------------
- David Gorski, "On The 'Right' to Challenge Medical or Scientific Consensus"
https://sciencebasedmedicine.org/on-the-right-to-challenge-a-medical-or-scientific-consensus/
David Gorski summarizes an earlier debate between science journalists Chris Mooney and John Horgan about whether or not laypeople have the "right" to challenge scientific expertise. Gorski points out that while laypeople have the legal & moral right to challenge the expert consensus based on exercising their "freedom of speech", this doesn't mean that others can't use their freedom of speech to point out that the questioning of a scientific consensus by a non-expert isn't very valuable in most cases.
Gorski explains that another major point of contention between Mooney & Horgan was whether insiders or outsiders are more objective judges of a scientific field's results. Mooney, drawing on the work of the sociologist Harry Collins, argues that even well-read laypeople who have some "primary source knowledge" still don't have the "interactional expertise" to tell whether a paper is being taken seriously by the scientific community or not. (Gorski's reasoning here aligns with the Science-Based Medicine blog's tendency to argue that expert consensus & basic science are the best guides to the "prior plausibility" of a particular study's results.)
- Note: For a more in-depth look at the research of Harry Collins into scientific expertise, check out Chris Mooney's article and the embedded podcast interview: https://www.motherjones.com/environment/2014/05/harry-collins-inquiring-minds-science-studies-saves-scientific-expertise/
- John Horgan, "Journalist Chris Mooney Is Wrong, Again, about 'Experts'"
Horgan didn't respond to Gorski, but he did respond to a later article by Mooney entitled "The Science of Why You Should Really Listen to Science and Experts". Mooney cites the psychologist Philip Tetlock's research as part of "a growing trend toward robustly defending and reaffirming the importance of experts." Horgan finds Mooney's citation of Tetlock bizarre, because Tetlock's 2005 book Expert Political Judgment—far from a defense of experts—is a devastating critique of them. Tetlock researched the accuracy of political forecasts made by 284 professional pundits who regularly comment on politics in both scholarly journals and the mass media, and he found that most of them did no better than "a dart-throwing monkey" (i.e. chance).
How can Mooney possibly interpret Tetlock's book as a defense of experts? He seizes on Tetlock's finding that a few experts were significantly better forecasters than chance would predict. They tended to be not what Tetlock calls "hedgehogs," who explain the world in terms of one big unified theory, but "foxes." Foxes, Tetlock explains, "are skeptical of grand schemes" and "diffident about their own forecasting prowess." In other words, the most credible experts are those who are wary of experts. Mooney is oblivious to this irony and blithely concludes, "experts... really are different from non-experts. Now, all we have to do is listen to them." Horgan, however, draws the opposite conclusion: "Think for yourself."
[NOTE: Notice how both Mooney & Horgan improperly tried to generalize the results from Tetlock's political forecasting research to all of science, and they both drew incorrect conclusions — believe all experts or believe none of them.]
---------------------------------------------------------------------
- Richard Carrier, "On Evaluating Arguments from Consensus"
http://www.richardcarrier.info/archives/5553
Richard Carrier is a classical historian who argues that Jesus Christ was a myth rather than a historical figure, which is a contrarian position within the field of biblical history. Carrier argues that intelligent, rational laypeople trained in logic can evaluate both an individual expert's opinion and any expert consensus. He notes that both laypeople & experts "need to make these evaluations without themselves having to re-do all the research and study that that consensus is based on, as otherwise we would be demanding an absurd scale of inefficiency in the expert group, by nixing the ability to divide their labor, and instead requiring every expert to reproduce all the work of every other expert, a patent impossibility."
Carrier argues that "a consensus has zero argumentative value when the individual scholars comprising that consensus have neither (a) examined the strongest case against that consensus nor (b) examined enough of it to be able to identify and articulate significant errors of fact or logic in it. So it is fallacious (indeed, a conspicuously unreliable practice) to just cite the consensus on anything, without first ascertaining whose opinions within that consensus actually count... The second cull comes from eliminating from the pool of experts to count, those who articulate their reasons for their conclusion and those reasons are self-evidently illogical (you can directly observe their conclusion is arrived at by a fallacious step of reasoning) or false (you can reliably confirm that a statement of fact they made is false)." Carrier says that this type of evaluation can help determine which experts should be counted when trying to ascertain the consensus, as well as the relative strength (or weakness) of the expert consensus.
- Aaron Adair, "Critical Thinking and Expert Consensus"
https://gilgamesh42.wordpress.com/2014/07/05/critical-thinking-and-expert-consensus/
Aaron Adair criticizes Richard Carrier & others who think they can evaluate the expert consensus outside their own field. He grants that Carrier's examples of classical historians making fallacious arguments are valid, but asks: "if you are not a historian, let alone a classicist, how would you know to even evaluate this argument? Or even prior to that, why would you think it was suspect? It's only when you have the background knowledge that you realize how bad the position is. Thus even a critical thinker would read past [most logical errors] without any red flags popping up. On the other hand, if you are suspicious of every statement by [a given expert], then you will have to effectively redo his research, and in that process you will need to become an expert, the very thing you hoped to avoid in order to save time."
Adair raises several other problems with non-experts trying to evaluate experts:
(1) "You can’t just look it up. First, you have to know what to look up. What would be a relevant fact, what would not? You may not even realize what are the sorts of things you need to know in order to evaluate a claim." Adair references a concept from constructivist educational theory, the "zone of proximal development" (ZPD), which is the difference between what a learner can do with and without help - i.e. if a question is within a person's ZPD, it essentially means they can only grasp it with the help of someone more qualified, and that they'll probably go astray if they try to handle it alone.
https://en.wikipedia.org/wiki/Zone_of_proximal_development
(NOTE: Adair doesn't mention this, but a similar concept to ZPD is "inferential distance" — the gap between the background knowledge and epistemology of a person trying to explain an idea, and the background knowledge and epistemology of the person trying to understand it. If the inferential gap between the experts who normally handle a topic and the interested layperson is too great, there are too many layers of knowledge for the layperson to traverse by themselves. - https://wiki.lesswrong.com/wiki/Inferential_distance )
(2) "You also need to know if your source is reliable or not. Unless you have background knowledge about what are good sources for your subject of interest, you will have a hard time. If you are ignorant, you can’t really tell the difference between bad and good sources; you don’t have the background knowledge to say something seems fishy."
(NOTE: Point #2 is similar to Harry Collins's concept of "interactional expertise" cited by David Gorski & Chris Mooney above.)
(3) "You also have the problem of finite working memory: you can only have a few ideas in your conscious memory at any given moment. If you are looking up fact after fact you will forget things. You will also have a very hard time remembering and evaluating facts and arguments if it’s all new and sudden."
(4) "Given the research from political science... more information can polarize people. Worse still, the more educated/knowledgeable/rational people become the most polarized by the same data."
- For a more in-depth look at Richard Carrier's arguments for allowing informed, rational laymen to scrutinize the expert consensus, check out the "Inspiring Doubt" podcast where Greg Breahe interviews Carrier:
