Plato's Cave - The Orlando Philosophy Meetup Group Message Board

Any thoughts on epistemic solutions to the problem of solipsistic / Pyrrhonian degrees of skepticism, fellow philosophers?

Ben Forbes G.
Kissimmee, FL
Post #: 299
I've been thinking about philosophical as well as methodological skepticism a lot recently (about the crucial importance of both as well as their limits and problems). Sound skeptical epistemology is essential in refuting illogical, demonstrably unlikely, or pseudoscientific ideas — but it's also important to reflect more fundamentally (and theoretically rather than in practice) on what constitutes a reliable epistemic framework, superior evidentiary criteria, and ideal standards of justification.

I'll start off by addressing one formulation of the problem of solipsistic / Pyrrhonian degrees of skepticism famous in philosophical circles (which partially inspired The Matrix): according to Hilary Putnam, if we are brains in a vat, then the sentence — “we are brains in a vat” — would be a self-refuting supposition. One of the major concepts at issue here, on his view, is reference. Putnam argues that, like primitive peoples who believe in spirit names that must be kept secret so that others can’t gain magical powers over any individual, we sometimes erroneously “operate with a magical theory of reference, a theory on which certain mental representations necessarily refer to certain external things and kinds of things” (Putnam, 187, 198). He holds that concepts have only “contextual, contingent, [and] conventional” connections with their referents (Putnam, 187). Putnam’s argument attempts to derive logical conclusions from empirical assumptions; in doing this he operates within some of Quine’s dogmas of empiricism: the mind/world dichotomy and the necessary/contingent dichotomy, as I see it. In the case of the latter, though, Putnam tries to bridge the “gap” in a neo-Kantian way, failing to collapse it entirely but not really trying to either.

What I most take issue with in Putnam’s argument, however, is the tactic of relying heavily on hyper-philosophical contrived examples or unlikely possible worlds. This seems to me perhaps the most dangerous kind of “magical thinking” when entering into an investigation “of the preconditions of reference and hence of thought — preconditions built in to the nature of our minds themselves, though not… wholly independent of empirical assumptions” (Putnam, 199). Putnam’s problem is how to relate concepts to external objects: not intrinsically or a priori, and not by normal everyday empirical examples, but by absurd, theoretically possible empirical counterexamples which constitute exceptions to generalizations that could otherwise be made. Moreover, Putnam uses the problem of the criterion in what seems to me an excessive way to problematize justifications for referential thinking. The exception to this is the second type of tree example Putnam gives, which I find somewhat compelling. It demonstrates that because I can’t tell elm and beech trees apart “the determination of reference is social and not individual… meanings just aren’t in the head” (Putnam, 201).

Interestingly, then, unlike similar Cartesian examples, Putnam’s examples seem to depend on the existence of multiple brains in a vat having a “collective hallucination.” Thus, even when “signs are ‘mental’ and ‘private,’ the sign itself apart from its use is not the concept. And signs do not themselves intrinsically refer” to anything actual (Putnam, 200). Disagreeing with phenomenologists, Putnam holds that “one’s understanding of one’s own thoughts… is not an occurrence but an ability” (Putnam, 202). This relates to Turing’s test and a few of his other examples too.

Ultimately, Putnam wants to argue, from an omniscient onlooker perspective, that “the use of ‘vat’ in vat-English has no causal connection to real vats (apart from the connection that the brains in a vat wouldn’t be able to use the word ‘vat,’ if it were not for the presence of one particular vat — the vat they are in…)” (Putnam, 198). The problem with this, however, is the concept of this “causal connection” and from what perspective we can come to know it. Putnam provides no satisfactory reason, it seems to me, why it is so problematic that (when we say that we are brains in a vat) we actually mean something else, something beyond what we could express from a brain-in-a-vat perspective. I agree, though, that the statement would be false due to the brain in the vat’s ignorance about its condition.

Putnam, Hilary. “Brains in a Vat.” In Knowledge and Inquiry: Readings in Epistemology, edited by Brad Wray. Toronto: Broadview Press, 2002.

  • Other important aspects of this philosophical problem (or related topics to consider) would include use/mention errors and problems of “confusing the map for the territory” — and it would also be interesting to discuss Foundherentism as a possible epistemic alternative to potentially pragmatically “escape” the problem of infinite regress while avoiding dogmatic foundationalism or logically circular coherentism (and thereby perhaps eschewing the Münchhausen trilemma, aka Agrippa the Skeptic's trilemma). Finally, what about logical absolutes and tautologies?
Rami K.
Orlando, FL
Post #: 547
As framed, there is no account of where the 'brains' come from, where the memories come from, or where experience comes from. Allow me to introduce you to Edward Feser.
Ben Forbes G.
Kissimmee, FL
Post #: 367
Are you talking about Hilary Putnam’s “Brains in a vat” (the thought experiment) — or are you talking about animal brains such as humans’? … because we actually do have an excellent account: the currently most plausible, evidence-based evolutionary understanding of how brains have almost certainly evolved, where memories of experiences are stored, etc. However, in the thought experiment, it matters little for the sake of the Pyrrhonian-skeptic/solipsistic argument (i.e. the “Brains in a vat” could have evolved naturally or been somehow grown artificially — but were then presumably harvested into, artificially transplanted into, and/or grown in the vat by an evil-trickster, “deus deceptor” / Cartesian-demon, an evil alien, a mad-scientist, etc.). Regardless, in either scenario (empirical reality or “Brains in a vat”) memories and experience clearly are experienced in the brain.

Regarding Edward Feser… an Aristotelian/Thomistic theistic Catholic perspective is rather primitive and unrealistic, in terms of ontological philosophy… Catholic “philosophers” (theologians, really) like to accuse atheists of being “unserious” with seemingly clever but actually fallacious sophistry and apologetics. It’s clear from Feser’s website that he sloppily misunderstands and criticizes straw-man versions of a variety of atheistic arguments. Catholics (and monotheists, in general) are also risibly sanctimonious in their arrogance in assuming that their idiosyncratic versions of magical thinking are purportedly “superior” to paganism, polytheism, animism, etc. (which they either literally demonize or else spuriously dismiss as false “superstition”) — but they have no good evidence for this supercilious supposition. A sarcastic response is definitely warranted in responding to Christians of any denomination who imagine that their theology constitutes anything even close to rigorous philosophy. Historically, there is an interesting difference between propitiation and supplication (animisms, polytheisms, and “paganisms” in general tend toward the former in certain significant ways, while monotheism strongly tends toward the latter, but this distinction isn’t absolute in all cases). Regardless, both are deplorable fraud that takes advantage of well-understood cognitive errors such as confirmation bias and intermittent reinforcement in order to perpetuate naïve and unwarranted supernatural beliefs. In general, there is a sense in which the more “supernatural” a God-concept becomes, the more inscrutable, unfalsifiable, vacuous, and useless it becomes… but the harder it is to conclusively disprove (unless it can be absolutely disproved by inherently self-contradictory definition or adequately inductively refuted due to definitions demonstrably incompatible with reality).

Furthermore, most of us “know” reality is real (or at least we operate as if we do), but the epistemic problem I am posing on this thread is: how can we best epistemically justify that? Dogmatic divine foundationalism is epistemically unjustifiable, fallacious special-pleading, and does not actually “solve” the problem of the criterion — and religious supernatural “explanations” of consciousness only stupidly raise more questions than they purport to “answer”… Some much more interesting possibilities to consider (besides patently asinine and illogical orthodox religious beliefs codified by deluded and sanctimoniously myopic medieval priests or modern apologists) include foundherentism, various versions of contextualism, less extreme versions of skepticism, etc. I’ve posted several rigorous philosophical treatises in the files section pertaining to the fundamental philosophical question of epistemology…
Ben Forbes G.
Kissimmee, FL
Post #: 368
Speaking generally, the deductive versus inductive methodological distinction is an interesting one... A typical object or target of epistemology — according to many traditions in ancient, modern, and contemporary philosophy — is absolute truth, certainty, or unimpeachability that is impervious to doubt. A perfect procedure recognizes an independent criterion for a correct outcome and a method whose results (if any) are guaranteed to satisfy that criterion. A further benefit is that a perfect procedure is characterized by a test that yields no false positives. Thus P(erfect)PE is foundationalist, and committed to the correspondence theory of truth. It posits an external world about which there are facts of the matter that exist independently of agents and their inquiries. On this view, moreover, acceptability is not relative to background information, available evidence, or other contextual factors. Whatever passes its test, and nothing else, is epistemically acceptable. And the test itself makes no concession to context. Certainty is a necessary condition for knowledge, while contingency precludes its possibility — or so proponents of most such theories claim. Bridging the mind / world gap that these ideas imply, however, is problematic — if not impossible — in practice because the kinds of objects of knowledge that P(erfect)PEs tend to yield are things such as internally consistent symbol systems, trivial tautologies, or a priori truths; it becomes difficult, then, from this limited but ostensibly “solid” bedrock standpoint, to deduce very much additional knowledge that is of any pragmatic use from these “first principles” or “self-evident” truths: at least while preserving the same “solid” or rigorous degree of warrant or justification. On the P(erfect)PE view, then, knowledge is largely divorced from everyday or pragmatic epistemic problems and questions. Such epistemology ascends to a restrictive meta-level, sacrificing scope and explanatory power to obtain certainty. In other words, P(erfect)PE theorizes that to acknowledge the possibility of error is to abandon hope of certainty, and that security against error is a prize worth considerable epistemic sacrifice. An interesting side effect of adopting these kinds of methodologies is that “gut feelings” or “emotional sensitivities” cannot count as knowledge regarding any sorts of questions by P(erfect)PE’s lights: mostly because such candidates are fallible or corrigible, as their claims are based on merely probabilistic intuition or induction from experience. Indeed, most matters commonly thought of and spoken of as “knowledge” become mere “beliefs” according to such an epistemology.

Imperfect Procedural Epistemology, in contrast to P(erfect)PE, accepts and acknowledges the epistemic role of intuition to some degree, because induction, depth of data, and experience are the methodological paradigm cases of epistemic inquiry for fallibilist and contextualist frameworks. A second important benefit of IPE over P(erfect)PE is that it can employ analogical, metaphorical, and emotive reasoning (especially regarding more abstract or ethical questions that are difficult or impossible to test). Embracing IPE has decided advantages over limiting oneself to P(erfect)PE, on my view, as long as IPE is not taken to relativistic extremes, due in large measure to its superior scope and explanatory power; for some philosophers, though, these benefits come only at a high price. An imperfect procedure recognizes an independent criterion for a correct outcome but has no way to absolutely guarantee that the criterion is satisfied. The realistic and common-sense implication of this is that Imperfect Procedural Epistemology prefers the possibility of error to ignorance. Moreover, most wise and experienced people realize — through the various mistakes that have been made throughout history and in present times — that even the conclusion of a sound inductive argument may yet eventually be proved false. This acknowledgement is in keeping with the scientific method (which involves Popperian falsificationism), as well as with the generally prudent, reserved, skeptical, and enlightened-common-sense modes of epistemic comportment among contemporary, wise, and educated people, such that it can be argued: even when an epistemological product appears unexceptionable, we do not accept it without reservation. Rather, we accord it provisional credibility, realizing that further findings may yet discredit it. In other words, Imperfect Procedural Epistemology is prepared to criticize, modify, reinterpret, and — if need be — renounce constituent ends and means. Implicit in the ideological framework, or worldview, of IPE is a notion of infinite epistemic progress. Conceived as an advantage, this aspect of IPE is — paradoxically — simultaneously both arrogant and humble in different ways; additionally, though, it is quite blatantly optimistic. The open question is whether IPE’s justifications are holistic rather than circular, and whether they can build an increasingly reliable epistemic framework with increasing explanatory power and an increasingly normative and true understanding of mind-independent natural reality. Regardless, Perfect Procedural Epistemologies demand valid and sound conclusive reasons (whether provisional or not), reasons that “guarantee” the acceptability of the judgments they vindicate. Imperfect procedural epistemologies, however, merely require adequately convincing reasons, but they recognize that convincing reasons need not be (and typically are not) absolutely conclusive — but, rather, are probabilistic in varying degrees.
Ben Forbes G.
Kissimmee, FL
Post #: 369
Changing gears to analyze some other aspects of epistemology through the lenses provided by various philosophers: if skeptical doubts were meaningless, they would be unintelligible. It is true that they can be impractical or out of place in some contexts; however, in their proper context, they are very poignant philosophical possibilities, and worth considering, I think, as Barry Stroud and other skeptics have argued. In any event, skeptical Cartesian-style doubts on occasion seem more sensible than Moore’s “here is a hand” statement. Indeed, by my lights, skepticism stands up better to Wittgenstein’s style of criticisms than Moorean dogmatic realism. The most viable version of skepticism avoids the burden of proof associated with positing things with certainty and gives arguments that establish reasonable doubts, precluding or limiting certainty; it is ultimately always a fence-sitting yet provisionally conclusive philosophy, rather than a denial, a nihilism, or a contradiction of practical working assumptions and modes of comportment. Ultimately, skepticism is a sound agnosticism that is compatible with some forms of contextualism, and with the healthy living of a normal and mundane life: full of assumptions and beliefs that masquerade as “knowledge” in most, but not all, contexts or levels of epistemological or metaepistemological activity — yet not descending into the abyss of extreme Pyrrhonian-skepticism/solipsism or succumbing to the dangers of relativism.

As William Graham Sumner so eloquently put it in 1906 — “Men educated in… [the critical habit of thought] are slow to believe. They can hold things as possible or probable in all degrees, without certainty and without pain.”

Some interesting questions to consider are as follows:

In what sense are “considered” judgments normatively justified — and how does this relate to their fallibility? Who are they considered by and in what context? Are the “considered judgments” and “reflective equilibrium” in various communities normative or intersubjective? What criteria do they use and how good are these criteria/standards?

Another related phenomenon is also relevant to this discussion: if a claim is quite locked up in the practice that it relates to, so that it can only be sensible in that context, then we can (and do) talk casually about “knowing” it, according to the rules of that practice: even if it could be sensible (or made better sense of) in some larger or alternative context — we are so penetrated by the discourses of the one practice that it is often difficult to escape it. However, we can (and do), to an extent, do just that when we critically examine a practice itself, as a whole, ascending to another level to contemplate what we view as its ideals, such as its ostensible utility or explanatory power (or lack thereof) relative to a variety of our different claims and our whole framework, in general. For example, this can (and should increasingly) be done with religion.
Ben Forbes G.
Kissimmee, FL
Post #: 370
It’s funny how postmodern philosophers often defeat themselves with their own illogical and asinine statements while trying to criticize the superior objectivity of scientific methods: for example, “scientism” (in the strong sense) is the imagined self-annihilating “straw-man” view that “only scientific claims are meaningful” — which is not a scientific claim and hence, if true, not meaningful. Thus, scientism is either false or meaningless. This view seems to have been held by Ludwig Wittgenstein in his Tractatus Logico-philosophicus (1922) when he said such things as “The totality of true propositions is the whole of natural science...” He later repudiated this view though… Regardless, it’s an egregious misunderstanding of science to naïvely imagine science to be dogmatic.

In contrast, in the weaker (but more sound and less ridiculous) sense, “scientism” can mean the sometimes criticized view that the methods of the natural sciences should be applied to ANY subject matter (including the humanities, the social-sciences, the arts, etc.). This view is summed up nicely by Michael Shermer: “scientism is a scientific worldview that encompasses natural explanations for all phenomena, eschews supernatural and paranormal speculations, and embraces empiricism and reason as the twin pillars of a philosophy of life appropriate for an Age of Science” (Shermer 2002). I see no reason not to embrace “scientism” in this sense as heuristically advantageous and progressive.

The idea of non-overlapping-magisteria is bad epistemology; science can potentially answer more sorts of questions than many people realize... and, while eschewing teleology as we should, many scientific theories and lines of evidence can and do definitely allow us to adduce persuasive and reliable evidence regarding “WHY” questions in addition to questions of ontology and “how” questions... Religious and spiritual people simply don’t tend to like the most realistic answers to these questions which our best epistemic methods can increasingly discover: which some spuriously fear could point toward nihilism instead of humanism (for example, evolutionary theory is full of reasons “why” life develops and behaves in certain obviously undesigned and horrific ways that are nonetheless anything but random in their outcomes).

Art, literature, language, and history are far from our only tools for understanding past eras, locations, cultures, and multifarious diverse human experiences at the core of the human condition (including those which we have not ourselves lived through) with increasingly reliable and realistic accuracy. Historiography, philology, critical exegesis, etc. are all much easier and better when assisted and augmented by archaeology, physical-anthropology, scientific dating methods, Bayesian inference, etc. Art and literature can be amazingly powerful and moving — but their explanatory powers and meanings are only enhanced when combined with the even more verifiably awesome explanatory power of methodologically-sound scientific analyses as an incredibly helpful tool.

Concurrently, as eloquent as it is, I must wholeheartedly disagree with the implication of Keats’ famous poem, Lamia:
Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine -
Unweave a rainbow…

*NOTE: in line three, “awful” = “awe-inspiring” (in more modern vernacular)
Keats’ chief villain in the poem, though not explicitly named, was Isaac Newton — whose use of the prism to split white light into its component colors was viewed by Keats as akin to desecration... but I aver that a rainbow, for example, actually becomes MORE beautiful once we understand and appreciate more deeply what it really is and how it works (we can even learn to artificially “see” its ultraviolet and infrared components in addition to simply taking in its visible beauty). Profoundly and wisely experiencing phenomena through both their Apollonian and Dionysian modalities does not require any non-overlapping-magisteria or God-of-the-gaps delusions or spurious supernatural speculations — nor does it undermine maximally profound appreciation of emotionally moving art and literature by means of some imagined “straw-man” of allegedly “dogmatic” skeptical “scientism” (as a useful tool to examine even subjects beyond traditional areas of scientific inquiry, where possible). Indeed, the epistemic context of science is progressively expanding.

Even firsthand subjective personal experience is much less reliable than many postmodernists (in their ignorantly solipsistic relativism) assume — and this is true in a plethora of evolutionarily and psychologically well-understood ways (including many involving “language games” or “indeterminacy of translation”), most of which are compounded in art and literature — and even more egregiously in mythology or religion. Nevertheless, ANY testable / falsifiable claim or empirical object of study (including texts or artworks) can be profitably subjected to various scientific or mathematical/Bayesian probabilistic analyses, especially in improving the explanatory power of interpretations in the humanities and social sciences, in addition of course to archaeology, demographics, biogeography, etc. Indeed, if more probably reliable and provisionally verifiable conclusive truth as best we can inductively discover it is the goal, then properly demonstrated “hard” science ultimately trumps all — and social-science “explanations” must be compatible with it or be revised accordingly.
Ben Forbes G.
Kissimmee, FL
Post #: 371
Regarding hard science and the humanities, their overlap is often relevant to the nature/nurture debate. Essentially, my position is that hard science trumps humanities whenever its results are almost certainly demonstrably and verifiably sound (and when it's possible to be fairly certain the variables in question have been isolated) — since nature underlies culture and is more fundamentally determinate of outcomes. That said... language, culture, religion, etc. obviously have massive impacts by nurturing people's perceptions, ontology, epistemology, ethics, etc. Moreover, many aspects of the human condition are determined by complex combinations of nature and nurture. Also, as far as academic philosophy goes, I do support many cultural deconstructions, sociological theories, feminism, etc. — but I definitely part ways with extreme postmodernism whenever it embraces absurdly illogical degrees of relativism (whether ontological, cultural, or ethical). Furthermore, I aver that the humanities and social sciences are only strengthened by applying methods from the "hard" sciences and mathematics to help test or refute their theories (and Bayesian inference is a perfect example of that — as are tools like archaeology, scientific dating methods, philological hermeneutics, "big geography," etc.). Similarly, sound philosophy of hard science, to ensure properly epistemically rigorous methods, is essential to its success.

I'm not one of those who thinks it's desirable to reduce knowledge to some lesser name — rather, I prefer to try to understand how the concept actually works (or, in some cases, fails to work); certain degrees of certainty or epistemic statuses are appropriate to various contexts. Only upon the deepest philosophical reflection do people seriously question dominant epistemic frameworks, but question them people do, on occasion — and doing that requires escaping them, to a degree, and in certain limited ways. There are a lot of things people think they know, which they really merely believe, upon a deeper examination. The things people know at a given time in a given context are wound together in complex webs that are holistic and tend to stand or fall as paradigms. The account that can best explain how this occurs is contextualism — i.e. for certain purposes, or in certain contexts, there are different standards, or the inquiry is pursued only so far, such that people really do "know" and are justified in their claims, insofar as anything can best be justified at the time. The wisdom that resides in humble recognitions of an underlying skeptical possibility is not incompatible with our day-to-day epistemic modes of comportment, wherein we claim, mean, and justify statements with various degrees of certainty using, e.g., the words “I know” or “I am certain enough.”

The soundest versions of skepticism merely raise doubts about degrees of epistemic certainty; they do not claim that the “facts” or “conclusions” we “know” are untenable. This view avoids the problem of internal contradiction that results if the skeptic denies, outright, the very human and “real” context in which he posits such denials by descending to Pyrrhonian extremes. Falsificationism and the scientific method are the best tools skeptics have ever had.
Ben Forbes G.
Kissimmee, FL
Post #: 372
Epistemically, I don't support the idea of non-overlapping magisteria, in general (especially regarding any ontological or ethical questions, since even ethics should always involve analyses of real-world εὐδαιμονία versus suffering, to which science can decisively contribute) — and I also think the is/ought problem is a mistaken focus; science is an essential tool in determining right and wrong in consequentialist ethics and overcoming excessive relativism even within a contextual framework... Thus, while science can't tell us that we should have humanist values, it can help demonstrate which values, rules, etc. are more humanistically progressive and beneficial than others (to a degree) and it can definitely demonstrate which sorts of ethics are unequivocally and unnecessarily harmful. Moreover, lacking axiomatic humanist values, one cannot make humane ethical judgements and would be in an analogous position to a Pyrrhonian-skeptic/solipsist trying to draw ontological conclusions about reality while doubting reality... Essentially, I'm not an ethical relativist any more than I am an ontological relativist.

Furthermore, Bayesian inference is an awesome epistemic tool: it basically quantifies any sound inductive process (which includes pretty much any IPE as described above)... One can always dispute the numbers that are plugged into any Bayesian analysis, of course — but that should be welcome and can actually really help illuminate and crystallize precisely where epistemic disputes are occurring in terms of fundamental errors or assumptions behind an argument's or opponent's premises; the concept of prior probability is extremely important to openly and explicitly consider and debate, especially when it comes to extraordinary claims (and it should be noted that of course there is no limit to how many times one can run the theoretical equation trying different numbers). For example, Bayesian reasoning helps epistemically explain and justify WHY ontological materialism is a warranted and justifiable inductive provisional conclusion (which has yet to be falsified) — but it is not an a priori axiomatic assumption. Indeed, even in cases where we don’t YET “know” (by the lights of provisionally conclusive evidence-based science) what causes a “mysterious” phenomenon, we can surmise that some vague and extraordinary immaterial hypothesis is not the most likely explanation using sound Bayesian reasoning (which considers prior probability) even before we do a single experiment. Indeed, when you look at the history of the world, you see thousands — tens of thousands, arguably hundreds of thousands or more — of phenomena for which a supernatural explanation has been replaced by a natural one: why the sun rises and sets, what thunder and lightning are, how and why illness happens and spreads, why people look like their parents, how people got to be here in the first place, etc… all these things, and thousands more, were once “explained” by gods or spirits or mystical energies. And now all of them have superior natural, physical explanations with mountains of solid, carefully collected, and replicable evidence to support them.
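
To make the role of prior probability concrete, here is a minimal sketch, in Python, of a single Bayesian update. The function name and every number are hypothetical placeholders of my own, purely for illustration, not anyone's actual estimates:

# Minimal sketch of one Bayesian update (all numbers are illustrative).
# Bayes' theorem: P(H|E) = P(E|H)*P(H) / [P(E|H)*P(H) + P(E|~H)*P(~H)]

def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Return P(H|E) for hypothesis H given evidence E.
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# A "mysterious" observation that a supernatural hypothesis H predicts
# perfectly, but that mundane alternatives could also produce now and then:
# with a very low prior (reflecting the historical track record described
# above), the posterior stays very low.
print(posterior(prior=1e-6, p_e_given_h=1.0, p_e_given_not_h=0.1))  # ~1e-5

The point is not these particular numbers but that the prior has to be argued for explicitly, which is exactly where most of the real disputes lie.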

Now, how many times in the history of the world has a natural explanation of a phenomenon been supplanted by a supernatural one? The answer is: zero (if one had been, we would almost certainly all have heard of it — and, in the case of an ongoing phenomenon, it would probably have been rediscovered even if it was once suppressed or forgotten). Of course, people are coming up with new supernatural explanations of naturally-explained phenomena all the time. Intelligent design is an obvious and ignorant example (which has replaced Creationism, but really isn't much more plausible). But supernatural “explanations” with evidence? Replicable evidence? Carefully gathered, patiently tested, and rigorously reviewed evidence? Internally consistent evidence which is also consistent with all the best evidence from established science or incontrovertible enough to revise or supplant it? Large amounts of evidence, from many different unbiased sources that can easily survive blind peer-review? Evidence that doesn't raise more questions than it “answers”? Again, as far as I’m aware — exactly zero supernatural claims can meet this standard (if they could, then they would actually be demonstrably real, and would thus become part of the best and currently most epistemically rigorous model of “nature” we have). Which brings me to the rather trenchant Bayesian question of likelihood, especially considering prior probability.

Given the overwhelming pattern of all of recorded intellectual history — i.e. thousands upon thousands upon thousands of natural explanations more accurately supplanting superstitious ones, and zero supernatural explanations accurately supplanting natural ones — doesn’t it seem most probable that any given unexplained phenomenon is far more likely to have a natural explanation than a supernatural one? Like, several orders of magnitude more likely? And that, among many other things, is what Bayesian inference can demonstrate. A great way of using it is also to be extremely generous to one's opponent, take his claims more seriously than they may deserve for the sake of argument, plug in "best case" numbers for his case even if these are implausible, and then see what happens (Richard Carrier does this often and very well); this is even better if one can actually get one's opponent to concede, contribute himself, or agree upon the numbers being plugged in (e.g. for prior probability).
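
As a toy illustration of that "be generous to your opponent" move, here is a short Python sweep over increasingly charitable (and entirely hypothetical) priors and likelihoods for some disputed claim H, just to show how the posterior responds; none of these numbers come from any real analysis:

# Sweep ever more charitable (hypothetical) inputs for a disputed claim H
# and watch the posterior P(H|E); E is assumed to be perfectly predicted by H.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

for prior in (0.001, 0.01, 0.1, 0.25):            # increasingly generous priors
    for p_e_not_h in (0.5, 0.2, 0.05):            # increasingly weak alternatives
        p = posterior(prior, 1.0, p_e_not_h)
        print(f"prior={prior:<6} P(E|~H)={p_e_not_h:<5} posterior={p:.3f}")

If even the most generous defensible inputs leave the posterior modest, the disagreement has been localized to the inputs rather than the arithmetic, which is exactly what makes the exercise useful.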

In general, in cases that are not somehow logically falsifiable by definition (see Perfect Procedural Epistemology above), we are always inductively reasoning regarding probabilities, not searching for absolute 100% disproofs... but this does not mean that all competing claims are even close to equiprobable or likely enough to deserve to be considered reasonably plausible.

A single counterexample can kill a hypothesis, yet even millions of confirming instances cannot establish it as true. (There’s an asymmetry between confirmation and disconfirmation.) For example, suppose that, through a window by a lake, you’ve seen one million white swans; nevertheless, this doesn’t mean all swans are white. No matter how many swans you’ve seen, this does not make the hypothesis that all swans are white true; it only means the hypothesis hasn’t been shown to be false (yet). Check out “Precambrian rabbits” for another famous hypothetical example.
Ben Forbes G.
Kissimmee, FL
Post #: 373
Admittedly, there are those who take Bayesianism too far or employ it sloppily/mistakenly — but it is definitely a very useful epistemological technique of inference to rigorously justify/quantify inductive logic. Ultimately, however, it is only as good as the numbers/values that one plugs into it and the adequacy of the theories/possibilities it considers and how far down the criterion-chain it holds to justify various prior probability premises based on empirical evidence, etc.

Let me explain a bit more about how Bayesian reasoning works in science and beyond though. Anyone who understands proper experimentation techniques knows that falsificationism is fundamental to any rigorous modern scientific method, and Bayesian analysis doesn’t undermine that — though it does augment, help qualify/quantify, and update it a bit (much like Einstein’s relativity theory did for Newtonian physics). I admire Popper’s significant and still crucially important philosophical contributions to reason and science (which are especially useful for dismissing untestable and baseless wishful speculations as they deserve). Specifically: falsificationism is perhaps most useful as one of the best ways to solve the epistemic demarcation problem of how to distinguish actual science from pseudoscience. It’s also important to note that it matters whether a proposition *could be* falsified in principle in addition to whether it currently CAN be falsified with near absolute certainty. However, if poorly conceived, falsificationism can buy into too sharp a dichotomy between “positive” and “negative” claims. It's often said that one “can't prove negative claims” (e.g., the non-existence of a particular object) but one “can prove positive claims” (e.g., the existence of a particular object). While this certainly describes a lot of different sorts of claims, in general, there is no absolute dichotomy. It's not even necessarily easier to “prove” positive claims than negative ones. After all, the distinction between positive and negative is somewhat artificial. Any positive claim “P” can be made into the negative claim “not-(not-P)”. Also, sussing out causation among many correlations can require complex inferences involving the probabilities of both positive and negative claims.

Furthermore, if p(X|A) ~ 1 — that is, if the theory makes a definite prediction — then observing ~X very strongly falsifies A. On the other hand, if p(X|A) ~ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that p(X|B) ~ 1, in which case observing X doesn’t favor A over B. For observing X to definitely confirm A, we would have to know, not that p(X|A) ~ 1, but that p(X|~A) ~ 0, which is something that we can’t absolutely know because we can’t range over all possible alternative explanations. It is usually quite adequate, however, if we can inductively reason that p(X|~A) is nearly 0, consider the alternatives sufficiently “falsified,” and treat A as sufficiently confirmed. Sometimes, new evidence or previously unobserved factors/conditions may come to light that can alter thoroughly justified probabilistic conclusions. If theists are ever so lucky as to have this happen in favor of any religion, then they can feel free to bring it on and atheists will happily reexamine their currently warranted atheistic provisional conclusions.
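
Here is a tiny numerical sketch (again Python, with invented numbers and a made-up helper function) of the asymmetry just described: when rival explanations also predict X, observing X barely favors A, while observing ~X, which A deemed nearly impossible, is devastating:

# Asymmetry sketch: confirmation is weak when alternatives also predict X;
# disconfirmation is strong when A said X was nearly certain. Numbers invented.

def update(prior_a, p_obs_given_a, p_obs_given_not_a):
    numerator = p_obs_given_a * prior_a
    return numerator / (numerator + p_obs_given_not_a * (1.0 - prior_a))

prior_a = 0.5
# Observing X when the pooled alternatives also predict it fairly well:
print(update(prior_a, 0.99, 0.90))   # ~0.52 -- barely moves
# Observing ~X, which A said should almost never happen:
print(update(prior_a, 0.01, 0.10))   # ~0.09 -- A takes a heavy hit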

One can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, p(X|A)/p(X|~A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with ostensible probability 1, if you like; but you can’t control the denominator of the likelihood ratio, p(X|~A), because there will always be some alternative theories that also predict X, no matter how asinine or demonstrably unlikely they are — and, while we should go with the most parsimonious theory that best fits with and explains all the currently available evidence, we may someday encounter some evidence that an alternative theory predicts but the status quo theory does not. That’s the hidden gotcha that supplanted Newton’s theory of gravity with Einstein’s improved theory, for example. So, there’s a limit on how much mileage one can get from successful predictions; and there’s a limit on how high the likelihood ratio goes for confirmatory evidence.

On the other hand, if one encounters some piece of evidence Y that is definitely NOT predicted by your theory, this is enormously strong evidence against your theory (and this is demonstrably the case for every human religion, bar none). If p(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if p(Y|A) is 0.0001%, and p(Y|~A) is 1%, then the likelihood ratio p(Y|A)/p(Y|~A) will be 1:10000. -40 decibels of evidence! Or: flipping the likelihood ratio, if p(Y|A) is very small, then p(Y|~A)/p(Y|A) will be very large, meaning that observing Y greatly favors ~A over A. Essentially, falsification is MUCH stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but rather the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism and inductively vindicates it!
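
For what it's worth, the arithmetic behind that "-40 decibels" figure can be checked in a couple of lines, taking decibels of evidence to mean 10·log10 of the likelihood ratio (the convention the figure above seems to assume):

# Check the decibel figure quoted above.
import math

p_y_given_a = 0.000001      # 0.0001%
p_y_given_not_a = 0.01      # 1%

likelihood_ratio = p_y_given_a / p_y_given_not_a      # 0.0001, i.e. 1:10000
decibels = 10 * math.log10(likelihood_ratio)          # 10*log10(1e-4) = -40.0
print(likelihood_ratio, decibels)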

For example, consider the claim, “More peppered moths are black than white.” You can't “disprove” this by simply finding a white peppered moth. Nor can you prove it by finding a black peppered moth. In fact, you can't ever *absolutely* disprove or prove it! You can come pretty close though by observing a large random sampling, but almost no useful a posteriori knowledge is 100% absolute (other than tautologies, etc.).
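
Here's a rough sketch of what "coming pretty close by observing a large random sampling" looks like numerically; the moth counts are invented and the normal-approximation interval is only a back-of-the-envelope device, not a substitute for a proper analysis:

# Back-of-the-envelope check of "more peppered moths are black than white"
# from an invented random sample; no single sample ever proves the claim.
import math

black, white = 640, 360                    # hypothetical random sample of 1000
n = black + white
p_hat = black / n                          # point estimate of the black fraction
se = math.sqrt(p_hat * (1.0 - p_hat) / n)  # approximate standard error
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimate {p_hat:.2f}, rough 95% interval ({low:.2f}, {high:.2f})")
# The interval sits well above 0.5, so the claim is strongly supported --
# as a probabilistic, revisable conclusion, never an absolute proof.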

More sophisticated forms of falsificationism account for this by saying that you can adequately falsify a theory when the evidence is so great against it that the theory is no longer reasonable. But no one single observation can 100% falsify a theory, so when exactly does it go from unfalsified to falsified? Not all hypotheses are plausible enough to warrant characterizing or quantifying all the grays between minute possibility and near-total disproof or attempting to account for every alternative explanation, no matter how ludicrous. Moreover, no religious belief is truly justifiable unless, at minimum, its proponent can show enough good evidence to demonstrate that it is more likely to be true than not true AND more probable than any well-supported alternative explanation (though a standard of almost certainly true beyond reasonable doubt would be even better for particularly dubious/extraordinary claims, as Hume so eloquently emphasized in his essay “Of Miracles”). No claim I have ever seen that is by definition uniquely “religious” (and which does not also happen to be true for trivial or nonreligious reasons) can meet this burden of proof (essentially, ALL the extraordinary claims of religions lack adequate evidence to justify them and/or have alleged “justifications” that are logical fallacies or cognitive errors; if there is an exception to this, I have yet to encounter it).
Rami K.
Orlando, FL
Post #: 596
Ruark has written about eisegesis, which is something you've been guilty of for much too long. Except the presuppositions, agendas, and biases aren't even your own; they are borrowed.

Meanwhile, to help other readers, here is a brief article by Bill Beaty laying out three types of skepticism while proposing a fourth. Knowing which one we are coming from helps.