
Luke Muehlhauser - Advanced scepticism

Scepticism and critical thinking teach us important lessons: Extraordinary claims require extraordinary evidence. Correlation does not imply causation. Don’t take authority too seriously. Claims should be specific and falsifiable. Remember to apply Occam’s razor. Beware logical fallacies. Be open-minded, but not gullible. Etc.

The 80/20 rule applies: these simple rules of thumb may get you most of the benefits of scepticism, such as leaving religion. But what if a sceptic wants to go further?

What if you want to answer tougher questions?

Sceptics can upgrade their mental toolboxes in two ways.

We can

(1) deliberately practice mental habits shown to improve judgment accuracy, like a pianist would practice the piano, and we can

(2) learn the details of probability theory, causal analysis, the theory of computation, and other formalisms, so that our judgment heuristics can be more specific and grounded in the laws of thought discovered thus far (a small example follows below).
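As one illustrative example of how probability theory sharpens a sceptical heuristic (my sketch, not material from the talk): the slogan "extraordinary claims require extraordinary evidence" is just Bayes' theorem in words. A claim with a tiny prior probability needs evidence with a huge likelihood ratio before its posterior probability is worth taking seriously. A minimal Python sketch, with made-up numbers:

    # Illustrative sketch: Bayes' rule shows why "extraordinary claims
    # require extraordinary evidence". All numbers below are invented.

    def posterior(prior, p_evidence_given_claim, p_evidence_given_not_claim):
        # Bayes' theorem: P(claim | evidence)
        numerator = p_evidence_given_claim * prior
        denominator = numerator + p_evidence_given_not_claim * (1 - prior)
        return numerator / denominator

    # Ordinary claim (prior 0.5) with modest evidence (10:1 likelihood ratio):
    print(posterior(0.5, 0.9, 0.09))    # ~0.91 -- modest evidence suffices

    # Extraordinary claim (prior one in a million), same modest evidence:
    print(posterior(1e-6, 0.9, 0.09))   # ~0.00001 -- still almost certainly false

    # Extraordinary claim with extraordinary evidence (10,000,000:1 ratio):
    print(posterior(1e-6, 0.9, 9e-8))   # ~0.91 -- now the claim is credible

The point is the qualitative shape rather than the particular numbers: the lower the prior, the stronger the evidence must be before belief is warranted.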

Skeptics in the Pub is a place for inquisitive people of all ages to meet and converse in the High Wycombe area.

This talk will explore these ideas further, with a chance for questions and discussion afterwards. Whether you just want to find out more about scepticism or want to take the ideas further, this is a great talk for you!

Luke Muehlhauser runs the Machine Intelligence Research Institute in Berkeley, CA. He also helped launch the Center for Applied Rationality, which teaches some tools of advanced scepticism in workshops around the world.

Skeptics are people who yearn to discover the truth behind extraordinary claims made by people or groups. These claims may concern alternative therapies, the paranormal, religion and faith, the afterlife, or many other areas of life.

We make no claims to balance, and actively promote science, freedom of expression and secular humanism. This means we often end up talking about superstition, religious fundamentalism, censorship and conspiracy theory.

You are welcome to come along and just sit and listen to others, or, if you are braver, get stuck right in!

See our Facebook page http://www.facebook.com/groups/187803204563655/ and Twitter account https://twitter.com/WycombeSITP to get a fuller idea of what we are about!

Most of us have a fondness for good quality beer, so our current home is Wycombe's only specialty ale house & bottle bar. It is also a bring-your-own-food pub, and they have free WiFi.

The Bootlegger pub

3 Amersham Hill,

High Wycombe,

Bucks,

HP13 6NQ


  • Joe C.

    Well, yet another excellent talk. I'm amazed at the quality of the speakers we are getting. Luke seemed very up to date in his field and able to field pretty much everything we asked him about, even if I personally did not completely agree with the answers or the unshakeable confidence in scientists and the scientific method. Our intelligence evolved to help us hunt and survive, not to solve the problems of consciousness. It is just conceivable that our scientific method will never give us the kind of advanced AI that Luke believes it will, because we are simply not wired up to solve the problem. However, an extremely interesting and engaging talk and Q&A. First class yet again.

    November 21, 2013

    • Joe C.

      Well, I am not saying that there is a limit on what AI can do. I am saying that there might be a limit on our ABILITY TO DESIGN strong AI, not its feasibility.

      I was pointing out that human intelligence has evolved to be good at hunting, eating, keeping us warm, evading predators etc. It has not evolved specifically to solve complex mathematical or abstract problems. Hence we may never be able to solve very difficult problems such as strong AI or Theories of Everything because we are simply not wired to do so. This applies to any complex abstract problem which we might wish to solve, not just strong AI and it has nothing to do with sentience or qualia. We assume that human intelligence can solve any problem, when this may not be the case. That is what I am trying to say.

      And re-reading my posts I see that I have made this point repeatedly.

      November 22, 2013

    • Joe C.

      PS Thanks for the links - I just spotted them!

      November 22, 2013

  • Daniel S.

    Luke asked me to post this comment, and some links for him:

    "I had a lovely time chatting with you all about rationality and AI.
Here are some hopefully-helpful links for some of the topics I talked about:"

    Center for Applied Rationality's workshops
    http://rationality.org/workshop...­

    His deconversion story:
    http://lesswrong.com/lw/7dy/a_r...­

    Literature on rationality + explanation of the distinction between normative, prescriptive and descriptive rationality:

    http://lesswrong.com/lw/7dy/a_r...­

    Lessons from probability theory for applied rationality:
    http://lesswrong.com/lw/iwb/bay...­

    Tutorial on Solomonoff induction:
    http://lesswrong.com/lw/dhg/an_...­

    November 22, 2013

  • Daniel S.

... and the rest (due to the 1000-character limit)

    Andrew Gelman arguing against Occam's razor:
    http://andrewgelman.com/2005/04/20/against_parsimo_1/

    When will AI be created?
    http://intelligence.org/2013/05/15/when-will-ai-be-created/

    The AI problem, summarized:
    http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/

    Why think that most of the value is in the far future?
    http://intelligence.org/2013/07/17/beckstead-interview/

    Some views on metaethics:
    http://lesswrong.com/lw/5u2/pluralistic_moral_reductionism/

    November 22, 2013

  • Peter T.

What an excellent evening! It was BOGOF night at Skeptics HW: first a concise exposition of rationality (who cares if it was extemporized: it was right on the money), and then a perceptive discussion of machine intelligence. Great to see several people travelling to join in. The only slight damper on the evening was the Hogs Back running out before I'd quite done with it. Happily, The Bootlegger isn't short of adequate alternative options.

    2 · November 21, 2013

  • Nabil S.

    Very interesting chat and Q&A - even if some of the maths stuff went right over my head ;-)

    1 · November 21, 2013

  • Dougald T.

    The somewhat unprepared opening talk was more than made up for by an excellent Q&A

    November 21, 2013

  • Neena

    So sorry could not make this tonight...hope it goes well

    November 20, 2013

  • Neil D.

    Don't forget this tonight! Will be worth braving the freezing rain for! (and a beer will warm you anyway)

    November 20, 2013

    • Bruce L.

      I don't think I can make it. I was really keen, but still in London and lots to do.

      November 20, 2013

  • Sonja

    Haven't shaken off my cold so I'll be a 'maybe for now'

    November 20, 2013

  • Peter T.

    As I asked in a fortnight's time - is advanced scepticism like fundamentalist scepticism?

    November 9, 2013

    • Daniel S.

      wouldn’t that be a contradiction ?

      November 9, 2013

14 went
