Socrates Café Newtown: The Ethics of Robots

UPDATE 2: This session has been moved back to Wednesday 24th August. The previously cited book launch was a furphy. Berkelouw Books has confirmed the 24th is free. Apologies for the runaround!

The question

Can computers think, and more specifically, think for themselves? Will they ever be capable of self-awareness? Could they ever be conscious in the way we are? Insofar as computers can develop consciousness, self-awareness, or intelligence similar to that of human beings sometime in the future, will they have moral rights and obligations in the same way we humans do?

The problem

If android autonomy results in android behaviour that is less than morally perfect, at least sometimes, how can we trust that such behaviour will not be directed against us, with the possibility that it could lead to the elimination or enslavement of our species?

Given our own historical experience, we know all too well that this is no mere abstract possibility. Morally misdirected human agency has resulted in several genocides and the extinction or near extinction of other species. How can we ensure that superbly intelligent autonomous androids, free of our human limitations, will not do the same to our own species?

How can we trust such powerful but potentially dangerous and less than morally perfect beings? “Hey guys, you are cute but too damaging to the planet, so we have to get rid of you – nothing personal, old chums, it’s the big goodbye from us – and hey, thanks for the chips!”

A possible solution

Dr Edward Spence, from the Centre for Applied Philosophy and Public Ethics, will explore whether the problem of trusting autonomous robots can be alleviated, if at all, by androids developing into Stoic sages who always act for the Right and the Good, both their own and that of others, including humans.

Such androids must not only possess intelligent autonomous agency, they must also be capable of developing a good character. It is only if we can be reasonably confident that androids can develop into Stoic sages that we can trust them not to eliminate us or enslave us if they come to view us as an inferior species that is potentially dispensable in a world of scarce and competing resources.

Another problem

The problem that faces us now, however, is the problem of epistemological uncertainty. In the absence of such certainty, can we ever be sure that the creation of superbly intelligent androids will not spell our own demise?

By implementing Asimov’s Three Laws of Robotics we could, perhaps, overcome this problem, but only at the price of rendering our robots less intelligent and less morally responsible by reducing their optimal autonomy, an attribute necessary for both rational and moral agency.

Such androids, however, would be more like Aristotelian slaves, who, according to Aristotle, deserve their enslavement because they are incapable of optimal rationality and moral freedom. Would it be moral on our part to create such creatures only so that we can treat them as no more than slaves?

A moral dilemma?

Edward will conclude with this question: to android or not to android? In terms of our survival as a species, that might indeed be the ultimate question!

Then it's up to us to bounce around our own perspectives on the topic in the following discussion!

The next Socrates Café Newtown will be held on Wednesday 24 August from 6.30pm at Berkelouw Books, 6-8 O’Connell Street, Newtown, Sydney.

The meeting will kick off with a talk by Edward. Following a few minutes for questions, we'll move into the open discussion. At this point, we'll break off into groups, and discuss the topic at our leisure. During this time, Edward will join each discussion group for a brief period, giving everyone a chance to bounce ideas around with him. After about 45 minutes, we'll reconvene and share the issues discussed in each group.

Socrates Café has a $5 door charge, which includes a tea or coffee from the café.

Note: there's no RSVP limit on this event. Seats in the café are limited (around 65 all up) so arrive early to secure your spot! If you arrive late, it may be standing room only!

Hope to see you there!

  • A former member

    Good discussion on how robots would have an impact on human life. Very interesting!

    September 12, 2011

  • Tibor M.

    It's always a pleasure to spend an evening debating Big Questions with a bunch of engaged, and engaging, fellow philosophers. However, I was disappointed in Dr Spence's presentation. For whilst I agree that autonomous, self-conscious, intelligent androids are possible in principle and likely in practice, it is not only unlikely but impossible in principle that they might ever become omniscient, perfectly rational, and (hence) perfectly moral. I offer three reasons. First, there are at least three pieces of evidence that omniscience and perfect rationality are impossible in principle: Turing's halting problem, the non-computability of complex systems, and Gödel's theorems of undecidability and incompleteness. Second, perfect morality cannot follow from perfect rationality, which is, by definition, value-neutral. Third, despite the Platonic moral ideal, the Principle of Sub-optimisation militates against even the remotest possibility of perfect morality. 'Tis rather a pity, really...

    August 26, 2011

  • A former member

    A great presentation and an interesting discussion.

    August 25, 2011

  • Naomi H.

    It was an interesting vehicle for discussing morality.

    August 25, 2011

60 went
