This Meetup is past

7 people went

Karma Bird House

47 Maple Street · Burlington, VT

How to find us

Enter through Maglianero's, then take the stairs or elevator to the 3rd floor


Details

This Discussion topic originated with a segment of the most recent Bad Wizards podcast, their 150th episode.

The movie and HBO series Westworld portray a future in which robots/androids have been developed to the point where one cannot easily tell humans and androids apart. This is likely to occur, one way or another (through electronics or biology), at some point in the future.

In Westworld, these androids are abused (beaten, raped, killed) by paying clients for the clients' amusement. The show's presumptions about what people really want out of life are depressing, but try to ignore that and focus on the thought experiment: Westworld encourages the viewer to consider the implications of the technology.

What kind(s) of relationship(s) should we encourage or discourage with a "machine" that is conscious, has thoughts and feelings, and can speak? How should we think about switching them off, wiping their memory, loading their memory, or rebooting them?

Would it make a difference to you if they appeared human, versus looking more like a science-fiction robot? Remember: they are the same being inside; all I am changing is the skin.

No one laments the suffering of a wrench, a lawn mower, or a smartphone. Should we lament the suffering of an android? That is, is it a machine? If not, where is the dividing line between a machine and a non-machine?

Even if we ignore their suffering, might abusing an android have such a negative effect on us that we should avoid it for our own sake? What about abusing a lawn mower? What is the difference?

Suppose we could develop a masochistic android, one that "enjoys" being abused; that is, a being that is innately and inherently masochistic. Would our relationship with such a machine differ from our relationship with a non-masochistic android?

Suppose we could develop a masochistic human through selective breeding or genetic engineering. How should we think about our behavior around that being? Would abusing it be acceptable?

Now suppose we could develop a masochistic human, a being that is not inherently masochistic, through behavioral modification (training, upbringing, brainwashing). How would we think about our behavior around that being?

Does "creating" a being using electronics or biology or training feel differently? How does the path to the final result affect our thoughts and feelings about it?

These situations may seem ridiculous, but companies are designing cars right now that will have to decide who dies (passenger or bystander) in an accident. These issues will be with us soon.

I am sorry if the tone of the examples seems extreme. For me, extreme examples focus my thinking better than the bland "what rights should an android have?"