Does AI amplify or diminish our humanity? (Venue A: Caffè Nero)
Details
(Scroll down for topic intro)
THE VENUE: Caffè Nero
It's winter, so we will meet indoors for the next few months.
When we meet indoors, we run the same event in two locations, Caffè Nero and Starbucks, to provide capacity for as many people as would like to attend without overwhelming either venue. Two events will therefore be published, and you can choose which one to attend. Please don't sign up for both. This event is for the Nero location.
We meet upstairs at Caffè Nero. An organiser will be present from 10.45. We are not charged for use of the space, so it would be good if everyone bought at least one drink.
An attendee limit has been set so as not to overwhelm the venue.
Etiquette
Our discussions are friendly and open. We are a discussion group, not a for-and-against debating society. But it helps if we try to stay on topic. And we should not talk over others, interrupt them, or try to dominate the conversation.
There is often a waiting list for places, so please cancel your attendance as soon as possible if you subsequently find you can't come.
WhatsApp groups
We have two WhatsApp groups. One is used to announce events, including extra events such as meeting for a meal or a drink during the week, which we don't normally put on the Meetup site. The other is for open discussion of whatever topics occur to people. If you would like to join either or both groups, please send a note of the phone number you would like to use to Richard Baron at: website.audible238@passmail.net. (This is an alias that can be discarded if it attracts spam, hence the odd words.)
THE TOPIC: Does AI amplify or diminish our humanity?
This week’s topic and introduction have been written by Tracy, in collaboration with her thought partner, Robochat. Thanks to Richard and Duncan for their input.
Both in our philosophy group and the wider world, AI seems to have become a topic that provokes a range of often unfavourable reactions, from raised eyebrows to sceptical scowls and exasperated eye-rolls!
But, whether we like it or not, it’s clear that artificial intelligence and automation will very likely feature as an increasingly integrated part of our personal and professional lives. How, though, will the advent of AI affect our human capacities, qualities, values and choices? In the longer term, will it make us more or less human?
On the “amplify our humanity” side:
1. Freeing up creativity: If AI can handle repetitive tasks for us, we can surely spend more time on creative and meaningful work.
2. Enhancing empathy and connection: By automating left-brain logistics, we might have more time to connect with each other on a human level.
3. Amplifying human capability: AI can extend human cognitive reach in complex fields such as medicine, health and science, meaning human care and compassion could have an even greater impact.
And on the “diminish our humanity” side:
1. Overreliance on technology: If we allow AI to do too much for us, do we risk ‘outsourcing our brains’ and losing certain aptitudes in the process?
2. Reduced human interaction: If we start relying on AI for coaching, companionship and support, might we end up having fewer genuine human interactions?
3. Ethical and empathy gaps: Delegating certain decisions to AI might mean we sidestep moral responsibilities and dilemmas that help us grow as humans.
Do we need to start by defining what we even mean by humanity? Where does it come from? Silly question, but is it something only humans can possess? What would it mean for it to be amplified or diminished? And does it matter either way? What’s at stake here?
What might philosophers have to say on the question, especially given that the most famous of them lived in a pre-Internet, pre-AI age?
Plato might have argued that it would diminish our minds. In the Phaedrus, he worried that even writing would weaken the memory because we would no longer need to use it. AI makes life easier: you have a question, and it gives an answer. We don't even need to remember which books to consult. On the other hand, he would have liked the to-and-fro one can have with ChatGPT, asking for clarification and challenging its initial answers. He wanted us to engage with the living word, not be stuck with frozen statements in writing.
So what about our digital-age thinkers and philosophers?
Was Stephen Hawking being a tad melodramatic when he said, "The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate"?
Shannon Vallor is an American philosopher of technology and a consulting AI Ethicist for Google’s Cloud AI programme. She says: "The real ethical and existential risk is not that AI will become malevolent, but that we will abdicate our own humanity by failing to cultivate it in a world of thinking machines."
And Daniel Dennett, the late American philosopher and cognitive scientist, said: "The problem isn't that machines will become like humans. It’s that humans will learn to think like machines, and in doing so, we may misunderstand our own minds."
So, what does our humanity have to say on the subject?
