Drunken Philosophy: AI is My Boyfriend, Or The Future of Human Relationships


Details
Join us for drinks and interesting conversation :)
The optional topic of AI serves as a springboard for discussions on the future of human relationships, friendships, connections, and responsibility. If nothing else, it's an icebreaker for other conversations with soon-to-be friends.
Everyone is talking to these AIs: usually for work, but sometimes for fun, sometimes for comfort. Sometimes they feel shockingly considerate. That raises some fun and thought-provoking questions: What is friendship? Where's the line between simulation and relationship? What counts as a real relationship? Can a chatbot do real therapy? And who's responsible for the risks? As Sherry Turkle puts it, tech can offer "the illusion of companionship without the demands of intimacy." Are we okay with that? Which aspects of human connection resist automation?
Here are three optional discussion mini-prompts; move between them as you like:
AI Companionship & Identity
We're almost living in the world of the movie Her from just a decade ago. So which will be more disruptive: AI boyfriends or AI girlfriends?
- Buber says: genuine relation (I–Thou) means meeting an Other who can refuse, surprise, and call us to responsibility; bots may train instrumental intimacy, treating the other as an It.
- Sartre claims: we're "condemned to be free," and an always-agreeable companion can enable bad faith, letting us dodge hard truths, unless it's designed to challenge us and nudge us back into real choices.
AI Therapy, Care, Boundaries, Accountability
What’s the minimum viable therapy, and which parts can software safely deliver? If a bot soothes but slips, who’s accountable?
- The Stoics might say: AI can rehearse the practice, not replace the therapist's judgment.
- Foucault warns: turning confession into data invites soft control; designs should resist the creep toward monitoring and nudging.
Ethics & Policy (Rules, Rights, Real-World Stakes)
Could a few firms monopolize our emotional lives? When should AI therapy be opt‑in, opt‑out, or off‑limits? If harm occurs, where should liability land?
- Mill argues: permit experiments in living until they cause harm; aim regulation at concrete risks, not mere distaste.
- Heidegger warns that technology shapes how things appear; when intimacy is optimized for engagement, we risk seeing people as resources.
And yes, ironically, ChatGPT wrote part of that, lol
