What we're about

This is a new discussion group for anyone, beginner to advanced, interested in the burgeoning field of Artificial Intelligence. Areas of focus include, but are not limited to, narrow AI, machine learning, natural language processing, vision, speech, and robotics. Through reading and discussion, we'll investigate different types of intelligence, such as reactive machines, limited memory, theory of mind, and self-awareness, as well as relevant news, current developments, disruptive effects on industry, and the ethical implications of singularity and superintelligence. We'd like to stress that this is a discussion group meant to foster community and critical thinking, so, despite the technical terminology, people at all levels of familiarity are encouraged to join. Also, there will be coffee. Decaffers welcome.

Upcoming events (1)

The Future of Violence

Anthroware LLC

Viral, autonomous, and deadly. Tech-enabled brutality is prompting a global conversation about the future of violence. What role might AI play in curbing (or propagating) these uniquely dangerous threats? How does the challenge of context limit AI's ability to effectively moderate violence and its representation? This week's readings investigate the role AI plays in subjects ranging from autonomous weapons systems to the virality of mass attacks. Come join the conversation.

Note: The coffee/tea setup for our group costs roughly $30 per meeting. Consider bringing a couple of bucks to offset expenses.

Meeting agenda:
7:00pm - Lightning Talk: a walkthrough of current AI domains.
7:30pm - Future of Violence discussion

Primary readings:
-Richard Moyes' talk at MIT on autonomous weapons systems policy: https://www.youtube.com/watch?v=U6lJI-NSfBY
-"Why tech didn't stop the New Zealand attack from going viral": https://www.wired.com/story/new-zealand-shooting-video-social-media/

Supplemental readings on autonomous weapons:
-Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/learn/
-"Elon Musk leads 116 experts calling for outright ban of killer robots": https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war
-"Banning autonomous weapons is not the answer": https://www.chathamhouse.org/expert/comment/banning-autonomous-weapons-not-answer
-Army University Press, "Pros and Cons of Autonomous Weapons Systems": https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
-Slaughterbots video: https://www.youtube.com/watch?v=9CO6M2HsoIA
-"Why you shouldn't fear 'slaughterbots'": https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots

Supplemental readings on virality and mass violence:
-"Why AI is still terrible at spotting violence online": https://www.cnn.com/2019/03/16/tech/ai-video-spotting-terror-violence-new-zealand/index.html
-"This new AI system can now predict school violence": https://www.dnaindia.com/technology/report-this-new-ai-system-can-now-predict-school-violence-here-s-how-2612371
-"The NZ shooting and the challenges of governing livestreamed video": https://www.newyorker.com/tech/annals-of-technology/the-new-zealand-shooting-and-the-challenges-of-governing-live-streamed-video

Miscellaneous:
-"Google's AI has learned to become 'highly aggressive' in stressful situations": https://www.sciencealert.com/google-deep-mind-has-learned-to-become-highly-aggressive-in-stressful-situations

Past events (2)

Ethics of AI

Anthroware LLC

Photos (4)