Details

PLEASE MAKE SURE TO RSVP ON AICAMP -- WE ARE HOLDING THIS IN CONJUNCTION WITH AICAMP AND AN RSVP VIA AICAMP IS REQUIRED.

Agenda for the evening:
5:30-6:15 pm Socializing with food and refreshments
6:15-6:20 pm Intro by Dan Elton, Ph.D.
6:20-6:30 pm Talk by Preston Estep, Ph.D.
6:30-6:40 pm Talk by Brian M. Delaney
6:40-6:50 pm Talk by Kris Carlson
7:00-8:00 pm Open panel discussion with the audience
8:00-8:30 pm Socializing and wind-down

Speakers and panelists:
--------- Preston Estep, Ph.D. --------
Talk title: "Why we might need advanced AI to save us from doomers, rather than the other way around."
Bio: Preston Estep III is the Founder and Chief Scientist of the Mind First Foundation and the Rapid Deployment Vaccine Collaborative (RaDVaC), and co-founder and Chief Safety Officer of Ruya AI. Since the beginning of the SARS-CoV-2 pandemic in 2020, he has focused exclusively on AI and AI safety and on decentralized pathogen countermeasures. He began writing and speaking about AI safety in the early 2000s, and gave his first public talk on AI safety at the 2008 Singularity Summit in San Jose, CA (an annual event organized by the Singularity Institute for Artificial Intelligence). Dr. Estep graduated from Cornell University and received his Ph.D. in Genetics at Harvard University in the laboratory of pioneering scientist George Church.
Abstract: Over the past two years, Dr. Estep has given several talks critically analyzing Eliezer Yudkowsky’s claims of impending AI doom. Now, with the release of Yudkowsky and coauthor Nate Soares’ new book, If Anyone Builds It, Everyone Dies, Estep adds new objections to his prior criticisms and marshals real science to push back on the book’s absurdly improbable tales of doom. Against Yudkowsky and Soares’ dubious claims, Estep argues that humanity needs advanced AI systems to protect us from the limits of human rationality, and that means we probably need more powerful AI to protect us from well-intentioned but dangerously misguided doomsayers.

-------- Brian M. Delaney -------------
Talk title: “If Nobody Builds It, Everyone Dies”
Bio: Brian M. Delaney is Chief Philosophy Officer at the Mind First Foundation and Clinical Trials Liaison at RaDVaC. He has founded and run several nonprofit and not-for-profit research organizations focused on health and longevity, with a particular emphasis on mental and neurological health. He did AI research early in his career with Eugene Charniak at Brown University and has recently been thinking and writing extensively about the co-evolution of AI and humanity.
Abstract: Eliezer Yudkowsky and Nate Soares’s If Anyone Builds It, Everyone Dies is a literal call to arms against advanced AI, when what is needed is more careful reflection on the goals of AI research, which ultimately include the goals of humanity itself. Without solving fundamental, age-old problems in philosophy, above all the nature of the Good, we cannot possibly make sound decisions about what to do about AI research. As it happens, it is AI research itself, and collaboration with advanced AIs, that might constitute the best way to determine the Good, and hence humanity’s best path forward.

--------- Kris Carlson ---------------
Talk title: "AGI Timelines and Implications for Existential Risk"
Bio: Kris Carlson is Founder and Editor-in-Chief of the journal SuperIntelligence - Robotics - Safety & Alignment. He edits the journal alongside Roman Yampolskiy, Steve Omohundro, and Allison Duettmann. He has been working full-time on AI existential safety since 2023. Previously he did research on neurostimulation at Beth Israel Deaconess Medical Center and Harvard Medical School.
Abstract: If there are 50 years until artificial general intelligence (AGI), workers with high P(doom) are relieved. If we have only two years, even low P(doom)ers are alarmed. We present three generic scenarios for the emergence of AGI: 1) scaling is all we need, 2) theoretical breakthroughs are required and AI will make them, and 3) theoretical breakthroughs by humans are needed. The ultimate goals of AGI safety technology are Provably Compliant Systems and Guaranteed Safe World Models. In the nearer term, what technology is available to ensure safe AGI?

Your host and panel discussion moderator for the evening is Dan Elton. He writes about AI, sci-tech progress, metascience, and other topics on his Substack, More is Different. He is the founder of the Metascience Observatory and a Senior Scientist at the Mind First Foundation.

This event is sponsored by The Mind First Foundation and run in conjunction with AICamp, The Boston Astral Codex Ten Meetup, and The Boston Futurology & Transhumanism Meetup.

For more AI events in Boston, check out the AI Blueprint for MA Boston in-person events listing and the listing on cerebralvalley.ai.
