Details

Attempts to raise awareness about catastrophic and existential risk often seem to be ineffective or counterproductive. Listeners turn away, find excuses not to engage, or make only token gestures as a result. What's going on?

People with long experience in the fields of marketing and advertising have pointed out several features of human psychology relevant to such conversations:

  1. Our deep-rooted aversion to contemplating annihilation - whether personal or global
  2. Our practical inability to grasp exponential escalation
  3. Our disengagement when conversations seem overly abstract or hypothetical, rather than having personal salience
  4. Our reluctance to accept one-size-fits-all instructions handed down by self-declared elites, with no chance to contribute meaningfully to the solutions

Researchers at the Seismic Foundation have gone beyond theorising to conduct empirical research into the effectiveness of different kinds of conversations about the serious risks posed by misaligned AI. This video https://www.youtube.com/watch?v=AAw2yd2WKgo features a thought-provoking conversation between Philip Trippenbach of the Seismic Foundation and John Sherman of the AI Risk Network on the implications for how to talk to people about AI x-risk.

In this open online London Futurists swarm conversation, we'll be joined by a number of people who have given a great deal of thought to these topics:

  • Anthony Bailey, volunteer operations team leader at Pause AI Global
  • John Sherman, President of the AI Risk Network
  • Hannah Betts, whose MSc thesis examined news media coverage of AI risks and benefits
  • Mayank Adlakha, Policy Advisor at ControlAI

This swarm conversation will be held inside a Zoom Meeting. No advance registration is required, but you will arrive in a Waiting Room until a co-host admits you. By default, please keep your mic muted. You are welcome to turn on your camera and to exchange comments (respectfully, please) in the Chat window.

Here's a link to the event: https://us02web.zoom.us/j/82167994008?pwd=V3sICUwFn5515Bp88JMtFOeqkPAfqX.1

You can take part either from the Zoom app, if you have installed it, or from a web browser.

~~~~

About the special guests:

Anthony Bailey describes himself as a mid-fifties nerd who quit his AI research and development job at Amazon in 2024 to volunteer for PauseAI.info because superintelligence risks remain neglected. For a slightly longer bio, see http://antb.me/pdoom.

John Sherman is President of The AI Risk Network and Founder & Executive Director of GuardRailNow, a 501(c)(3) nonprofit running urgent public-facing campaigns to prevent AI extinction risk.

Hannah Betts has recently completed a Master's thesis investigating news media representation of AI risks and benefits, illuminating the key challenges for journalists who cover this emerging technology. Prior to this, Hannah was the second operations hire at FAR.AI, where she wore many hats: maintaining day-to-day business operations while supporting the launch of FAR.Labs, the FAR.AI Alignment Workshops, and other projects.

Mayank Adlakha is a Policy Advisor at ControlAI where he has briefed over 80 Parliamentarians on the risks posed by advanced AI models. Prior to joining ControlAI, Mayank served in the UK Government's Department for Science, Innovation and Technology (DSIT), where he helped establish the world's first AI Safety Institute (now the AI Security Institute). He also contributed to the Bletchley AI Safety Summit and played a leading role in shaping the UK's frontier AI legislation.

Related topics

AI and Society
Marketing
Political Philosophy
Risk Governance and Compliance
Futurists
