Details

In what ways (if at all) should people around the world try to constrain and guide the development and deployment of new generations of AI platforms and applications?

Recent events raise significant new issues and opportunities for the coordinated global governance of advanced AI. These include:

  • The Singapore Consensus
  • The US AI Action Plan
  • Discussions at the World AI Conference in Shanghai
  • Conversations in New York around the UN High Level Week
  • The Global Call for AI Red Lines
  • Rapid new releases of AI models
  • AI models passing new thresholds of capability

This London Futurists webinar features a number of close observers of these trends and events, each offering their suggestions for what can (and should) happen next:

  • Seán Ó hÉigeartaigh, Director, AI: Futures and Responsibility Programme, University of Cambridge
  • Kayla Blomquist, Director, Oxford China Policy Lab
  • Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
  • Duncan Cass-Beggs, Executive Director, Global AI Risks Initiative
  • Robert Whitfield, Convenor, GAIGANow
  • Nora Amman, Technical Specialist, Advanced Research and Invention Agency (ARIA)

The webinar will include time for audience questions and feedback.

======

This event will be hosted on Zoom. To register, click here: https://us02web.zoom.us/webinar/register/WN_xLsaC43AQ0icH_vo2NclyA.

There will be no charge to attend the webinar.

The webinar will start broadcasting at 4pm UK time on Sat 4th October. To find this time in other timezones, you can use this conversion page.

Please log into Zoom up to 10 minutes ahead of the start time of the event, so you won't miss the start of the live broadcast.

As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.

======

About the panellists:

Seán Ó hÉigeartaigh is Associate Director (Research Strategy) and the Programme Director for the AI:FAR research programme at the Leverhulme Centre for the Future of Intelligence (CFI). Seán was also the founding Executive Director of the Centre for the Study of Existential Risk (CSER), an academic research centre at the University of Cambridge focusing on global risks associated with emerging technologies and human activity.

Since 2011 Seán has played a central role in international research on long-term trajectories and impacts associated with artificial intelligence (AI) and other emerging technologies, project managing the Oxford Martin Programme on the Impacts of Future Technology from 2011 to 2014, co-developing the Strategic AI Research Centre (a Cambridge-Oxford collaboration) in 2015, and co-developing the Leverhulme Centre for the Future of Intelligence (a Cambridge-Oxford-Imperial-Berkeley collaboration) in 2015/16.

Kayla Blomquist conducts academic and policy research at the intersection of US-China relations and AI governance. She is currently pursuing her DPhil at the Oxford Internet Institute (Balliol College), serves as Director of the Oxford China Policy Lab, and is an affiliate of the Oxford Martin School AI Governance Initiative.

She is committed to promoting resilient US-China relations and advancing good governance both of and through AI to build a better future.

Dan Faggella founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. He has conducted nearly a thousand interviews with Fortune 500 AI leaders, C-level executives at AI unicorn startups, and AI researchers (Yoshua Bengio, Nick Bostrom, and others).

He believes that moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. Instead, we should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.

Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence (AI). Duncan has more than 25 years of experience working on domestic and international public policy issues, most recently as head of strategic foresight at the Organisation for Economic Co-operation and Development (OECD).

In 2021, Duncan and his team launched the OECD’s collaborative foresight initiative on emerging global existential risks, aiming to better inform governments and the international community on future global challenges that may require new approaches in international collaboration. A key focus of this work was on future global risks from advanced AI — work that is continuing as part of the OECD’s Expert Group on the Future of AI.

Robert Whitfield is Convenor of GAIGANow, an alliance that promotes the urgent need for effective, accountable, and inclusive global governance of AI to ensure it serves humanity safely and ethically. The Global AI Governance Alliance (GAIGANow) acknowledges the many outstanding organizations and individuals already advocating specific pathways and priorities toward this goal, and seeks to bring these voices together, fostering a broad and transformative movement dedicated to securing safe and ethical global governance of AI.

Robert is Chair of the World Federalist Movement’s Transnational Working Group on AI and Chair of the One World Trust, with an extensive career in international business, governance and the environment.

Nora Amman is a Technical Specialist at the UK's Advanced Research and Invention Agency (ARIA). In her recent article "Avoiding an AI Arms Race with Assurance Technologies", she has written that "A global race to build powerful AI is not inevitable. Here’s how technical solutions can help foster cooperation."

In addition to her role at ARIA, Nora is Board President at PIBBSS (Principles of Intelligent Behaviour in Biological and Social Systems), and has been a Research Affiliate at the Alignment of Complex Systems Research Group (ACS) and at the Simon Institute for Longterm Governance.

======

To register on Zoom for this event, click here.

AI and Society
New Technology
Political Activism
Risk Governance and Compliance
Geopolitics
