Anticipating the Paris AI Action Summit
Details
On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit at the Grand Palais in Paris. Invited participants include heads of state and government, leaders of international organizations, CEOs of companies large and small, representatives of academia, non-governmental organizations, artists, and members of civil society.
The overarching topic is: what actions are needed to promote a good future for everyone with AI, and to avoid significant harms?
In the run-up to that Summit, this London Futurists webinar provides an opportunity to help shape the public conversation around several crucial topics:
- How to prevent a dangerous, escalating AI arms race, in which developers face increasing pressure to cut corners?
- How to establish an effective global system to govern the transition from today's relatively narrow AIs to the AIs with much more general reasoning capabilities that are likely to emerge?
- What canary signals of a forthcoming abrupt catastrophic transition should be agreed in advance, and what contingency measures should be prepared, in case these alarms sound?
- What practical steps can and should concerned citizens take, individually and collectively, to steer the development and deployment of AI in truly positive directions?
======
The speakers who have kindly agreed to help lead the conversation in this webinar are:
Dan Faggella, the founder of Emerj Artificial Intelligence Research. Dan has stated two of his beliefs as follows:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Jerome Glenn, the Executive Director of the Millennium Project. Jerome has played leading roles in more than 60 international futures research projects, and was enlisted by the European Commission to contribute the AGI (artificial general intelligence) paper for their Horizon 2025-2027 program. He is a voting member of the IEEE Standards Association’s Organizational Governance of AI Working Group.
Patty O’Callaghan, Technical Director at Charles River Laboratories. Patty is a member of the Google Developer Advisory Board and is a Women Techmakers Ambassador, with over 25 years of experience in the tech industry. As an international speaker, she delivers technical AI workshops and talks on the Technological Singularity, Responsible AI, and AI Governance.
======
This event will be hosted on Zoom. To register, visit https://us02web.zoom.us/webinar/register/WN_1Vq1L8B0QxK9lvPInR4Eow.
There will be no charge to attend the webinar.
The webinar will start broadcasting at 4pm UK time on Sat 18th January. To find this time in other timezones, use https://www.timeanddate.com/worldclock/converter.html?iso=20250118T160000&p1=136
Please log into Zoom up to 10 minutes before the start time, so you won't miss the beginning of the live broadcast.
As the discussion proceeds, audience members will be welcome to raise questions and vote to prioritise questions raised by others.
======
