
What we’re about
This group is for people who are interested in the future and maybe want to help shape it.
The acceleration of technology means that the near future may bring radical changes to all of us. Major developments in technologies like anti-aging, nanotech, genetics, computing, robotics, and geo-engineering are going to make the next few years very exciting - and possibly also very dangerous. We could gain god-like powers - but we might also lose sight of our humanity, and destroy everything that we used to hold dear.
What's your view? Are things improving? Too slowly or too quickly? Are we entering a new golden age? Or is the potential "Technological Singularity" something to fear? What does it mean to talk about "Human 2.0" and "Humanity+"? Or perhaps you view such talk as techno-hype? Maybe you just like the practical side of technology and want to find out more about possible paradigm shifts?
Anybody is welcome to this group - you don't have to be a Techno Geek or work for some futuristic company to be in our group. The future applies to us all!
Come join in the debate - voice your opinions and maybe make some interesting new friends.
All we ask is that members treat each other with the respect they would want for themselves. Our group has members of many ages and backgrounds. We have many different perspectives on what the future may bring and like to share different ideas with each other. We approach the future with an open mind and a sense of humility. Our group mission is to introduce you to some of the ideas, advancements and people who are making our future happen today.
If you have a subject you would like us to discuss at a meetup just drop us a line.
Note: Videos of some of the previous meetings are available on our YouTube Channel here https://www.youtube.com/user/LondonFuturists/ and here: (Older Archive).
Upcoming events (1)
Options for the future of global AI governance
In what ways (if at all) should people around the world try to constrain and guide the development and deployment of new generations of AI platforms and applications?
Recent events raise significant new issues and opportunities regarding the possibilities for coordinated global governance of advanced AI. These include:
- The Singapore Consensus
- The US AI Action Plan
- Discussions at the World AI Conference in Shanghai
- Rapid new releases of AI models
- AI models passing new thresholds of capability
This London Futurists webinar features a number of close observers of these trends and events, each offering their suggestions for what can (and should) happen next:
- Seán Ó hÉigeartaigh, Director, AI: Futures and Responsibility Programme, University of Cambridge
- Kayla Blomquist, Director, Oxford China Policy Lab
- Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
- Duncan Cass-Beggs, Executive Director, Global AI Risks Initiative
- Other panellists to be announced.
The webinar will include plenty of time for audience questions and feedback.
======
This event will be hosted on Zoom. To register, click here: https://us02web.zoom.us/webinar/register/WN_xLsaC43AQ0icH_vo2NclyA.
There will be no charge to attend the webinar.
The webinar will start broadcasting at 4pm UK time on Sat 4th October. To find this time in other timezones, you can use this conversion page.
Please log into Zoom up to 10 minutes ahead of the start time of the event, so you won't miss the start of the live broadcast.
As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.
======
About the panellists:
Seán Ó hÉigeartaigh is Associate Director (Research Strategy) and the Programme Director for the AI:FAR research programme at the Leverhulme Centre for the Future of Intelligence (CFI). Seán was also the founding Executive Director of the Centre for the Study of Existential Risk (CSER), an academic research centre at University of Cambridge focusing on global risks associated with emerging technologies and human activity.
Since 2011 Seán has played a central role in international research on long-term trajectories and impacts associated with artificial intelligence (AI) and other emerging technologies, project managing the Oxford Martin Programme on the Impacts of Future Technology from 2011-2014, co-developing the Strategic AI Research Centre (Cambridge-Oxford collaboration) in 2015, and the Leverhulme Centre for the Future of Intelligence (Cambridge-Oxford-Imperial-Berkeley collaboration) in 2015/16.
Kayla Blomquist conducts academic and policy research at the intersection of US-China relations and AI governance. She is currently pursuing her DPhil at the Oxford Internet Institute (Balliol College), serves as Director of the Oxford China Policy Lab, and is an affiliate of the Oxford Martin School AI Governance Initiative.
She is committed to promoting resilient US-China relations and advancing good governance both of and through AI to build a better future.
Dan Faggella founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. He has conducted nearly a thousand interviews with Fortune 500 AI leaders, AI unicorn startup C-level execs, and AI researchers (Yoshua Bengio, Nick Bostrom, etc).
He believes that moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. Instead, we should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence (AI). Duncan has more than 25 years of experience working on domestic and international public policy issues, most recently as head of strategic foresight at the Organisation for Economic Co-operation and Development (OECD).
In 2021, Duncan and his team launched the OECD’s collaborative foresight initiative on emerging global existential risks, aiming to better inform governments and the international community on future global challenges that may require new approaches in international collaboration. A key focus of this work was on future global risks from advanced AI — work that is continuing as part of the OECD’s Expert Group on the Future of AI.
======
To register on Zoom for this event, click here.