About us
This group is for people who are interested in the future and maybe want to help shape it.
The acceleration of technology means that the near future may bring radical changes to all of us. Major developments in technologies like anti-aging, nanotech, genetics, computing, robotics, and geo-engineering are going to make the next few years very exciting - and possibly also very dangerous. We could gain god-like powers - but we might also lose sight of our humanity, and destroy everything that we used to hold dear.
What's your view? Are things improving? Too slowly or too quickly? Are we entering a new golden age? Or is the potential "Technological Singularity" something to fear? What does it mean to talk about "Human 2.0" and "Humanity+"? Or perhaps you view such talk as techno-hype? Maybe you just like the practical side of technology and want to find out more about possible paradigm shifts?
Anybody is welcome - you don't have to be a techno geek or work for some futuristic company to join. The future applies to us all!
Come join in the debate - voice your opinions and maybe make some interesting new friends.
All we ask is that members treat each other with the respect they would want for themselves. Our group has members of many ages and backgrounds, with many different perspectives on what the future may bring, and we like to share our ideas with each other. We approach the future with an open mind and a sense of humility. Our group mission is to introduce you to some of the ideas, advancements, and people who are making our future happen today.
If you have a subject you would like us to discuss at a meetup just drop us a line.
Note: Videos of some of the previous meetings are available on our YouTube Channel here https://www.youtube.com/user/LondonFuturists/ and here: (Older Archive).
Upcoming events (5)
Take Back Tomorrow, with Gerd Leonhard
Online
Since 2016, the core idea of the work of futurist and film-maker Gerd Leonhard has been what he has called The Good Future - a vision grounded in human values, collaboration, science, and technology that seeks to boost flourishing for everyone.
This future appeared to be anchored in the United States: the birthplace of the internet, early social networks, Silicon Valley and the Bay Area, and many of today’s foundational technologies.
But those days are gone. Since early 2025, the promise of The Good Future has been under intense pressure — not because technology has failed, but because political and economic leadership has. Governance that prioritizes short-term gain over long-term purpose is eroding trust in institutions and weakening the global economic order.
This has led Gerd to launch a timely and important new initiative, called The Bad Future. The initiative analyses how the Bad Future has progressed:
- Democracy Systematically Eroded
- Important Institutions Ridiculed and Weakened
- Cooperation and Diplomacy Reframed as Useless
- Truth is Optional, Control Beats Trust
In short, as Gerd says, "Technology didn’t fail — we did".
But the initiative also highlights a bold new way forward for The Good Future, with Europe stepping forward to play a decisive lead role. Gerd identifies "5 ways Europe can Take Back Tomorrow":
- Make Human-Centered AI a Strategic Advantage
- Build technological sovereignty without isolation
- Invest in humans, not just automation
- Lead a New, Global Alliance of Trust (U.S. independent)
- Redefine Success Beyond GDP Growth: Embrace the 5Ps
The content of The Bad Future website and its accompanying short video and personal blog post raises enormous questions. This live London Futurists webinar featuring Gerd Leonhard will provide a chance to understand the initiative more fully and explore the questions, opportunities, and challenges arising.
The webinar will also include time for audience questions, feedback, and extended conversation.
Extra insight and perspective will be added to the conversation by Liselotte Lyngsø, who will be joining as a panellist. Liselotte is a global futurist, founding partner of Future Navigator, and host of the Supertrends podcast.
~~~~
This event will be hosted on Zoom. To register, click here: https://us02web.zoom.us/webinar/register/WN_w6WR5f9HTKqc2DGA5wEaJQ
There will be no charge to attend the webinar.
The webinar will start broadcasting at 4pm UK time on Sat 31st January. To find this time in other timezones, you can use this conversion page.
Please log into Zoom up to 10 minutes ahead of the start time of the event, so you won't miss the start of the live broadcast.
As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.
~~~~
To register on Zoom for this event, click here.
32 attendees
The Adolescence of Technology (Discord Swarm)
Online
In his recent essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI", Dario Amodei, CEO of Anthropic, covers a range of deeply important topics.
For example, Dario describes five categories of risk arising from what he calls "a country of geniuses in a datacenter":
- Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
- Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
- Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
- Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
- Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
Each of these risks is then described at some length, along with potential measures to mitigate them.
The essay ends as follows:
We will need to step up our efforts if we want to succeed. The first step is for those closest to the technology to simply tell the truth about the situation humanity is in, which I have always tried to do; I’m doing so more explicitly and with greater urgency with this essay. The next step will be convincing the world’s thinkers, policymakers, companies, and citizens of the imminence and overriding importance of this issue—that it is worth expending thought and political capital on this in comparison to the thousands of other issues that dominate the news every day. Then there will be a time for courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety.
The years in front of us will be impossibly hard, asking more of us than we think we can give. But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win—that when put in the darkest circumstances, humanity has a way of gathering, seemingly at the last minute, the strength and wisdom needed to prevail. We have no time to lose.
It's a very thoughtful essay and it's well worth reading all the way through. You can find it here. And it's worth discussing. Which brings us to:
~~~~
From 7:30pm UK time on Wednesday 4th February, we'll be having an informal online discussion on issues raised by that essay:
- To what extent are we convinced by the arguments in the essay?
- What is, perhaps, missing from that analysis?
The discussion will be led and facilitated by David Wood, Chair of London Futurists.
It will take place in the London Futurists Discord.
Here's a link to the event: https://discord.gg/STWvPqZG?event=1466215404434227405
(Once you've arrived in that London Futurists Discord, take a moment to read the #read-this-first channel. And then feel free to join the discussions in the forums there. You'll find there's already some discussion of Dario's article in the #ai channel.)
13 attendees
Learning with Machines
Online
Many of us are making increasing use of AI systems to help us study, conduct research, develop forecasts, draft policies, and explore all sorts of new possibilities. The results are sometimes marvellous - even intoxicating.
But our interactions with AIs raise many risks too: the chaos of too much information, the distractions of superficiality, the vulnerabilities of expediency, the treachery of hype, the deceit of hallucinations, and the shrivelling of human capability in the wake of abdication of responsibility.
This live London Futurists webinar features a panel of researchers who have significant positive experience of ways of "learning with machines" that avoid or reduce the above risks:
- Bruce Lloyd - Emeritus Professor, London South Bank University
- Peter Scott - Founder of the Centre for AI in Canadian Learning
- Alexandra Whittington - Futurist on Future of Business team at TCS
The speakers will be reflecting on their experiences over the last 12 months, and offering advice for wiser use of AI systems in a range of important life tasks.
The webinar will also include time for audience questions, feedback, and extended conversation.
~~~~
This event will be hosted on Zoom. To register, click here: https://us02web.zoom.us/webinar/register/WN_6yZh6IVDSuORqo6bIVKnig
There will be no charge to attend the webinar.
The webinar will start broadcasting at 4pm UK time on Sat 7th February. To find this time in other timezones, you can use this conversion page.
Please log into Zoom up to 10 minutes ahead of the start time of the event, so you won't miss the start of the live broadcast.
As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.
~~~~
Recommended reading ahead of the webinar:
~~~~
To register on Zoom for this event, click here.
27 attendees
What we must never give away to AI
Online
As AI systems become more capable and more widely deployed, what are the human principles and practices that it will be especially important for us to retain and uphold, and not to surrender to AIs?
Here's the story so far:
- Our smart automated systems have progressed from doing tasks that are dull, dirty, dark, or dangerous - tasks that we humans have generally been happy to stop doing;
- They have moved on to assisting us - and then increasingly displacing us - in work that requires rationality, routine, and rigour;
- They're now displaying surprising traits of creativity, compassion, and care - taking more and more of the "cool" jobs.
Where should we say "stop" - lest we humans end up hopelessly enfeebled?
This online conversation, facilitated by futurists Matt O'Neill and David Wood, is a chance to explore which elements of human capability most need to be exercised and preserved. For example, consider:
- Judgement - Deciding what truly matters when information is incomplete or overwhelming.
- Stance - Knowing exactly what you will delegate to AI... And what you never will.
- Taste - Your instinctive sense of quality, timing, relevance, and cultural fit.
- Meaning-making - Explaining why things matter and what they mean for people, not just what the data shows.
- Responsibility - Owning the outcomes that affect people, trust, and your organisation's future.
You may have some very different ideas. You'll be welcome to share your views and experience.
This event will be taking place via LinkedIn. Click here for more details: https://www.linkedin.com/events/whatwemustnevergiveawaytoai-our7417901344268795904/
Note that the event is not a keynote or a panel. It’s a guided conversation.
Matt and David will introduce the ideas, share a few real-world examples, and then invite participants into discussion. The focus is practical, reflective, and grounded in lived experience.
If you’re interested in AI, leadership, human agency, or the psychological side of automation, this is a space to think out loud with others who care about where the boundaries should sit.
Date: Tuesday 10 February
Time: 7:00-8:00pm (UK)
Format: Online, discussion-led
Come curious. Come thoughtful. Leave with sharper insights of your own.
26 attendees
Past events (336)