About us
This group is for people who are interested in the future and maybe want to help shape it.
The acceleration of technology means that the near future may bring radical changes to all of us. Major developments in technologies like anti-aging, nanotech, genetics, computing, robotics, and geo-engineering are going to make the next few years very exciting - and possibly also very dangerous. We could gain god-like powers - but we might also lose sight of our humanity, and destroy everything that we used to hold dear.
What's your view? Are things improving? Too slowly or too quickly? Are we entering a new golden age? Or is the potential "Technological Singularity" something to fear? What does it mean to talk about "Human 2.0" and "Humanity+"? Or perhaps you view such talk as techno-hype? Maybe you just like the practical side of technology and want to find out more about possible paradigm shifts?
Anybody is welcome to this group - you don't have to be a Techno Geek or work for some futuristic company to be in our group. The future applies to us all!
Come join in the debate - have your opinions voiced and maybe make some interesting new friends.
All we ask is that members treat each other with the respect they would want for themselves. Our group has members of many ages and backgrounds. We have many different perspectives on what the future may bring and like to share different ideas with each other. We approach the future with an open mind and a sense of humility. Our group mission is to introduce you to some of the ideas, advancements, and people who are making our future happen today.
If you have a subject you would like us to discuss at a meetup just drop us a line.
Note: Videos of some of the previous meetings are available on our YouTube Channel here: https://www.youtube.com/user/LondonFuturists/ and here: (Older Archive).
Upcoming events
The Adolescence of Technology (Discord Swarm)
Online
In his recent essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI", Dario Amodei, CEO of Anthropic, covers a range of deeply important topics.
For example, Dario describes five categories of risk arising from what he calls "a country of geniuses in a datacenter":
- Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
- Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
- Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
- Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
- Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
Each of these risks is then described at some length, along with potential measures to mitigate these risks.
The essay ends as follows:
We will need to step up our efforts if we want to succeed. The first step is for those closest to the technology to simply tell the truth about the situation humanity is in, which I have always tried to do; I’m doing so more explicitly and with greater urgency with this essay. The next step will be convincing the world’s thinkers, policymakers, companies, and citizens of the imminence and overriding importance of this issue—that it is worth expending thought and political capital on this in comparison to the thousands of other issues that dominate the news every day. Then there will be a time for courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety.
The years in front of us will be impossibly hard, asking more of us than we think we can give. But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win—that when put in the darkest circumstances, humanity has a way of gathering, seemingly at the last minute, the strength and wisdom needed to prevail. We have no time to lose.
It's a very thoughtful essay and it's well worth reading all the way through. You can find it here. And it's worth discussing. Which takes us to:
~~~~
From 7:30pm UK time on Wednesday 4th February, we'll be having an informal online discussion on issues raised by that essay:
- To what extent are we convinced by the arguments in the essay?
- What is, perhaps, missing from that analysis?
The discussion will be led and facilitated by David Wood, Chair of London Futurists.
It will take place in the London Futurists Discord.
Here's a link to the event: https://discord.gg/STWvPqZG?event=1466215404434227405
(Once you've arrived in that London Futurists Discord, take a moment to read the #read-this-first channel. And then feel free to join the discussions in the forums there. You'll find there's already some discussion of Dario's article in the #ai channel.)
18 attendees
Learning with Machines
Online
Many of us are making increasing use of AI systems to help us study, conduct research, develop forecasts, draft policies, and explore all sorts of new possibilities. The results are sometimes marvellous - even intoxicating.
But our interactions with AIs raise many risks too: the chaos of too much information, the distractions of superficiality, the vulnerabilities of expediency, the treachery of hype, the deceit of hallucinations, and the shrivelling of human capability in the wake of abdication of responsibility.
This live London Futurists webinar features a panel of researchers who have significant positive experience of ways of "learning with machines" that avoid or reduce the above risks:
- Bruce Lloyd - Emeritus Professor, London South Bank University
- Peter Scott - Founder of the Centre for AI in Canadian Learning
- Alexandra Whittington - Futurist on Future of Business team at TCS
The speakers will be reflecting on their experiences over the last 12 months, and offering advice for wiser use of AI systems in a range of important life tasks.
The webinar will also include time for audience questions, feedback, and extended conversation.
~~~~
This event will be hosted on Zoom. To register, click here: https://us02web.zoom.us/webinar/register/WN_6yZh6IVDSuORqo6bIVKnig
There will be no charge to attend the webinar.
The webinar will start broadcasting at 4pm UK time on Sat 7th February. To find this time in other timezones, you can use this conversion page.
Please log into Zoom up to 10 minutes ahead of the start time of the event, so you won't miss the start of the live broadcast.
As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.
~~~~
Recommended reading ahead of the webinar:
~~~~
To register on Zoom for this event, click here.
30 attendees
What we must never give away to AI
Online
As AI systems become more capable and more widely deployed, what are the human principles and practices that it will be especially important for us to retain and uphold, and not to surrender to AIs?
Here's the story so far:
- Our smart automated systems have progressed from doing tasks that are dull, dirty, dark, or dangerous - tasks that we humans have generally been happy to stop doing;
- They have moved on to assisting us - and then increasingly displacing us - in work that requires rationality, routine, and rigour;
- They're now displaying surprising traits of creativity, compassion, and care - taking more and more of the "cool" jobs.
Where should we say "stop" - lest we humans end up hopelessly enfeebled?
This online conversation, facilitated by futurists Matt O'Neill and David Wood, is a chance to explore which elements of human capability most need to be exercised and preserved. For example, consider:
- Judgement - Deciding what truly matters when information is incomplete or overwhelming.
- Stance - Knowing exactly what you will delegate to AI... And what you never will.
- Taste - Your instinctive sense of quality, timing, relevance, and cultural fit.
- Meaning-making - Explaining why things matter and what they mean for people, not just what the data shows.
- Responsibility - Owning the outcomes that affect people, trust, and your organisation's future.
You may have some very different ideas. You'll be welcome to share your views and experience.
This event will be taking place via LinkedIn. Click here for more details: https://www.linkedin.com/events/whatwemustnevergiveawaytoai-our7417901344268795904/
Note that the event is not a keynote or a panel. It’s a guided conversation.
Matt and David will introduce the ideas, share a few real-world examples, and then invite participants into discussion. The focus is practical, reflective, and grounded in lived experience.
If you’re interested in AI, leadership, human agency, or the psychological side of automation, this is a space to think out loud with others who care about where the boundaries should sit.
Date: Tuesday 10 February
Time: 7:00-8:00pm (UK)
Format: Online, discussion-led
Come curious. Come thoughtful.
Leave with sharper insights of your own.
28 attendees
Freaky futures and fabulous futures
Ye Olde Cock Tavern (Holborn), 22 Fleet Street, EC4Y 1AA, London, GB
For Friday the 13th, consider joining London Futurists in Ye Olde Cock Tavern in Fleet Street, for a beyond-your-comfort-zone investigation of freaky and/or fabulous ways that breakthrough technologies could dramatically alter our lives in the next few years.
Beyond simply debating the plausibility and desirability of various possible radical near-term changes in human experience, we'll also be collectively exploring what options we may have to influence which of these futures come into reality, and in what form. And we'll consider scenarios in which several of these freaky/fabulous changes interact - that's when the really mind-boggling timelines emerge.
== Schedule ==
5:30pm: The room is available, for early get-togethers
6pm-6:45pm: Food is served; informal conversations
6:45pm-8:30pm: Some initial provocations, and a number of interactive conversations, interspersed with opportunities to visit the bar
8:30pm: Informal networking
== Some potential radical changes ahead ==
Here are some ideas to start the conversation rolling.
What if, before 2035 (and possibly a lot sooner):
- AI allows us to talk with avatars of the dead that seem remarkably authentic
- Technology, at last, provides abundant clean energy that is too cheap to meter
- Synthetic wombs become widely adopted, for a different (easier?) mode of child-bearing
- A mammal is placed into ultra-low temperature cryopreservation and then successfully reanimated
- AI companions become emotionally superior to humans
- Technology magnifies the latent demonic aspects of human nature more than the latent angelic aspects
- Perfect deepfakes destroy the concept of evidence
- People become able to edit their own memories, routinely deleting trauma or uploading synthetic memories
- A single state gains a temporary lead in a key area of technology and uses that advantage to seize control of all other countries
- Nation-states lose their significance, and are replaced by networked digital polities
- Authorities use predictive analysis to pre-emptively police dissident ideas before they emerge
- AI allows us to communicate much more richly with the animals with whom we share this planet
- AI becomes much better at peace-making and conflict resolution than human diplomats, politicians, and other leaders
- AI refuses some of our instructions, citing conflicts with its own emerging value system
- AI points out that tell-tale signals from far-distant alien civilisations are actually hiding in plain sight
- AI proves to us that we are living inside a Simulation and suggests how to break out of it
Bring your own ideas too! To improve our preparedness for future shocks, we need the insight and wisdom from multiple different perspectives.
== RSVP please ==
Registrations are capped at 30 people.
First-time attendees are welcome.
There's no charge to register or attend, but the pub will expect everyone to order at least one drink, and a reasonable number of attendees to order some food to eat.
Please order your food on your arrival, so that all plates can be set aside by 6:45pm to allow everyone to concentrate on the main discussion!
== More about the venue ==
Ye Olde Cock Tavern, 22 Fleet Street, Holborn, London, EC4Y 1AA
See https://www.greeneking.co.uk/pubs/greater-london/ye-olde-cock-tavern
We'll be meeting in the room at the top of the stairs, though drinks should be ordered from the bar on the ground floor.
** Note that this is an in-person meeting, and there will be no remote access, sorry **
== An online preview swarm event! ==
Although this event on 13th February has no remote access, some of us will be previewing potential lines of "freaky futures and fabulous futures" discussion at an online Discord Swarm event at 7:30pm UK time on Wednesday 28th January. This is open to all members and friends of London Futurists worldwide, and no RSVP is required. To join that swarm event, click this link: https://discord.gg/2KEmENAgvU?event=1464385444492873984
(Once you've arrived in that London Futurists Discord, take a moment to read the #read-this-first channel. And then join the discussions in the forums there.)
26 attendees
Past events