

In his recent essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI", Dario Amodei, CEO of Anthropic, covers a range of deeply important topics.

For example, Dario describes five categories of risk arising from what he calls "a country of geniuses in a datacenter":

  1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
  2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
  3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
  4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
  5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?

Each of these risks is then described at some length, along with potential measures to mitigate them.

The essay ends as follows:

We will need to step up our efforts if we want to succeed. The first step is for those closest to the technology to simply tell the truth about the situation humanity is in, which I have always tried to do; I’m doing so more explicitly and with greater urgency with this essay. The next step will be convincing the world’s thinkers, policymakers, companies, and citizens of the imminence and overriding importance of this issue—that it is worth expending thought and political capital on this in comparison to the thousands of other issues that dominate the news every day. Then there will be a time for courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety.

The years in front of us will be impossibly hard, asking more of us than we think we can give. But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win—that when put in the darkest circumstances, humanity has a way of gathering, seemingly at the last minute, the strength and wisdom needed to prevail. We have no time to lose.

It's a very thoughtful essay and it's well worth reading all the way through. You can find it here. And it's worth discussing. Which takes us to:

~~~~

From 7:30pm UK time on Wednesday 4th February, we'll be having an informal online discussion on issues raised by that essay:

  • To what extent are we convinced by the arguments in the essay?
  • What is, perhaps, missing from that analysis?

The discussion will be led and facilitated by David Wood, Chair of London Futurists.

It will take place in the London Futurists Discord.

Here's a link to the event: https://discord.gg/STWvPqZG?event=1466215404434227405

(Once you've arrived in that London Futurists Discord, take a moment to read the #read-this-first channel. And then feel free to join the discussions in the forums there. You'll find there's already some discussion of Dario's article in the #ai channel.)

Related topics

Artificial Intelligence
Risk Management
Futurists
Geopolitics
Singularity
