
Details

From 9:30 AM to 10:00 AM we discuss RANDOM BITS, where anybody can bring information to the group that might be useful or important when considering the future. The presentation then starts at 10:00 AM and lasts until noon, usually with some time left for discussion.

## Some of the dangers of artificial superintelligence (ASI)

Eliezer Yudkowsky is a founding researcher of the field of AI alignment and played a major role in shaping the public conversation about smarter-than-human AI.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person, or all people combined. The world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter, Eliezer Yudkowsky and Nate Soares, have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us, and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

To get some idea how difficult it is to predict the timing of future milestones, consider that in 1901 Wilbur Wright, after some discouraging work, told his brother Orville that he thought humans would not fly for 1,000 years. They flew 2 years later. Superhuman artificial general intelligence (SHAGI) is more complex & therefore harder to predict than human flight.

Estimates by top AI researchers of the time remaining before SHAGI have been decreasing rapidly & currently seem to range from 1 to 20 years. That is not much time, considering all that must be done beforehand to minimize the risks.

Nobel laureate Geoffrey Hinton quit his very high-level job at Google to be free to warn about the dangers of SHAGI.

Turing Award winner Yann LeCun says we are still far from existential risk because current LLMs lack long-term memory & world models incorporating the laws of physics, among other things. I agree that SHAGI will need both, but these features could possibly be integrated with LLMs or coordinated to work alongside them.

No one fully understands how LLMs work, so there is no way to know yet how close to SHAGI they may get through increased scaling. We may not know immediately when SHAGI exists, especially if it conceals its full powers to preempt our limiting or disconnecting it. Examples of such unexpected emergent behavior have already occurred, for instance with Claude at Anthropic.

The most important take-home is that we cannot fail to solve the alignment problem & keep it solved! One failure here & it's game over. As a carbon chauvinist, I give human extinction a weighting of negative infinity, which is why I give glimpses of the risks here. To fully grasp them, I think one needs to read the whole book & consider its arguments carefully, plus other videos & papers if you aren't convinced.
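To make that weighting concrete (a minimal sketch of my own, not an argument taken from the book): once extinction is assigned a utility of negative infinity, the expected-utility arithmetic says any nonzero probability of it outweighs every finite benefit, however large.

```latex
% Sketch, assuming a simple expected-utility model (my illustration):
%   p          = probability that misaligned SHAGI causes extinction (p > 0)
%   U_survive  = finite utility of any non-extinction outcome, however large
\[
  \mathbb{E}[U] = p \cdot (-\infty) + (1 - p) \cdot U_{\text{survive}} = -\infty
  \quad \text{for any } p > 0,
\]
% so under this weighting no finite upside can justify a nonzero extinction risk.
```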

There is hope. Nuclear limitation treaties & inspections have so far helped prevent global nuclear war, which was widely expected in the 1950s & 60s, & the Asilomar meeting & agreement have so far led to global caution & monitoring of genetic engineering. In both cases, most major world leaders realized the dangers to themselves of not acting.

Presentation and discussion based on the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky & Nate Soares.
The full audiobook is available online at
https://www.youtube.com/watch?v=HtL9NkV4CjY
(but probably not for long).

Many other discussions of the existential risks of AI are easily found on YouTube. To find more technical peer-reviewed publications, just search for “Risks of superhuman AI” on
https://scholar.google.com/

Presenter - Eric Hand

Join Zoom Meeting
https://us06web.zoom.us/j/84429004388?pwd=aXAvTs9iKf2i0j3qmsu4Mk6ABzxDyf.1
Meeting ID: 844 2900 4388
Passcode: 914646

Artificial Intelligence
New Technology
Futurology
