Is AI an existential threat?

This is a past event

20 people went

Filter Cafe

1373-1375 Milwaukee · Chicago, IL

How to find us

We'll be at the back.


Details

Moore's Law (http://en.wikipedia.org/wiki/Moore%27s_law) describes the observed exponential growth of computing technology. Roughly every 18 months, the number of transistors on a silicon chip doubles. This generally results in lower power requirements, greater miniaturization, and higher speed.

Futurist Hans Moravec observed that this exponential performance trend existed long before the advent of semiconductors. Building on that, Ray Kurzweil generalized Moore's Law into a proposed law of accelerating returns. If Kurzweil is correct, computing technology will continue to accelerate for decades to come: by the year 2029, a $1,000 PC will have the same raw computing power as the human brain, and 18 months later it will be twice as fast. If so, the days of human employment would seem to be numbered.
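The arithmetic behind that extrapolation can be sketched in a few lines. This is a toy calculation built on the article's own assumptions (a 2029 crossover at $1,000 and an 18-month doubling period), not established fact:

```python
# Toy extrapolation: if $1,000 of compute matches one human brain in 2029
# and doubles every 18 months thereafter, how many "brain equivalents"
# does $1,000 buy in a given year? The crossover year and doubling period
# are the article's assumptions, not measured values.

def brain_equivalents(year, crossover_year=2029, doubling_years=1.5):
    """Human-brain equivalents of raw compute per $1,000 in `year`."""
    return 2 ** ((year - crossover_year) / doubling_years)

print(brain_equivalents(2029))    # 1.0  (the assumed crossover point)
print(brain_equivalents(2030.5))  # 2.0  (18 months later: twice as fast)
print(brain_equivalents(2044))    # 1024.0 (ten doublings in 15 years)
```

Under these assumptions the gap compounds quickly: fifteen years past the crossover, the same $1,000 buys a thousand brain-equivalents of raw compute.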

Eventually, computers will exceed our abilities in all respects, and the best computer technicians and the best computer programmers will be machines. Perhaps the same machine. This extrapolation (http://en.wikipedia.org/wiki/Technological_singularity#Basic_concepts) has led authors and philosophers to contemplate the notion of a runaway superintelligence (http://en.wikipedia.org/wiki/Superintelligence). With each cycle, such an AI improves its performance and upgrades itself. In a short period of time, it would be as far above our intelligence as we are above chimpanzees.

Star Trek: The Next Generation had an episode (http://www.hulu.com/watch/442059) asking a similar question. What happens when a well-meaning person expands his intelligence orders of magnitude beyond the norm? Can he resist the temptation to impose his will when he knows what is best for everyone?

Perhaps we will eschew general AI and instead rely on AI only for specific tasks, like driving cars. But this seems unlikely. We face plenty of problems that can only be solved with general intelligence, i.e., a broad understanding of the world and the ability to create new concepts and theories and to test them.

Stephen Hawking, Bill Gates, and Elon Musk have all sounded the alarm about the dangers AI poses to humanity. Are they right? Will a superintelligence kill us?