
General AI - Opportunities and Risks

Hosted By
Mark H.

Details

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that humans will undoubtedly pursue AI with increasing determination and resources. The potential risks to humans range from economic and labor disruption to extinction, making AI risk analysis and mitigation critical.

Specialized (narrow and shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets to include only information directly related to the goal. For example, a driving AI's utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed, staying within the lane, complying with traffic signs and signals, avoiding collisions, etc.).
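To make the idea of a human-specified utility function concrete, here is a minimal sketch in Python. All of the state fields and weights are invented for illustration; a real driving system would score far richer state, but the principle is the same: humans hand-pick the terms and their relative importance.

```python
# Hypothetical utility function for a narrow driving AI.
# Field names and weights are invented for illustration only.

def driving_utility(state):
    """Score a driving state; higher is better."""
    score = 0.0
    score += 10.0 * state["progress_to_destination"]  # get to the destination
    score -= 5.0 * state["speed_limit_violation"]     # stay within speed limits
    score -= 5.0 * state["lane_deviation"]            # stay within the lane
    score -= 100.0 * state["collision_risk"]          # avoid collisions above all
    return score

safe = {"progress_to_destination": 0.8, "speed_limit_violation": 0.0,
        "lane_deviation": 0.1, "collision_risk": 0.0}
risky = {"progress_to_destination": 0.9, "speed_limit_violation": 0.3,
         "lane_deviation": 0.4, "collision_risk": 0.2}

print(driving_utility(safe) > driving_utility(risky))  # True: safety terms dominate
```

Note that every term here reflects a human judgment; the system optimizes the score it is given, not the intentions behind it, which is one root of the alignment concerns discussed below.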

Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn 'in the wild', so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
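The game-playing setup described above can be sketched with a toy reinforcement-learning loop. This is an assumption-laden stand-in, not any real system: the "game" is trivial and the learner is a small lookup table, whereas something like DeepMind's Atari agent uses a deep neural network over raw pixels. The point it illustrates is the same, though: the agent is told nothing about the game's rules, only its score, and discovers a winning policy by trial and error.

```python
# Toy sketch: an agent that learns a game purely from (state, action, reward)
# experience, with no curated training data or hand-coded rules.
import random

random.seed(0)
ACTIONS = ["left", "right"]

def step(state, action):
    """Toy game with a hidden rule: score rises when the action matches it."""
    reward = 1 if (state % 2 == 0) == (action == "left") else 0
    return (state + 1) % 10, reward

# Tabular Q-learning values, initially zero: the agent knows nothing.
q = {(s, a): 0.0 for s in range(10) for a in ACTIONS}
alpha, epsilon = 0.5, 0.1
state = 0
for _ in range(2000):
    # Mostly exploit the best-known action; occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])
    state = next_state

# The learned greedy policy recovers the hidden rule without being told it.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(10)}
print(policy[0], policy[1])  # left right
```

The gap between this toy and an AGI is one of scale and generality, not of kind: the same observe-act-score loop, applied with a far more general learner, is what lets such a system "figure out everything on its own."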

Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon we will not be able to cross. The concerns arise from our uncertainty that superintelligent AIs will value us or our human objectives, or, if they do value us, that they will be able to translate that into actions that do not degrade our survival or quality of existence.

Multimedia Resources

• Demis Hassabis on Google Deep Mind and AGI (video (https://www.youtube.com/watch?v=vQXAsdMa_8A), 14:05, best content starts at 3:40)

• Google Deep Mind (Alpha Go) AGI (video (https://youtu.be/TnUYcTuZJpM), 13:44)

• Extra: Nick Bostrom on Superintelligence and existential threats (video (https://www.youtube.com/watch?v=-UIg00a_CD4), 19:54) - part of the talk concerns biological paths to superintelligence

Print Resources

• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials (http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials)

• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies (https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/), by Nick Bostrom

Brain, Mind & AI
North Domingo Baca Multigenerational Center
7521 Carmel NE, Albuquerque, NM · (505) 764-6475