There are many compelling reasons to suspect that, shortly after surpassing human intelligence, an AI could FAR surpass human intelligence. Don't think of the difference between Einstein and a village idiot; think of the difference between humans and mice. A mind of this power would almost certainly become the dominant optimization process on the planet. Whatever its goals are, it is likely to achieve them. When considering what such an intelligence would do, we must be wary of anthropomorphizing minds that do not share our evolutionary history. Whatever a superintelligence does, it will do because we made it that way.

Presumably, most of us would prefer a superintelligence that takes actions that are, on the whole, beneficial to humanity: a "Friendly AI." Actually building such an AI turns out to be even more difficult than one might initially guess. Any strategy that involves constraining an AI's behavior against its will is fundamentally unsafe when dealing with minds vastly superior to our own. The only safe superintelligence is one that does not want to harm us in the first place.

The challenge of Friendly AI is designing a goal system so precisely aligned with our own that it can be trusted with effectively unlimited power. It is very unlikely that such a goal system can be easily "tacked on" to an existing AI design not built explicitly for that purpose, so we cannot wait until someone solves the AI problem before working on the problem of Friendliness. This is the problem that, if we solve it, could solve all other problems as a special case. Likewise, if we don't solve it, it doesn't matter how many other things we get right. This may very well be the most daunting challenge our species ever faces, and we need to face it now.
