As research around the world proceeds to improve the power, the scope, and the generality of AI systems, should developers adopt regulatory frameworks to help steer progress?
What are the main threats that such regulations should guard against? In the midst of an intense international race to develop better AI, are such frameworks doomed to be ineffective? Might they do more harm than good, hindering valuable innovation? Are there precedents from other fields of technology in which international agreements proved beneficial? Or is discussion of frameworks for the governance of AGI (Artificial General Intelligence) a distraction from more pressing issues, given the potentially long time scales before AGI becomes a realistic prospect?