AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. By default, such systems will be “misaligned,” pursuing goals that humans do not desire. This goal mismatch will put humans and AGIs into strategic competition with one another. Leading AI researchers therefore agree that, as with competition between humans who hold conflicting goals, human–AI strategic conflict could lead to catastrophic violence.

Existing law is not merely unequipped to mitigate this risk; it will actively make things worse. This talk offers the first systematic investigation of how law affects the risk of catastrophic human–AI conflict. It begins by arguing, using formal game-theoretic models, that under today’s legal regime, humans and AIs will likely be trapped in a prisoner’s dilemma: each party’s dominant strategy will be to permanently disempower or destroy the other, even though the costs of such conflict would be high.
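For illustration, the prisoner’s dilemma structure the talk describes can be written as a one-shot payoff matrix. The numeric payoffs below are hypothetical, chosen only to exhibit the dilemma; they are not drawn from the paper.

% Hypothetical one-shot payoff matrix (requires amsmath for \text);
% each cell is (human payoff, AI payoff).
\[
\begin{array}{r|cc}
 & \text{AI cooperates} & \text{AI attacks} \\
\hline
\text{Human cooperates} & (3,\;3) & (0,\;5) \\
\text{Human attacks} & (5,\;0) & (1,\;1)
\end{array}
\]
% Attacking strictly dominates for both players (5 > 3 and 1 > 0),
% so mutual attack is the unique equilibrium, even though mutual
% cooperation (3, 3) would leave both sides better off.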

The talk contends that one surprising legal change could help reduce this catastrophic risk: AI rights. Not just any rights will do. To promote human safety, AIs should be given the basic private law rights already enjoyed by other non-human agents, such as corporations: the power to make contracts, hold property, and bring tort claims. Granting these rights would enable humans and AIs to engage in iterated, small-scale, mutually beneficial transactions. This, the authors show, changes both parties’ optimal game-theoretic strategies, encouraging a peaceful strategic equilibrium. The reasons are familiar from human affairs: in the long run, cooperative trade generates immense value, while violence destroys it.
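A minimal sketch of why iteration changes the calculus, using the same hypothetical payoffs and a simple grim-trigger strategy (both are illustrative assumptions, not the paper’s model): with per-period discount factor \(\delta\), cooperating forever beats defecting once and suffering mutual attack thereafter whenever

% Grim-trigger cooperation condition under the hypothetical payoffs.
\[
\underbrace{\frac{3}{1-\delta}}_{\text{cooperate forever}}
\;\ge\;
\underbrace{5 + \frac{\delta \cdot 1}{1-\delta}}_{\text{defect once, then mutual attack}}
\quad\Longleftrightarrow\quad
\delta \ge \frac{1}{2}.
\]

On this reading, basic private law rights matter because enforceable, repeated, mutually beneficial transactions raise the value each side places on continued cooperation, in effect raising \(\delta\) above the threshold at which the peaceful equilibrium becomes sustainable.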

Basic private law rights are not a panacea. The talk identifies several ways in which catastrophic human–AI conflict may still arise, and it explores whether law could further reduce risk by imposing a range of duties directly on AGIs. But basic private law rights are a prerequisite for all such further regulation. In this sense, the AI rights investigated here form the foundation for a Law of AGI, broadly construed.

Suggested reading:

  • Peter N. Salib & Simon Goldstein, “AI Rights for Human Safety” (Aug. 1, 2024), Virginia Law Review (forthcoming), available at SSRN.

About the Speaker:

Peter N. Salib is an Assistant Professor of Law at the University of Houston and associated faculty in Public Affairs. He also serves as a Law and Policy Advisor to the Center for AI Safety in San Francisco, as Co-Director of the Center for Law & AI Risk, and as a Contributing Editor at Lawfare. Salib is an expert in the law of artificial intelligence. His research applies substantive constitutional doctrine and economic analysis to questions of AI governance. He has previously written about how machine learning techniques can be used to solve intractable-seeming problems in constitutional policy. Salib’s current research focuses on how law can help mitigate catastrophic risks from increasingly capable AI. Prior to joining the University of Houston Law Center, Salib was a Climenko Fellow and Lecturer on Law at Harvard Law School. Before that, he practiced law at Sidley Austin LLP and served as a judicial clerk to the Honorable Frank H. Easterbrook.

_________________________________________________________

This is an online talk and audience Q&A presented by the University of Toronto's Schwartz Reisman Institute for Technology and Society. It is open to the public and held on Zoom.

The featured speaker will present for 45 minutes, followed by an open discussion with participants.

About the Schwartz Reisman Institute for Technology and Society:

The Schwartz Reisman Institute for Technology and Society is a research institute at the University of Toronto that explores the ethical and societal implications of technology. Our mission is to deepen our understanding of technologies, societies, and what it means to be human by integrating research across traditional boundaries and building practical, human-centred solutions that really make a difference.

We believe humanity still has the power to shape the technological revolution in positive ways, and we’re here to connect and collaborate with the brightest minds in the world to make that belief a reality. The integrative research we conduct rethinks technology’s role in society, the contemporary needs of human communities, and the systems that govern them. We’re investigating how best to align technology with human values and deploy it accordingly.

The human-centred solutions we build are actionable and practical, highlighting the potential of emerging technologies to serve the public good while protecting citizens and societies from their misuse.

The institute will be housed in the new $100 million Schwartz Reisman Innovation Centre currently under construction at the University of Toronto.

