Building Aligned Artificial Intelligence
Details
Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.
These words introduce the website of Aligned AI. The website continues as follows:
Aligned AI is a benefit corporation dedicated to solving the alignment problem – for all types of algorithms and AIs, from simple recommender systems to hypothetical superintelligences. The fruits of this research will then be available to companies building AI, to ensure that their algorithms serve the best interests of their users and themselves, and do not cause them legal, reputational, or ethical problems.
Our long experience in the field of AI safety has identified the two key bottlenecks for solving alignment: deducing human values, and value transfer and extrapolation. Using the funds raised for this company, we will hire top-quality researchers, programmers, and software engineers, and work through our research plan to solve the alignment problem.
This webinar, hosted by Rohit Talwar of Fast Future and David Wood of London Futurists, with support from the UK node of the Millennium Project, continues these organisations' joint project to explore and understand the rise and implications of AGI (Artificial General Intelligence). The speaker will be Stuart Armstrong, Co-founder and Chief Research Officer of Aligned AI.
Stuart will be explaining why alignment has fundamental importance for the future of AI, the approach taken by Aligned AI, and opportunities for collaboration.
======
This event will be hosted on Zoom. Click here to register. After registering, you will receive a confirmation email containing information about joining the meeting.
On this occasion, there is no charge to register for the event or to attend it.
======
The webinar will start broadcasting at 4pm UK time on Sat 14th May. To find this time in other timezones, use https://www.timeanddate.com/worldclock/converter.html?iso=20220514T150000&p1=136
Please log into Zoom up to 10 minutes ahead of the event's start time, so that you won't miss the start of the live broadcast.
As the discussion proceeds, attendees will be welcome to raise questions and vote to prioritise questions raised by others.
======
Stuart Armstrong is a former researcher at the Future of Humanity Institute, where he originated the value extrapolation approach to AI alignment.
Stuart has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the AI Alignment Forum.
Stuart is also the author of the book Smarter Than Us: The Rise of Machine Intelligence.
======
To register for this event, visit https://us02web.zoom.us/webinar/register/WN_Src2sxsdQTeX3AdOOHX8nw
