AI Safety Level 3: "so what" Lightning talks


Details
What are the risks from this year's AI models and how can they be mitigated?
We invite Boston tech leaders, engineers, researchers, and policy experts to share 5-6 minute lightning talks (e.g., Pecha Kucha style) communicating something you've learned about the promise and peril of recent LLMs.
These brief oral presentations (backed by pictures or other visuals) keep the barrier to entry low and help spread awareness.
Some possible topics:
- What significant safety wins over the last 5 years enable some trust in AI models?
- Is "training with my data" the main concern? Is there a hierarchy of alignment and safety challenges?
- Are there big safety differences between foundation models?
- What are AI safety levels? Why do we have them and who sets them?
- Does it matter whether a model is open source?
- How do metaphors help or hinder understanding of AI safety?
- What are the critical security precautions to take when developing with AI?
- How does universal adoption help threat actors?
- How do we separate genuinely new risks from existing risks that AI exacerbates?
If you need any materials to support your topic, check out our recommended readings/courses/resources on AI risks and mitigations.
Code of conduct:
Food, drinks, pets, and weapons are strictly prohibited by the Boston Public Library.
This is a forum for open discussion between participants, intentionally open to new ideas. Disparaging or harassing other participants is unacceptable. We want AI Safety Awareness to be a safe and productive environment for everyone.
We do not tolerate harassment of any participant, for any reason. Harassment includes deliberate intimidation and targeting individuals in a manner that makes them feel uncomfortable, unwelcome, or afraid.
Participants asked to stop any harassing behavior are expected to comply immediately. We reserve the right to respond to harassment in the manner we deem appropriate, including but not limited to expulsion and referral to the relevant authorities.