LLM Security: Hacking by Asking Nicely


Details
The rapid adoption of LLMs raises critical questions about their safety and reliability. Because of their vulnerabilities, they often require human supervision, which limits them to being tools rather than independent agents. In this talk, I will delve (wink wink) into the inherent security risks of LLMs, such as direct and indirect prompt injection. We'll illustrate these attacks with Gandalf, an LLM security game that we developed. We will also explore other dimensions of these security concerns, including data privacy issues, the potential for misuse, and the challenges of ensuring content accuracy and ethical compliance.
-------------------------------------------------------------
🔈 Announcement: This Meetup group will soon merge into Miton AI Times Meetup, so please join this group instead.
⌚️ Time: 16:00 to 17:00, both online and in person at the Miton offices, Křížíkova 34, Praha; the reception will have the necessary information.
🎙️ Speaker: Václav Volhejn
🍻 Networking after the talk.
⏺ Livestreaming: RSVP to see the link for livestreaming.
🎥 Recording: After the event we will publish a recording and post a link to it in the comments.
🚪Doors open at 15:45, and the event officially starts at 16:00.
Your expertise is about to take off. Can't wait to have you on board!