Skip to content

Details

The flexibility and power of generative AI has yielded new classes of security risks for computing systems. In this talk, Keegan Hines will discuss common risks to language models such as indirect prompt injection attacks and RAG poisoning. Keegan will describe the fundamental limitations of LLMs which yield these risks and will describe ongoing work in addressing and mitigating these pressing security concerns.

About our speaker:
Keegan is a Principal Applied Scientist at Microsoft, working on the security of generative AI systems. Prior to Microsoft, Keegan has led ML teams in roles at startups, financial services, and government. He is an Adjunct Assistant Professor at Georgetown University, teaching graduate coursework in data science.

Events in Richmond, VA
Artificial Intelligence
Artificial Intelligence Applications
Machine Learning
Data Science

Members are also interested in