
Details

Large language models and Generative Artificial Intelligence (GenAI) are all the rage.
GenAI technologies, such as large language models (LLMs) and diffusion models, have changed the computing landscape. They have enabled exciting applications, such as generating realistic images, automatic code completion, and document summarization. However, adversaries can use GenAI as well (the classic case of "dual use"). For example, adversaries can use GenAI to generate spearphishing emails or realistic-looking content that spreads disinformation. These attacks were possible before, but GenAI may greatly increase their velocity and scale. In this talk, we will discuss the risks of GenAI, with a focus on questions such as:

[1] How could attackers leverage GenAI technologies?

[2] How should security measures change in response to GenAI technologies?

[3] What are some current and emerging technologies we should pay attention to for designing countermeasures?

The material is based on a workshop organized by Google, Stanford, and UW-Madison.

We are honored to have Dr. Somesh Jha, a UW-Madison CS professor and member of the Langroid group, presenting to MadAI.
