
GENERIC LLMS IN CYBERSECURITY

Hosted By
IEEE Computational Intelligence Society - SCV Chapter

Details

Free event; please register at:
https://events.vtools.ieee.org/m/489327

Generic Large Language Models (GLLMs) are continually being released with greater size and capability, strengthening their role as universal problem solvers. While the reliability of GLLMs' responses is questionable in many situations, these models are often augmented or retrofitted with external resources for various applications, including cybersecurity.

The talk will discuss two major security concerns with these pre-trained models. First, GLLMs are prone to adversarial manipulation, such as model poisoning, reverse engineering, and side-channel cyberattacks. Second, LLM-generated code that draws on open-source libraries and codelets for software development can expose projects to software supply chain attacks. These may result in information disclosure, access to restricted resources, privilege escalation, and complete system takeover.

This talk will also cover the benefits and risks of using GLLMs in cybersecurity tasks such as malware detection, log analysis, and intrusion detection. I will highlight the need for diverse AI approaches, including smaller non-LLM models trained on application-specific curated data and fine-tuned for well-tested security functions, to identify and mitigate emerging cyber threats, including zero-day attacks.
