The Risks of Hardcoding Secrets in AI-Generated Code
Details
Hybrid Attendance: Join us in person or online (link to be provided).
Join us for discussion, food, appsec news, and an OWASP-related talk.
For our June meeting, Julie Peterson, Senior Product Marketing Manager at Cycode, will be speaking to the chapter about The Risks of Hardcoding Secrets in AI-Generated Code.
Machine learning, and in particular large language models (LLMs), has paved the way for groundbreaking advancements in many fields, including code generation. However, this innovation is not without risk. One known issue is that these models can generate code containing hardcoded secrets, such as API keys or database credentials, rather than following the recommended practice of retrieving them at runtime from a secrets manager.
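To illustrate the difference, here is a minimal Python sketch contrasting the risky pattern with runtime retrieval. It assumes the secret is injected as an environment variable or stored in AWS Secrets Manager under the illustrative name my-app/db-password; the names are placeholders, not part of the talk itself.

```python
import os
import boto3

# Risky pattern often seen in AI-generated snippets: the credential is
# baked into the source and ends up in version control.
# DB_PASSWORD = "s3cr3t-p4ssw0rd"   # <-- hardcoded secret (do not do this)

def get_db_password() -> str:
    """Fetch the database password at runtime instead of hardcoding it."""
    # Prefer an environment variable injected by the deployment platform.
    password = os.environ.get("DB_PASSWORD")
    if password:
        return password

    # Otherwise fall back to a dedicated secrets manager
    # (AWS Secrets Manager shown here; the secret name is illustrative).
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="my-app/db-password")
    return response["SecretString"]
```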
In this presentation, we consider the following:
- What hardcoded secrets are and how to prevent them
- The importance of secrets management
- The impact of LLMs on code generation
- How to mitigate the risk of hardcoded secrets in LLM-generated code (one simple check is sketched below)
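As a taste of the mitigation topic, the following sketch scans source files for strings that look like hardcoded credentials before they are committed. The regex patterns are illustrative only; dedicated secret-scanning tools in CI use far richer rule sets.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|password|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan_file(path: Path) -> list[str]:
    """Return findings for lines that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

if __name__ == "__main__":
    all_findings = [f for arg in sys.argv[1:] for f in scan_file(Path(arg))]
    print("\n".join(all_findings) or "No likely secrets found.")
    sys.exit(1 if all_findings else 0)
```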
This will be an exciting session, so RSVP now!
