AI for AppSec - A discussion of AppSec Best Practices

Details
AI Use Cases for AppSec - A discussion of AppSec Best Practices
Session: 11:00am-2:30pm
Happy hour: 2:30pm-3:30pm
Topics - see abstracts below:
- Host Intro: Potential AI use cases for Application Security
- Leveraging AI for Vulnerability Identification - NowSecure
- AI Coding Agents: Risks and Benefits - Endor Lab
- AppSec for AI and NHI (Non-Human Identity) - GrayLog
- Shadow AI and AppSec: What You Don't Know Will Get You! - ByteWhisper
Lunch Provided
Scuzzi’s Italian Restaurant - 4035 N Loop 1604 W #102, San Antonio, TX 78257
HAPPY HOUR & DEMO LAB networking after the session!
- Snyk
- Rapid 7
Zoom link provided below for remote attendees
Join Zoom Meeting
https://ftsc.zoom.us/j/87354552283?pwd=riuQYJOQfGAjjEkAoY2eb5YORAvU7D.1
Meeting ID: 873 5455 2283
Passcode: 994707
We encourage everyone to attend in person. We will have door prizes and excellent food for all to enjoy as you take advantage of this great networking opportunity!
Please feel free to pass this information on to your peers and team members.
Please reply “ONSITE” if you plan on attending in person so we can finalize headcount for food and room attendance 😊
Presentations will include:
Host Intro: Potential AI use cases for Application Security
I. Leveraging AI for Vulnerability Identification - NowSecure
Artificial intelligence (AI) language models are emerging as valuable tools for mobile security analysts and developers, offering significant benefits such as aiding in structured vulnerability assessments or generating code. However, limitations such as "hallucinations," in which the model generates inaccurate or misleading outputs, highlight the importance of human oversight in managing the risk posed by AI.
This talk covers a novel approach for recovering application source code, leveraging AI language models to transform pseudo-disassembly into high-level source code. This method can handle complex abstractions introduced by high-level frameworks and languages such as SwiftUI or Dart, and generates output in popular programming languages like Swift, C#, Kotlin, Java, Python, or even Bash.
II. AI Coding Agents: Risks and Benefits - Endor Lab
The proliferation of AI coding agents will accelerate the production of code, but what are the risks associated with this acceleration? In many ways, the core challenge of securing these outputs will be the familiar, fundamental challenges that appsec has always faced: maintaining an understanding of your inventory and risk posture, conducting security assessments at scale, and managing processes for risk acceptance and remediation. Good appsec fundamentals will be critical in the new era of AI-generated code. But coding agents also introduce novel concerns born from the inherent differences between these agents and human developers, as well as the additional layers of abstraction that will become intrinsic to AI development: understanding how to vet and validate non-human agents, identifying the operational risks posed by agents trained on open source, and managing the complexity of code developed through natural language will all require new practices in appsec. This talk will look at some of the new risks that will arise in the era of large-scale AI code development, and discuss possible paths forward for deploying such agents securely.
III. AppSec for AI and NHI - GrayLog
As we empower NHIs (Non-Human Identities) to take on greater responsibilities, it's smart to wonder how we'll keep these good bots in bounds. This question can't be answered without acknowledging a dirty little secret: while modern software is already driven by bots, modern security tools fall short in observing and regulating interactions between bots and APIs, whether those bots are trusted NHIs or malicious attackers. This session dispels a few myths about bots and bot detection, and shows practical considerations and techniques to identify and block high-risk activities.
IV. Shadow AI and AppSec: What You Don't Know Will Get You! - ByteWhisper
The over-the-top headlines about artificial intelligence (AI) have been outstripped only by the breakneck speed at which many organizations are adopting AI to transform themselves. Shadow AI creates significant security exposure: development teams may process sensitive customer data through unauthorized AI tools, or build mission-critical solutions on unvetted open-source AI models. This session will focus on where Shadow AI and appsec intersect: the coding co-pilots, the platforms, and the risks they represent to your organization. It will provide an overview of Shadow AI, explain how application development can unknowingly create it, and cover tools to identify and mitigate it.