About us

The Information Systems Security Association (ISSA) is a not-for-profit, international organization of information security professionals and practitioners. It was founded in 1984 by Sandra M. Lambert and Nancy King, though work on its establishment began in 1982. ISSA promotes the sharing of information security management practices through educational forums, publications, and networking opportunities among security professionals. ISSA has more than 10,000 members in over one hundred countries, including throughout Europe and Asia.

As the founding chapter of ISSA, ISSA Los Angeles (ISSA-LA) has become the premier catalyst and community resource in Southern California for improving the practice of information security. The Chapter provides training classes and lectures for information security and IT professionals throughout the year and at the annual Summit. We accomplish this by providing:

  • Education, networking and support to information security practitioners, IT practitioners with information security responsibilities, and information security vendors
  • Outreach, advocacy and education to the broader Los Angeles community

ISSA-LA meets monthly for lunch and dinner and regularly collaborates with other IT and InfoSec organizations, holding joint meetings with ISACA, OWASP, the Cloud Security Alliance, HTCIA, and the Association of IT Professionals, to name a few.

Upcoming events

  • What Rebuilding a Poetry Site Taught Me About AI and Security


    Culver City Veterans Memorial Auditorium, Rotunda Room, 4117 Overland Avenue, Culver City, CA 90230, US

    You must register and pay to attend:
    https://www.eventbrite.com/e/what-rebuilding-a-poetry-site-taught-me-about-ai-and-security-tickets-1986106980443

    ### Topic One: Quoth the AI: “Nevermore” — What Rebuilding a Poetry Site Taught Me About AI and Security

    Edward Bonver spent weeks rebuilding a 25-year-old website with thousands of poems, using AI (Claude Code in Visual Studio Code on Windows) as his coding partner. The AI wrote clean, confident code, passed its own reviews, and introduced changes that caused production outages, including a bad deployment and data routing issues.

    This talk shares real examples from a real codebase: where AI hallucinates, where it skips steps, and how to build guardrails that actually work. We'll cover input validation, output encoding, dependency minimization, and rollback planning, grounded in the OWASP Top 10 and the OWASP Top 10 for LLMs, and look at what actually improved (and what didn't) after those failures.

    You’ll leave with a practical framework for building with AI without needing to trust it blindly, along with lessons from rebuilding at scale and safely introducing new features under AI-assisted development.

    Who Should Attend:
    Anyone whose team is adopting AI-assisted development: web developers, application security practitioners, IT auditors, digital asset managers, and technical leaders responsible for reliability and security.

    What You’ll Learn:

    • How AI-generated code fails in real systems (hallucinations, skipped steps)
    • How to write security requirements AI can actually enforce
    • Where AI hallucinations, platform assumptions, and dependency risks show up
    • How to design guardrails: validation, encoding, and dependency minimization
    • How to plan rollback and recovery when AI introduces production issues
    • A practical framework for using AI as a development partner without trusting it blindly

    Speaker One: Edward Bonver

    Edward, CISSP, CSSLP, is a seasoned cybersecurity leader with more than 25 years of experience spanning software development, assurance, and product security. His background includes roles at Raytheon Technologies, Symantec, Digital Equipment Corporation, Veritas Technologies, and Arctera. Over the course of his career, he has worked across a wide technical spectrum, from developing real-time operating systems and networking protocols to building and leading enterprise-scale product security programs.

    A recognized software security evangelist and product cybersecurity subject matter expert, Edward regularly speaks at global software industry security events and contributes to security community forums and industry alliances.

    Edward served on the SAFECode Board of Directors, representing Symantec and Raytheon Technologies, and contributed actively to SAFECode working groups and publications.

    ### Topic Two: Tales from the Coalface: How AI Is Transforming Software Development Culture to Enforce Cybersecurity and Privacy Compliance

    Cybersecurity policies and privacy compliance frameworks often fail—not because they are poorly written, but because development teams struggle to operationalize them consistently. The real challenge lies at the coalface, where developers, DevOps teams, and security requirements intersect under real-world pressure.

    In this session, Gavin Jackson shares practical field-tested insights on how artificial intelligence is reshaping software development culture and enabling teams to consistently implement cybersecurity and privacy compliance requirements. Drawing from real operational experience, this presentation explores how AI-driven tools can reinforce secure coding practices, automate policy enforcement, and create measurable accountability across development teams.

    Attendees will gain a practical understanding of how AI can be embedded into daily workflows to transform cybersecurity from a compliance obligation into an operational discipline.

    Key Takeaways:

    • How AI-assisted development tools help enforce secure coding standards
    • Using automation to bridge the gap between policy and practice
    • Building developer accountability without slowing delivery velocity
    • Cultural shifts required to sustain long-term cybersecurity compliance

    Speaker Two: Gavin Jackson

    Gavin Jackson is the Co-Founder and Chief Technology Officer of Syncrasy Dynamicx LLC, where he leads the design and execution of secure, scalable digital platforms that align technology infrastructure with core business objectives. With an executive technology career spanning the United Kingdom, Europe, the Middle East, and the United States, Gavin brings a global perspective to cybersecurity strategy, software engineering, and enterprise IT transformation.

    Gavin specializes in building resilient architectures that integrate cybersecurity, hybrid cloud environments, and predictive analytics into operational workflows. His work focuses on enabling organizations to transition from reactive IT models to proactive, intelligence-driven security and operational frameworks.

    Throughout his career, Gavin has led initiatives that delivered measurable business value through automation, data-driven decision-making, and security-first system design. He is particularly known for translating complex technical risk into clear operational strategies that executives and DevSecOps teams can execute with confidence.

    6 attendees
  • Digital Forensics for AI: When the Model Becomes the Crime Scene


    Location not specified yet

    You must register and pay to attend:
    https://www.eventbrite.com/e/digital-forensics-for-ai-when-the-model-becomes-the-crime-scene-tickets-1987788998404

    ### Topic One: Digital Forensics for AI: When the Model Becomes the Crime Scene

    Artificial intelligence systems have become increasingly embedded in critical business, security, and decision-making processes; they are no longer just tools—they are potential targets, witnesses, and even victims of cyber incidents. Traditional digital forensics methods were designed for static systems and deterministic software, but AI introduces a new paradigm where models themselves can be manipulated, poisoned, or exploited in ways that leave complex and often non-obvious traces.

    This presentation explores the emerging discipline of AI-focused digital forensics, where the model becomes the crime scene. We will examine how adversaries attack machine learning systems through techniques such as data poisoning, model inversion, and adversarial inputs, and what forensic artifacts these attacks leave behind. Attendees will gain insight into how to investigate compromised models, validate model integrity, and reconstruct attack timelines in environments where behavior is probabilistic rather than deterministic.

    The session will also address practical challenges, including lack of logging visibility, reproducibility issues, and the difficulty of distinguishing model drift from malicious tampering. Real-world scenarios and case studies will illustrate how organizations can build forensic readiness into their AI pipelines, leveraging secure MLOps practices, auditability, and governance frameworks.

    By the end of this talk, participants will understand how to extend traditional forensic methodologies into AI-driven environments, enabling them to detect, investigate, and respond to incidents where the model itself holds the evidence.

    Speaker One: Genevieve McGinty

    Genevieve McGinty is a cybersecurity and digital forensics practitioner & strategist with more than two decades of experience protecting critical infrastructure, corporate networks, and healthcare systems. As the Founder & CEO of Intelligent ForensicsX, Inc., she leads initiatives in cyber risk management, threat intelligence, and forensic analysis—helping organizations uncover digital evidence, strengthen defenses, and ensure regulatory compliance.

    Her career spans leadership roles across global enterprises and public agencies, including SAIC, large hospital systems, PVH Corp (retail), Atos North America, and several local and state legal firms. Genevieve has successfully directed large-scale security operations, incident response programs, and strategic initiatives that align cybersecurity with business objectives.

    Known for her analytical insight and commitment to operational excellence, Genevieve is a trusted advisor to executives and legal teams seeking clarity in complex security initiatives and digital investigations. Her work bridges technology, governance, and leadership—empowering organizations to anticipate threats, respond decisively, and build lasting cyber resilience in an evolving digital landscape.

    ### Topic Two: The Insider Isn’t Human: Securing AI Agents as the Newest Insider Threat

    As organizations increasingly deploy AI agents to act autonomously across security, IT, and business workflows, the definition of “insider” is rapidly changing. Hear how AI agents, which are often highly privileged, always active, and operating at machine speed, introduce a new class of insider risk that traditional security models are not designed to detect.

    Attendees will learn why static rules and alert‑driven approaches fall short for agent activity, how behavioral intelligence provides the missing context to identify abnormal or risky behavior, and what it means to extend insider‑threat detection and investigation beyond humans to include AI systems themselves.

    Speaker Two: Steve Wilson

    Steve Wilson is the Chief AI and Product Officer at Exabeam, where his team applies cutting-edge AI technologies to tackle real-world cybersecurity challenges. He founded and co-chairs the OWASP Gen AI Security Project, the organization behind the industry-standard OWASP Top 10 for Large Language Model Security list.

    His award-winning book, "The Developer’s Playbook for Large Language Model Security" (O'Reilly Media), was selected as the best Cutting Edge Cybersecurity Book by Cyber Defense Magazine.

    Steve contributed to the development of Java at Sun Microsystems and held leadership positions at industry giants Citrix and Oracle. He holds 11 U.S. and international patents, and was named the Cybersecurity Innovation Leader by Enterprise Security Tech.

    11 attendees
  • The AI Shift in Cybersecurity: What Breaks, What Survives, What Wins


    Location not specified yet

    You must register to attend: https://www.eventbrite.com/e/the-ai-shift-in-cybersecurity-what-breaks-what-survives-what-wins-tickets-1988245285170

    ### Topic One: The AI Shift in Cybersecurity: What Breaks, What Survives, What Wins

    Cybersecurity is entering a new phase, defined not by a single tool or technology, but by a fundamental shift in how work gets done.

    In this session, Mikhael Felker shares a “state of the union” perspective on how AI is reshaping cybersecurity functions and decision-making. Drawing from experience across enterprise environments and startup innovation, he will explore what is changing rapidly, what remains constant, and where security leaders should focus to stay ahead.

    The discussion will go beyond AI tooling to examine how core functional areas, including vulnerability management, offensive security, the secure development lifecycle (SDLC), security operations, GRC, and hiring, are evolving in practice. Attendees will gain concrete examples of how traditional workflows are compressed, augmented, or replaced, alongside areas where human judgment, risk trade-offs, and organizational design remain critical.

    This session is designed for security leaders and practitioners who want a clear, practical lens on how to prepare for the next phase of cybersecurity, and how to position themselves and their teams for what comes next.

    Speaker One: Mikhael Felker

    Mikhael Felker is the Head of Security & Privacy Engineering at Verily Health. He has over 20 years of experience in security, privacy, risk, compliance, and IT, working at both startups and several Fortune 10 companies. Mikhael is an advisor for ArmorIQ.

    Felker received his M.S. in Information Security Policy and Management from Carnegie Mellon University and a B.S. in Computer Science from UCLA. His written work, comprising over 50 publications, has been featured in Forbes, ACM, IEEE Security & Privacy, ISACA Journal, ISSA Journal, podcasts, case studies, and several online magazines.

    5 attendees

Members

1,134
