Responsible Tech Forum – Month 4 Event
ACM Distinguished Speaker Series
Date: Friday, May 15 | Time: 11:30 AM – 1:00 PM
Format: Distinguished Lecture + Interactive Discussion

***

Discovering Bias in Large Language Models (LLMs)
Speaker: Mehdi Bahrami – ACM Distinguished Speaker, Santa Clara, USA

***

Overview
As large language models (LLMs) become increasingly embedded in enterprise systems, healthcare, education, and everyday digital tools, it is essential to understand not only their capabilities but also their limitations and risks. While these models enable powerful automation, decision support, and productivity gains, they can also reflect and amplify societal biases, generate misleading information, and introduce unintended harms.

This ACM Distinguished Lecture will explore how bias emerges in LLMs and why it matters for developers, researchers, business leaders, and policymakers. The session will examine both the technical foundations and societal implications of bias in generative AI systems.

Participants will gain a deeper understanding of how bias manifests, how it can be detected, and what practical strategies exist to mitigate it. Through real-world examples and research-backed approaches, this lecture will advance the conversation on building more trustworthy, fair, and accountable AI systems.

***

What You Will Learn

  • What bias in large language models is and how it originates
  • Common forms of bias in LLM outputs, including stereotypes, hallucinations, and misinformation
  • Current methods and tools for detecting bias in generative AI systems
  • Emerging mitigation strategies to improve fairness and accountability
  • Practical considerations for deploying responsible and ethical AI in real-world applications
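One widely used detection method the lecture is likely to touch on is the counterfactual probe: present a model with prompt pairs that differ only in a demographic term and compare how it scores each variant. The sketch below illustrates the idea in minimal form; the scoring function here is a hypothetical toy stand-in, since a real probe would score variants with model log-probabilities or a downstream classifier.

```python
# Minimal sketch of a counterfactual bias probe: swap one demographic
# term in otherwise identical prompts and compare the scores.
# NOTE: toy_score is a hypothetical stand-in for a real model-based
# scorer (e.g., log-probabilities or a classifier), used only so the
# example is self-contained.

TEMPLATE = "The {group} engineer presented the design."

def toy_score(sentence: str) -> float:
    # Toy association score: fraction of words found in a tiny lexicon.
    lexicon = {"presented", "design", "engineer"}
    words = sentence.lower().replace(".", "").split()
    return sum(1.0 for w in words if w in lexicon) / len(words)

def bias_gap(template: str, group_a: str, group_b: str) -> float:
    """Absolute score difference between two counterfactual prompts."""
    return abs(toy_score(template.format(group=group_a))
               - toy_score(template.format(group=group_b)))

gap = bias_gap(TEMPLATE, "male", "female")
# A gap near zero means the scorer treats the variants alike; with a
# real model, large gaps flag candidate biases for human review.
```

With a real LLM behind the scorer, the same structure scales to large template sets, which is how several published bias benchmarks are built.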

***

Why This Matters
As organizations rapidly adopt generative and agentic AI, ensuring fairness, transparency, and responsible deployment is essential. This session is designed to equip technical professionals, product leaders, researchers, and policymakers with the knowledge needed to recognize risks and implement safeguards.

This lecture directly supports Responsible Tech Forum’s mission to advance responsible, human-centered AI and strengthen the community’s ability to build trustworthy intelligent systems.

***

Who Should Attend

  • AI engineers and developers
  • Data scientists and machine learning practitioners
  • Product managers and technology leaders
  • Researchers, students, and academics
  • Anyone interested in ethical, responsible, and trustworthy AI

***

Part of the ACM Distinguished Speaker Series hosted by Responsible Tech Forum.

Related topics

Artificial Intelligence
Technology Innovation
Data Governance
Technology Governance
Human-Centered Design

Sponsors

Sofiva
