Building Trustworthy AI Systems Through Technical Rigor and Ethical Design
Details
As part of the AI Lunch & Learn Series, Dr David A. Rivkin will lead an in-depth exploration of Responsible AI, where ethical intent meets robust technical execution.
This live session provides a practical, end-to-end view of the AI lifecycle, from data governance and fairness-aware model development to production monitoring and incident response. Participants will learn how to balance accuracy with fairness, implement human-in-the-loop systems, and establish governance structures that make Responsible AI a consistent, repeatable practice rather than an ad-hoc response.
What you’ll gain from this live webinar:
- Practical insights into Responsible AI across the full AI lifecycle
- The six core pillars: Fairness, Transparency, Privacy, Accountability, Reliability, and Human Agency
- Technical approaches to fairness-aware training, evaluation, monitoring, and incident response
- Best practices for cross-functional collaboration and AI governance
- Why Responsible AI is a continuous practice, not a one-off compliance exercise
Who should attend:
- AI, ML, and data professionals
- Product, UX, and digital leaders working with AI systems
- Governance, risk, compliance, and ethics professionals
- Technology decision-makers and AI practitioners
- Researchers, students, and Responsible AI enthusiasts
Notes: Registration is required to receive the access link.
Sponsors: CX INSIGHT
Related topics:
- AI/ML
- Artificial Intelligence
- Artificial Intelligence Applications
- Trends and Forecasting
