Details

LLMs generate factually incorrect outputs (hallucinations) that look perfectly plausible. Detecting them requires analysing semantic uncertainty at the meaning level rather than the token level, a capability most observability platforms lack.
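To make the meaning-level idea concrete, here is a minimal sketch of a semantic-entropy-style check: sample several answers to the same prompt, cluster them by meaning, and measure the entropy over those clusters. The `entails` callable is a placeholder for a bidirectional-entailment check (e.g. an NLI model), and the exact-match stand-in in the demo is purely illustrative; none of this is the specific method either speaker will present.

from math import log
from typing import Callable, List


def semantic_entropy(answers: List[str],
                     entails: Callable[[str, str], bool]) -> float:
    """Cluster sampled answers by meaning, then take entropy over clusters.

    High entropy means the sampled *meanings* disagree across samples,
    the meaning-level signal described above; token-level log-probabilities
    can stay confident even when meanings diverge.
    """
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            # Same cluster only if both answers entail each other (same meaning).
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * log(p) for p in probs)


if __name__ == "__main__":
    # Toy stand-in for an NLI-based entailment check: case-insensitive exact match.
    naive_entails = lambda a, b: a.casefold() == b.casefold()
    samples = ["Pune", "Pune", "Mumbai", "Pune", "Nagpur"]
    print(f"semantic entropy: {semantic_entropy(samples, naive_entails):.3f}")

A score near zero means the samples agree in meaning; a high score flags a prompt whose answers should be treated with suspicion.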

10:00-10:30: Networking & tea
10:30-11:00: Session 1 from One2N
11:00-11:30: Session 2 from Last9
11:30-12:30: Round-table discussion with all attendees sharing their knowledge, challenges, and solutions around hallucination detection
12:30 onwards: Open

Related topics

Events in Pune, IN
AI/ML
Artificial Intelligence
Artificial Intelligence Applications
Automated Machine Learning
DevOps
