

About us
PyData is an educational program of NumFOCUS, a 501(c)(3) non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
The PyData Code of Conduct governs this meetup. To discuss any issues or concerns relating to the code of conduct or the behavior of anyone at a PyData meetup, please contact NumFOCUS Executive Director Leah Silen (+1 512-222-5449; [leah@numfocus.org](mailto:leah@numfocus.org)) or the group organizer.
Upcoming events
2

April Meetup: LLM hallucinations and time-series models
Moderna HQ, 325 Binney St, Cambridge, MA, US
💻 Skip April showers, get new tech powers with PyData Boston!
We currently have no food sponsor for this event! Reach out if you're able to sponsor!
RSVP is REQUIRED to attend
Do not arrive before 6:30pm!
📅 Schedule:
6:30–7:00 — Networking
7:00–7:15 — Introduction
7:15–8:15 — Large Language Models can Hallucinate Speaker Transitions (Julia Mertens)
8:15–8:30 — Break
8:30–9:15 — Classifying Time Series with Foundation Models (Abhishek Murthy, Evans Addo)
9:15–9:30 — Wrap-up

Speaker: Julia Mertens (Boston Fusion)
Title: Large Language Models can Hallucinate Speaker Transitions
Abstract: Large Language Models (LLMs) can perform a wide range of tasks, but the field continues to struggle to define the boundaries of their performance, including the extent to which LLMs can learn to replicate human-like dialogue skills. In this paper, we investigated the cognitive alignment between LLMs and interlocutors in dialogue. Specifically, we explored whether GPT models can learn the relationship between "who is speaking" and "what they will say." Surprisingly, we found that LLMs modeled this relationship during speaker transitions, but struggled to model sequences where the same person produces two turns in a row. This suggests that LLMs may hallucinate speaker transitions where there are none. This finding provides a potential explanation for qualitative examples where audio-visual models inject speaker transitions when reading scripts, and suggests that LLMs may struggle to attend to the underlying, smoother signals in dialogue.

Speakers: Abhishek Murthy (Schneider Electric and Northeastern University), Evans Addo (Northeastern University)
Title: Classifying Time Series with Foundation Models
Abstract: Time series classification traditionally relies on task-specific models and extensive feature engineering, limiting reuse across domains and making labeled data a persistent bottleneck. Recent advances in time series foundation models challenge this approach by enabling large-scale, self-supervised pretraining over diverse temporal data.

In this talk, we explore how models like MOMENT can be used as generic representation learners for time series classification. We'll start with a quick intuition for how these models work, including patch-based transformers and masked time series pretraining, and then walk through practical ways of applying them using a publicly available motor diagnostics dataset. The goal is to highlight when these models work well, what tradeoffs to be aware of, and how practitioners can start using them effectively.

📍 Venue provided by Moderna
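The patch-based, masked-pretraining idea the abstract mentions can be sketched in a few lines of NumPy. This is a toy illustration of the input preparation only, not MOMENT's actual implementation; the patch length and mask ratio here are arbitrary assumptions:

```python
import numpy as np

def patch_and_mask(series, patch_len=8, mask_ratio=0.3, seed=0):
    """Split a 1-D series into fixed-length patches and zero out a random
    subset; a masked-pretraining model learns to reconstruct those patches."""
    n_patches = len(series) // patch_len
    patches = series[: n_patches * patch_len].reshape(n_patches, patch_len)
    rng = np.random.default_rng(seed)
    n_masked = int(round(mask_ratio * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    masked = patches.copy()
    masked[masked_idx] = 0.0  # masked patches are hidden from the encoder
    return patches, masked, masked_idx

series = np.sin(np.linspace(0, 6 * np.pi, 64))
patches, masked, idx = patch_and_mask(series)
print(patches.shape)  # (8, 8): 8 patches of length 8
```

For classification, the embeddings such a pretrained encoder produces are typically fed to a lightweight downstream classifier, which is presumably the kind of workflow the talk walks through.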
This, and all NumFOCUS-affiliated events and spaces, both in-person and online, are governed by a Code of Conduct:
👉 https://pydata.org/code-of-conduct/

⚡⚡**Speak at PyData!**⚡⚡
We are always looking for speakers! Sign up here and we'll be in touch:
🔗 https://forms.gle/kfFZ5hiqA9W57Ewg7

⚡⚡**Sponsor an event!**⚡⚡
PyData events are free and open to all. We’re always looking for sponsors and hosts. Get in touch to support the community:
📧 boston@pydata.org

44 attendees
Past events
63


