About us
DataPhilly is a community-run group for anyone interested in gaining insights from data. Topics include (but are not limited to) predictive analytics, applied machine learning, big data, data warehousing, and data science. We <3 data!
Join our Slack to help plan future events! http://bit.ly/DataPhillySlack
Email us at: admin@dataphilly.com
Found a space we can use for future meetups? https://goo.gl/Ru0eth
Found a speaker for an upcoming meetup? https://goo.gl/9DJxq0
Found a sponsor for our events? https://goo.gl/JLVfqh
Want to have access to the video recordings and details of our past events? https://dataphilly.github.io
See more at dataphilly.com
Upcoming events (2)
DataPhilly Webinar Series: Why Good Models Fail in Business Decisions
Online
We’re kicking off a new DataPhilly webinar series focused on the real-world challenges of applying analytics, ML, and AI in business. Our first session tackles a problem many teams know too well: great models that never get used. This is session 1 of 3.
Most analytics teams assume adoption fails because models aren’t explainable or sophisticated enough. In reality, resistance often comes from how models shift control, accountability, and decision authority inside organizations. This session challenges analytics and ML professionals to rethink what it means to build “good” models—not just technically correct ones, but models designed for trust, acceptance, and real decision-making.
What you’ll learn:
- Why validated, accurate models are often ignored in pricing, forecasting, and commercial decisions
- How model outputs, confidence intervals, and “optimal” recommendations can build—or erode—trust
- What repeated requests for tweaks really signal about decision risk
- Why designing for adoption may be as important as designing for accuracy
If you’ve ever asked yourself, “If a model is accurate but never acted on, is it really a good model?”—this session is for you.
Abstract: Why Good Models Fail in Business Decisions
Most analytics teams believe their biggest challenge is building better models. In practice, the harder problem is getting good models used. Pricing, forecasting, and commercial decisions are full of technically sound analyses that were validated, approved, and then quietly ignored.
This session challenges analytics and ML professionals to rethink why adoption fails. Drawing on real-world experience, it argues that resistance to machine learning is rarely about lack of explainability or technical sophistication. Instead, it reflects how models redistribute control, accountability, and decision authority inside organizations.
Rather than asking how to simplify models for business users, the talk asks a more uncomfortable question: what responsibilities do analytics teams have in designing for acceptance, not just correctness? The session explores how model outputs, confidence intervals, and “optimal” recommendations can either build trust or undermine it, and why repeated requests for tweaks are often signals of unresolved decision risk.
Provocation: If a model is accurate but never acted on, is it really a good model?
Speakers:
Venu Gorti is the Founder and CEO of Quant Matrix AI Solutions. He has spent over 18 years working closely with C-suite decision makers across global consumer-focused companies, helping them apply analytics to pricing, promotions, media, and growth decisions in real business contexts.
His work spans FMCG, retail, and consumer businesses, with experience partnering with leadership teams at large global organizations on high-stakes commercial decisions. He has published in marketing and statistics journals, and his work on pricing with PepsiCo received the Best Paper award at the Advertising Research Foundation (ARF) conference in New York.
Venu’s current focus is on bridging the gap between advanced analytics and real-world decision-making, with particular interest in trust, adoption, and the human side of analytics. He is based in Mumbai, lives with his wife and two daughters, and enjoys conversations with fellow practitioners that challenge assumptions and spark new ways of thinking.
Rahul Maan is the Founder and Principal Solutions Advisor at RCS Analytics, helping banks and insurers modernize risk and compliance capabilities on cloud-native data and analytics platforms. With 20+ years of experience, he partners with business and technology leaders to translate regulatory requirements into scalable, audit-ready solutions that drive measurable outcomes.
His work spans Stress Testing, Credit Loss Forecasting (CECL), IFRS 9, IFRS 17, LDTI, and Model Risk Management (MRM), with end-to-end delivery for 20+ customers. Rahul leads programs from target architecture through implementation and adoption, including CECL implementations on Databricks and lakehouse-based risk pipelines with strong governance and reporting.
Rahul’s current focus is enabling large banks to implement and scale enterprise risk management, strengthening controls, model oversight, and operating processes while accelerating delivery. He works across risk, finance, and technology teams to ensure solutions are defensible to regulators and practical to run. Based in North Carolina, Rahul values execution discipline and transparent risk transformation.
The webinar link will be shared before the event.
25 attendees
DataPhilly Tech Talks: Building Adaptive and Agentic AI Systems
BrainDo Office, 3 N Christopher Columbus Blvd, Philadelphia, PA, US
Join us at BrainDo for February’s DataPhilly Tech Talks, featuring two talks that explore how modern AI systems learn from human signals and act autonomously in real-world environments. This evening connects cutting-edge research with hands-on engineering, highlighting the evolution from intelligent perception to agentic execution.
Our host and sponsor, BrainDo, is a Philadelphia-based team of digital marketing experts who partner with clients to create meaningful experiences with data-driven results. They host and manage Philly Analytics, including the monthly Web Analytics Wednesdays community meetups.
🔗 phillyanalytics.org | Web Analytics Wednesday LinkedIn Group
🗓️ Agenda
6:00 – 6:30 PM: Arrival, networking, and refreshments
6:30 – 7:10 PM: Talk 1 – Tony Siu, followed by Q&A
7:10 – 7:50 PM: Talk 2 – Ben Morss, followed by Q&A
7:50 – 8:30 PM: Networking
Tony Siu: Generalizable Detection of Student Engagement in Online Learning Environments - Direct Preference Optimization
Abstract: Automated recognition of student engagement in online learning is crucial because it enables teachers to adapt content delivery to improve learning. In this paper, we explore a method that fine-tunes a pretrained vision language model (VLM) to recognize student engagement markers in still images. Our model learns to avoid incorrect answers during fine-tuning by applying the emerging direct preference optimization (DPO) technique to self-generated preference pairs built from correct and incorrect VLM answers. On publicly available student engagement datasets, our model shows superior performance over other approaches and substantially better generalizability than traditional vision methods.
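For context on the DPO step mentioned in the abstract, here is a rough, hypothetical sketch of the standard DPO objective applied to preference pairs of correct (chosen) vs. incorrect (rejected) answers. It assumes summed answer log-probabilities as inputs and is an illustration only, not the speaker's implementation.

```python
# Hypothetical sketch of the standard DPO loss over preference pairs
# (chosen = correct VLM answer, rejected = incorrect VLM answer).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a batch of preference pairs."""
    # Log-ratios of the fine-tuned policy vs. a frozen reference model
    chosen_rewards = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_rewards = beta * (policy_rejected_logp - ref_rejected_logp)
    # Logistic loss that pushes the chosen answer above the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example with made-up log-probabilities for a batch of 4 pairs
policy_chosen = torch.tensor([-12.3, -9.8, -15.1, -11.0])
policy_rejected = torch.tensor([-13.0, -11.2, -14.9, -12.5])
ref_chosen = torch.tensor([-12.5, -10.0, -15.0, -11.3])
ref_rejected = torch.tensor([-12.8, -10.9, -15.2, -12.0])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```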
Bio: Tony Siu is an AI Engineer and Multi-modal generative AI researcher with deep experience spanning applied machine learning, computer vision, and large-scale data systems. He has led end-to-end development of state-of-the-art GenAI products, including natural-language-to-game generation systems and vision-based agentic interfaces, while driving major performance and cost optimizations in production environments.
Tony has conducted published research across top-tier venues such as IEEE, CAIP, and ECAI, with a focus on multi-modal preference optimization, engagement modeling, and visual question answering. His background includes research leadership with Google, engineering roles across startups and enterprise teams, and hands-on work with CUDA-accelerated vision systems. He is also an active community builder, founding Code&Coffee Philadelphia to connect engineers, founders, and researchers across the tech ecosystem.
Ben Morss (https://benmorss.com): MCP revealed: How does MCP transform an LLM into an agent? Let's learn how MCP works, how to use it, and how to build a server and an app.
Abstract: Anthropic’s Model Context Protocol (MCP) lets your favorite LLM do things in the real world - send emails, make Jira tickets, and browse the web. It lets your LLM be an agent.
We’ll explain how MCP works in detail. We’ll show you how to set up and use a server. We’ll build a little server of our very own. And we’ll take a quick look at the very latest thing - MCP Apps and ChatGPT Apps!
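As a taste of the hands-on portion, here is a minimal, hypothetical MCP server sketch assuming the official MCP Python SDK's FastMCP helper; the server name and tool below are invented for illustration and are not material from the talk.

```python
# Minimal sketch of an MCP server using the MCP Python SDK's FastMCP helper
# (assumes the "mcp" package is installed). Server name and tool are made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add_numbers(a: float, b: float) -> float:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client can discover and call the tool.
    mcp.run()
```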
Bio: As DeepL’s Developer Evangelist, Ben works to help anyone access DeepL’s world-class AI experiences and language translations. Previously, at Google, he was a Product Manager on Chrome and a Developer Advocate for a better web. Before that he was a software engineer at the New York Times and AOL, and once he was a full-time musician. He earned a BA in Computer Science at Harvard and a PhD in Music at the University of California at Davis. You might still find him making music with the band Ancient Babies, analyzing pop songs at Rock Theory, and writing a musical that’s not really about Steve Jobs.
👉 RSVP now to secure your spot and join us for an evening of cutting-edge AI insights, great conversations, and community connections!
Please note: by RSVPing to this event, you agree to our Code of Conduct.
62 attendees
Past events (206)