# (Recap & Q&A) Statistical Rigor and the Promise of Data Science
### 📊 Recap & Open Q&A: Data Science, Statistics, and the Future of Estimation
Missed the original talk — or want to continue the conversation?
This meetup will begin with a concise recap of a recent talk exploring the growing tension between modern data science and traditional statistical theory in applied estimation problems.
We’ll revisit key themes, including:
- Why machine learning methods like XGBoost often improve point estimates but struggle with rigorous uncertainty quantification (see the sketch below)
- The continued importance of statistical theory for principled inference
- Trade-offs between flexibility, interpretability, and reliable confidence intervals
- Why understanding convergence and variability remains essential for sound decision-making
- Practical examples from consulting and simulation-based modeling
The original talk argued for a hybrid approach: using statistical theory as a foundation and modern data science tools as powerful augmentation.
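To make the first theme concrete, here is a minimal sketch (not from the talk itself): XGBoost produces a point estimate in one line, while attaching an interval to it takes extra machinery, here a naive bootstrap, which quantifies only sampling variability and can undercover. The synthetic data, model settings, and bootstrap scheme are illustrative assumptions, and the sketch assumes `xgboost` and `numpy` are installed.

```python
# Sketch: an XGBoost point estimate vs. a naive bootstrap interval.
# Synthetic data and hyperparameters are illustrative assumptions.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=n)  # noisy signal

x_new = np.array([[1.0]])  # point at which we want an estimate

# Point estimate from a single fit: straightforward.
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)
point = model.predict(x_new)[0]

# Naive bootstrap interval: refit on resampled data, take percentiles.
# This captures sampling variability but not model bias, and has no
# coverage guarantee -- the kind of gap the talk contrasts with
# classical confidence intervals backed by asymptotic theory.
boot = []
for _ in range(100):
    idx = rng.integers(0, n, size=n)
    m = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
    m.fit(X[idx], y[idx])
    boot.append(m.predict(x_new)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"point: {point:.3f}, naive 95% bootstrap interval: ({lo:.3f}, {hi:.3f})")
```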
After the recap, the majority of the session will be open Q&A and discussion.
This is an opportunity to:
- Ask deeper technical questions
- Challenge assumptions
- Share real-world experiences
- Explore applications in high-stakes domains like medicine
- Debate what should (and shouldn’t) carry over from classical statistics
Whether you’re a statistician, ML practitioner, researcher, or just curious about the evolving relationship between theory and practice, this session is designed to be interactive and discussion-driven.
Come ready to think out loud.
**AI summary (by Meetup):** A recap and open Q&A for data scientists and statisticians on blending statistical theory with modern data science to improve estimation and uncertainty quantification.
