This Meetup was canceled.
Statistical Methods and Data Skepticism
Data analysis today is dominated by three paradigms: null hypothesis significance testing, Bayesian inference, and exploratory data analysis. There is concern that all these methods lead to overconfidence on the part of researchers and the general public, and this concern has led to the new "data skepticism" movement.
But the history of statistics is already in some sense a history of data skepticism. Concepts of bias, variance, sampling and measurement error, least-squares regression, and statistical significance can all be viewed as formalizations of data skepticism. All these methods address the concern that patterns in observed data might not generalize to the population of interest.
We discuss the challenge of attaining data skepticism while avoiding data nihilism, and consider some proposed future directions.
Andrew Gelman (http://www.stat.columbia.edu/~gelman/) is one of the leading quantitative researchers at the interface of social science and statistics. He has received numerous honors for his work, including the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40.
Andrew has written several books on statistical methods, as well as "Red State, Blue State, Rich State, Poor State", a book about U.S. voting patterns. He is also well known for his blog (http://www.stat.columbia.edu/~gelman/blog/), "Statistical Modeling, Causal Inference, and Social Science", which covers topics such as data analysis, statistical graphics, politics, social science and academics in general.
Andrew received his undergraduate degrees in math and physics at MIT and his PhD in statistics from Harvard. He is currently a professor of statistics and political science and director of the Applied Statistics Center at Columbia University.