
Assessing Procedures vs. Assessing Evidence

Dr. Michael Lavine
Program Manager for Probability and Statistics
US Army Research Office
Friday, April 24, 2020, 2:30–3:30 pm

ABSTRACT: A fundamental idea in statistics and data science is that statistical procedures are judged by criteria such as misclassification rates, p-values, or convergence rates, which measure how a procedure performs when applied to many possible data sets. But such measures gloss over the evidence contained in the particular data set at hand. We show that assessing a procedure and assessing evidence are distinct tasks. The main distinction is that procedures are assessed unconditionally, i.e., by averaging over many data sets, while evidence must be assessed conditionally, by considering only the data at hand.
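A classic textbook example of this unconditional/conditional gap (standard in the foundations literature, not necessarily the example used in the talk) involves two observations X1, X2, each equal to θ−1 or θ+1 with probability 1/2. The point estimator below is correct 75% of the time when averaged over all data sets, yet for any particular data set the evidence says its conditional accuracy is either 100% (when X1 ≠ X2, since θ is then recovered exactly) or 50% (when X1 = X2). The sketch below simulates this; all names are illustrative:

```python
import random

def simulate(n_trials=100_000, theta=0.0, seed=42):
    """Compare unconditional vs. conditional accuracy of a simple estimator.

    Each observation is theta - 1 or theta + 1 with probability 1/2.
    If the two observations differ, their midpoint equals theta exactly;
    if they agree, we guess x1 + 1, which is right only half the time.
    """
    rng = random.Random(seed)
    hits = {"unequal": [0, 0], "equal": [0, 0]}  # [correct, total] per case
    for _ in range(n_trials):
        x1 = theta + rng.choice([-1.0, 1.0])
        x2 = theta + rng.choice([-1.0, 1.0])
        if x1 != x2:
            estimate = (x1 + x2) / 2  # midpoint recovers theta exactly
            key = "unequal"
        else:
            estimate = x1 + 1.0       # a guess; correct half the time
            key = "equal"
        hits[key][0] += int(estimate == theta)
        hits[key][1] += 1
    unconditional = (hits["unequal"][0] + hits["equal"][0]) / n_trials
    conditional = {k: v[0] / v[1] for k, v in hits.items()}
    return unconditional, conditional

if __name__ == "__main__":
    overall, by_case = simulate()
    print(f"unconditional accuracy: {overall:.3f}")        # about 0.75
    print(f"given x1 != x2: {by_case['unequal']:.3f}")     # exactly 1.0
    print(f"given x1 == x2: {by_case['equal']:.3f}")       # about 0.5
```

The procedure's unconditional score (about 75%) is a fair summary of its long-run performance, but for the data actually observed, the relevant conditional assessment is either 100% or 50% — which is the abstract's point that the two kinds of assessment can disagree.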


BIO: Dr. Michael Lavine received his PhD in Statistics in 1987 and spent 30 years as Professor of Statistics and Department Head at Duke University and UMass Amherst. He is currently Program Manager for Probability and Statistics at the US Army Research Office. His interests include statistical theory and methods in Bayesian robustness, Bayesian nonparametrics, and spatial statistics, with applications in ecology, the environment, and neurobiology. He has published two books, “Introduction to Statistical Thought” and “Basic Statistical Thought,” and more than 70 papers in Science, the Journal of the American Statistical Association, the Journal of Statistical Planning and Inference, the Annals of Statistics, Biometrika, The American Statistician, and many other top statistics, environmental science, and neurobiology journals. He is a fellow of the American Statistical Association.