
ASA Leaders Reminisce: Brad Efron

1 September 2015
Jim Cochran

In the ninth installment of the Amstat News series of interviews with ASA presidents and executive directors, we feature a discussion with 2004 president Bradley Efron.

Bradley Efron is the Max H. Stein Professor of Humanities and Sciences, professor of statistics at Stanford University, and professor of biostatistics with the department of health research and policy in the school of medicine. He completed his undergraduate work in mathematics at the California Institute of Technology in 1960 and earned his doctorate in statistics from Stanford in 1964, joining the Stanford faculty that same year. Brad was promoted to full professor there in 1972. He was associate dean for the school of humanities and sciences from 1987 to 1990, served three terms as chair of the department of statistics, and, since 1980, has served as co-director of Stanford’s Mathematical and Computational Sciences Program. Brad has held visiting faculty appointments at Harvard University; Imperial College, London; and the University of California, Berkeley. He was elected president of both the American Statistical Association (2004) and the Institute of Mathematical Statistics (1987–1988). He is a past theory and methods editor of the Journal of the American Statistical Association and was the founding editor of The Annals of Applied Statistics from 2006 to 2012.

Brad is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, the Royal Statistical Society, the International Statistical Institute, the John D. and Catherine T. MacArthur Foundation, and the American Academy of Arts and Sciences. He is also a member of the National Academy of Sciences. A recipient of the Ford Prize of the Mathematical Association of America and both the Wilks Medal and the Noether Prize of the American Statistical Association, Brad was awarded the 1998 Parzen Prize for Statistical Innovation by Texas A&M University. In 2003, he was selected for the first Rao Prize for outstanding research in statistics by Pennsylvania State University.

In 2015, Brad received the National Medal of Science “for his contributions to theoretical and applied statistics, especially the bootstrap sampling technique; for his extraordinary geometric insight into nonlinear statistical problems; and for applications in medicine, physics, and astronomy.” In 2014, he was awarded the Guy Medal in Gold by the Royal Statistical Society, an honor bestowed only every three years, “in recognition of his hugely influential contributions to both theoretical and applied statistics.” The published citation continues: “He has made seminal contributions to many areas of statistics, including empirical Bayes analysis, the analysis of survival data, applications of differential geometry to statistical theory, and analysis of multiple testing problems in inference for gene expression data. He is best known for his introduction of the bootstrap method of statistical inference. His work is characterized by its depth, simplicity of presentation, geometric insights and by a desire to understand statistical procedures from both frequentist and Bayesian perspectives.”
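The bootstrap singled out in these citations is simple enough to convey in a few lines. The sketch below is purely illustrative and not drawn from the interview; the toy data, the function name bootstrap_se, and the choice of 2,000 resamples are my own assumptions. The idea is to resample the observed data with replacement, recompute the statistic on each resample, and use the spread of those replicates to estimate the statistic’s standard error.

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Estimate the standard error of `statistic` by resampling `data` with replacement."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    replicates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=len(data), replace=True)  # draw n items with replacement
        replicates[b] = statistic(resample)                        # recompute the statistic
    return replicates.std(ddof=1)                                  # spread of replicates estimates the SE

# Toy example: standard error of a sample median, a case with no simple textbook formula
x = np.array([3.1, 2.4, 5.6, 4.2, 3.8, 6.1, 2.9, 4.7])
print(bootstrap_se(x, np.median))
```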

2004 ASA President Bradley Efron

Q Thank you for taking time for this interview, Brad. You earned your BS degree in mathematics from the California Institute of Technology before earning your MS and PhD in statistics from Stanford University. Did any courses that you took in pursuit of your mathematics degree or any of the faculty at the California Institute of Technology motivate your future studies in statistics?
A The number of statistics courses at Caltech during my undergraduate years was close to zero. In my senior year, I prevailed on one of the math faculty guys to let me do a reading course in Cramér’s wonderful book Mathematical Methods of Statistics. Cramér wrote the book under virtual house arrest in Sweden during the Second World War. It is mostly probability and good old-fashioned math stat, but there I was under statistical house arrest at Caltech, and it very much appealed. It still has an honored place on my shelf.

Q You have consulted extensively with several organizations, including the RAND Corporation and the Google Analytics Group. What particular projects that you worked on with these organizations were most interesting or challenging?
A In the 1970s, RAND was carrying out the Health Insurance Study. This was an amazingly ambitious randomized experiment on the effects of various levels of insurance—100% compensation, 50% compensation, etc.—on medical usage. The economist Joe Newhouse was the principal investigator, while Carl Morris was the lead statistician. My role, which was vastly enjoyable, was to discuss statistics in general and the Health Insurance Study in particular with Carl. My main contact at Google was my former PhD student Omkar Muralidharan. They are almost literally awash in data at Google, to the point where “terabytes” disparagingly describes small data sets. The place is sort of a madhouse of science, numbers, and business. We talked about bringing empirical Bayes methods into play. Omkar and his colleagues have continued to develop such ideas, and much more.
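As a rough illustration of the empirical Bayes shrinkage idea mentioned here (my own sketch, not anything from the Google work), the classic James-Stein estimator shrinks a collection of unit-variance observations toward their grand mean by a factor estimated from the data themselves:

```python
import numpy as np

def james_stein(z):
    """Positive-part James-Stein estimate for z_i ~ N(mu_i, 1), i = 1..k (k >= 4)."""
    z = np.asarray(z, dtype=float)
    k = len(z)
    zbar = z.mean()
    s = np.sum((z - zbar) ** 2)
    shrink = max(0.0, 1.0 - (k - 3) / s)  # shrinkage factor estimated from the data
    return zbar + shrink * (z - zbar)     # pull each z_i toward the grand mean

# Toy example with eight unit-variance observations
z = np.array([1.2, -0.4, 2.5, 0.3, -1.1, 0.8, 1.9, -0.2])
print(james_stein(z))
```

When many parallel estimates are in play, this kind of data-driven shrinkage can beat the usual one-at-a-time estimates in total squared error, which is the appeal of empirical Bayes in large-scale settings like the one described above.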

Q In 2007, you became the founding editor of The Annals of Applied Statistics. What motivated you to establish this journal?
A Ah, The Annals of Applied Statistics, by far my most successful editorial stint. In fact, AOAS was suggested by the Council of the Institute of Mathematical Statistics, showing that committees really can have good ideas. They felt, correctly, that the IMS needed to have some presence in the world of statistical applications. My editorial masterstroke was to recruit three world-class “area editors”: Steve Fienberg for social science, Mike Newton for biostatistics, and Mike Stein for physical sciences. It’s hard to define “applied statistics.” Lining up topics from right (pure math stat) to left (direct applications), we took as our remit anything in the line’s left half. The hardest thing about starting a new journal is getting enough good papers, or, sometimes, enough papers period. I kept nervous graphs of our submissions, agonizing over random fluctuations downward. After a couple of years, the graphs took a sudden lurch upward, and the rest is history, as they say. My parting joke was to put one of my own papers—the paper’s not the joke—at the end of my last issue as editor.

Q You have been teaching statistics for more than 50 years. What course or courses have you most enjoyed teaching, and why?
A My favorite course to teach is Exponential Families in Theory and Practice, usually taught to our first-year PhD students. It’s often required, which guarantees an audience! I start out with a sort of bull’s-eye picture—normal theory in the central circle and general asymptotics around the periphery. Then, and this is the dramatic part, I draw in an intermediate circle labeled “exponential families.” The point of the picture, and the course, is that exponential family theory is the main way statisticians extend the exact results of the normal world toward more general structures. And it’s worth saying that the theory hasn’t been left in the dust by modern Big Data statistics, and in fact is prospering even in Google’s extra-Big Data environment.
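For readers who have not met the term, the one-parameter exponential family Brad is describing can be written in its standard natural form (the notation below is mine, not from the interview):

```latex
f_\eta(y) = e^{\eta y - \psi(\eta)} \, f_0(y)
```

Here \eta is the natural parameter, \psi(\eta) is the normalizing (cumulant) function, and f_0 is a fixed carrier density. The normal, Poisson, binomial, and gamma families all take this form; the N(\mu, 1) family, for instance, has \eta = \mu and \psi(\eta) = \eta^2/2, which is the sense in which exact normal-theory results extend outward through the middle ring of the bull’s-eye.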

Q In 1983, you were named a MacArthur Prize Fellow. What was the impact of this award on your career as a statistician?
A The money from the MacArthur award was quickly spent, but the effect on my reputation in the general world of science lingered. Compared to other fields, statisticians don’t give each other enough awards. The COPSS (Committee of Presidents of Statistical Societies) award, for instance, has had a good effect on highlighting several worthy careers. I very much like that Rao and Parzen have instituted quite successful award series.

Q On what special project are you currently working?
A Right now, my main professional activity involves a book in progress, Computer Age Statistical Inference, that I am writing with Trevor Hastie. Our idea was to examine the effect of electronic computation on the development of statistical theory and practice from the 1950s to the present. Each chapter surveys a particular topic—empirical Bayes, survival analysis, generalized linear models, etc.—and tries to say how electronic computation did or did not affect its development. We’re into the 21st century at last, with topics such as false discovery rates, sparsity and the lasso, and support vector machines making their appearance.
