
JSM Is Baltimore Bound!

1 May 2017

With more than 3,000 individual presentations arranged into approximately 204 invited sessions, 300 contributed sessions, and 500 poster and speed poster presentations, the 2017 Joint Statistical Meetings will be one of the largest statistical events in the world. It will also be one of the broadest, with topics ranging from statistical applications in numerous industries to new developments in statistical methodologies and theory. Additionally, there will be presentations about some of the newer and expanding boundaries of statistics, such as analytics and data science.

This year, the exhibit hall is the place to be. The Opening Mixer will take place there in addition to Spotlight Baltimore, which will feature events throughout the week. Moreover, if you are looking for a way to help the local community, you’ll want to visit IMPACT Baltimore. Finally, there will be an art show featuring data artists just inside the hall.

Here are the featured speakers you can expect at JSM 2017. We hope to see you there.


Featured Speakers

Monday, July 31

8:30 a.m.
IMS Medallion Lecture I
Edoardo M. Airoldi, Harvard University

“Design and Analysis of Randomized Experiments on Networks”

Classical approaches to causal inference largely rely on the assumption of “no interference,” according to which the outcome of an individual does not depend on the treatment assigned to others. In many applications, however, such as evaluating the effectiveness of health care interventions that leverage social structure or assessing the impact of product innovations on social media platforms, assuming lack of interference is untenable. In fact, the effect of interference itself is often an inferential target of interest, rather than a nuisance. In this lecture, we will formalize technical issues that arise in estimating causal effects when interference can be attributed to a network among the units of analysis, within the potential outcomes framework. We will then introduce and discuss several strategies for experimental design in this context.

10:30 a.m.
Blackwell Lecture
Martin Wainwright, University of California, Berkeley

“Information-Theoretic Methods in Statistics: From Privacy to Optimization”

Blackwell made seminal contributions to information theory and statistics, including early work on characterizing the capacities of various channels. The notion of channel capacity has a natural analogue for statistical problems, where it underlies the characterization of minimax rates of estimation. Inspired by this work, this talk is devoted to the use of information-theoretic methods for tackling statistical questions, and I will present two vignettes. First, in the realm of privacy-aware statistics, how can we characterize the tradeoffs between preserving privacy and retaining statistical utility? Second, in the realm of statistical optimization, how can we characterize the fundamental limits of dimensionality reduction methods?

This lecture will draw on joint work with John Duchi, Michael Jordan, and Mert Pilanci.

2:00 p.m.
IMS Medallion Lecture II
Emery N. Brown, Massachusetts Institute of Technology

“State-Space Modeling of Dynamic Processes in Neuroscience”

Dynamic processes are the rule, rather than the exception, in all areas of neuroscience. For this reason, many of the data analysis challenges in neuroscience lend themselves readily to formulation and study using the state-space paradigm. In this lecture, I will discuss the state-space paradigm using both point process and continuous valued observation models in the study of three problems in basic and clinical neuroscience research: characterizing how the rodent hippocampus maintains a dynamic representation of the animal’s position in its environment; real-time tracking of brain states of patients receiving general anesthesia; and real-time assessment and control of medical coma. This research has led to the development of state-space methods for point process observation models and state-space multitaper methods for time-frequency analysis of nonstationary time series.

4:00 p.m.
ASA President’s Invited Address
Jo Craven McGinty, The Wall Street Journal

Abstract unavailable.


8:00 p.m.
IMS Presidential Address and Awards Ceremony
Jon A. Wellner, University of Washington
“The IMS at 82: Past, Present, and Future”

In 2017, the IMS reaches the age of 82. Will it make it to 100? In this talk, I will argue that the IMS has succeeded remarkably well in fulfilling its fundamental goal: “To foster the development and dissemination of the theory of statistics and probability.” The IMS publishes the top journals in both statistics and probability and organizes world-class meetings on a regular basis in conjunction with its sister/brother organizations, the Bernoulli Society and the ASA.

As noted by Jim Pitman in his 2008 IMS Bulletin article, “… [A]mong many organizations with this goal, the IMS stands out as the most responsive to creative suggestions about how to achieve it.” In this talk, I will review the history of the IMS, summarize the current state of affairs of the IMS as an organization and its goals, and briefly discuss future directions. With continued creative responsiveness to recent trends, it seems likely the IMS will easily reach its 100th anniversary.


Tuesday, August 1

8:30 a.m.
IMS Medallion Lecture III
Subhashis Ghosal, North Carolina State University

“Coverage of Nonparametric Credible Sets”

The celebrated Bernstein-von Mises theorem implies that, for regular parametric problems, Bayesian credible sets are also approximately frequentist confidence sets. Thus, the uncertainty quantifications given by the two approaches essentially agree, even though they have very different interpretations. A frequentist can then construct confidence sets by Bayesian means, which are often easily obtained from posterior sampling. However, this incredible agreement can fall apart in nonparametric problems whenever the bias becomes prominent. Recently, positive results have appeared in the literature overcoming the problem by undersmoothing or inflating credible sets. We shall discuss results on Bayes-frequentist agreement of uncertainty quantification in white noise models, nonparametric regression, and high-dimensional linear models. We shall also discuss related results on nonlinear functionals.

2:00 p.m.
IMS Medallion Lecture IV
Mark Girolami, Imperial College London

“Probabilistic Numerical Computation: A Role for Statisticians in Numerical Analysis?”

A research frontier has emerged in scientific computation, founded on the principle that the numerical error arising in methods that, for example, solve differential equations entails uncertainty that ought to be subjected to statistical analysis. This viewpoint raises exciting challenges for contemporary statistical and numerical analysis, including the design of statistical methods that enable the coherent propagation of probability measures through a computational and inferential pipeline. A probabilistic numerical method is equipped with a full distribution over its output, providing a calibrated assessment of uncertainty shown to be statistically valid at finite computational levels, as well as in asymptotic regimes. The area of probabilistic numerical computation defines a nexus of ideas, philosophies, theories, and methodologies bringing together statistical science, applied mathematics, engineering, and computing science. This talk seeks to make a case for the importance of this viewpoint. I will examine probabilistic numerical methods in mathematical modeling and statistical computation while presenting case studies.

4:00 p.m.
ASA Deming Lecture
Fritz Scheuren, NORC at the University of Chicago

“A Rake’s Progress Revisited”

Deming’s statistical consulting advice to us is rich and varied. I knew Deming a little and long loved him from afar. That affection for this great man should be evident in this talk. To give focus to my remarks, I will cover just one of his quality ideas in depth: the algorithm commonly called “raking.”

Raking, or raking ratio estimation, was advanced by Deming and Stephan for use in the 1940 U.S. Decennial Census. At its heart, the algorithm iteratively ratio-adjusts the weights of a data set within categories until they simultaneously meet, within tolerance, a set of pre-specified population totals.
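
As a concrete illustration of this iterative ratio adjustment, here is a minimal sketch in Python with NumPy that rakes a two-way table of weights to prescribed row and column margins. The table, margins, and tolerance are invented for illustration; this is the basic fixed-point iteration, not Deming and Stephan's original implementation.

    import numpy as np

    def rake(weights, row_totals, col_totals, tol=1e-8, max_iter=1000):
        # Iteratively ratio-adjust a two-way table of weights until its row
        # and column sums match the prescribed totals within the tolerance.
        w = np.asarray(weights, dtype=float).copy()
        row_totals = np.asarray(row_totals, dtype=float)
        col_totals = np.asarray(col_totals, dtype=float)
        for _ in range(max_iter):
            w *= (row_totals / w.sum(axis=1))[:, None]  # scale rows to row margins
            w *= col_totals / w.sum(axis=0)             # scale columns to column margins
            if (np.abs(w.sum(axis=1) - row_totals).max() < tol and
                    np.abs(w.sum(axis=0) - col_totals).max() < tol):
                break
        return w

    # Hypothetical example: initial survey weights and known population margins.
    initial = [[10.0, 20.0], [30.0, 40.0]]
    raked = rake(initial, row_totals=[35.0, 65.0], col_totals=[45.0, 55.0])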

Some confusion surrounding raking’s introduction slowed its development. At its base, the approach was intuitive. Its theoretical development came long after, and some of the justifications for its use were misplaced, including by Deming.

For a long time, the lack of computing power limited raking to modest applications and, given the work needed, the benefit/cost ratio often seemed insufficient relative to the time and money that had to be expended. These limitations no longer hold.

The problem of variance calculation accounting for raking posed additional challenges. These were solvable asymptotically in some decennial and sample survey settings, but usually not in a closed form. Even now, replication techniques are most commonly the only practical solution available for general sample survey settings.

This talk will motivate these assertions with examples taken from my practice and that of other statisticians. My extensions to multivariate raking will also be covered, and speculations on unsolved or incompletely posed problems will be offered. Throughout, I will intersperse examples from my decades of practice.

8:00 p.m.
ASA President’s Address and Founders & Fellows Recognition
Barry D. Nussbaum

“Statistics: Essential Now More Than Ever (Or, Why Uber Should Be in the Driver’s Seat for Cars, Not for Data Analysis)”

Now is the time for statisticians to come to the rescue of rational analysis of today’s challenges. Our profession is essential now, perhaps more than ever before. The world is besieged with a deluge of Big Data and analysts ready to process it. Rather than being captive to this burgeoning field, statisticians are poised to provide the critical elements for affecting social problems. This talk focuses on the new initiatives taken to ensure the proper and vital use of statistics now and into the future. Let the Ubers of the world be in the driver’s seat to speed you to the airport, while the statisticians correctly assess the data. Examples will be given showing the crucial importance and vital strength of statisticians who are at the table when decisions are made. The art of effective collaboration with clear and succinct explanations is more important than ever. Come hear how you can improve your contributions to society and our collective standing in the world.


Wednesday, August 2

8:30 a.m.
IMS Medallion Lecture V
Judith N. Rousseau, Université Paris Dauphine

“On the Semiparametric Bernstein-von Mises Theorem in Regular and Nonregular Models”

In regular models, the renowned Bernstein-von Mises theorem states that the posterior distribution of a quantity of interest, say $\theta$, is asymptotically Gaussian with mean $\hat \theta$ and variance $V/n$ when the data are assumed to be distributed according to a model $P_{0}$. It also states that, under $P_0$, $\sqrt{n}( \hat \theta- \theta_0)$ is asymptotically Gaussian with mean zero and variance $V$. This duality between the asymptotic behavior of the posterior distribution of $\theta$ and the frequentist distribution of $\hat \theta$ has important implications in terms of strong agreement between the Bayesian and frequentist approaches. In non-regular models, a similar agreement can hold; however, the asymptotic distribution need not be Gaussian, nor the concentration rate $1/\sqrt{n}$. These results are well known in parametric models.
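
In display form (a schematic restatement of the two claims above, in the abstract's notation, with $\hat\theta$ the corresponding frequentist estimator), the regular-model duality reads
$$
\theta \mid \text{data} \;\approx\; \mathcal{N}\!\left(\hat{\theta},\, V/n\right)
\qquad \text{and} \qquad
\sqrt{n}\,(\hat{\theta} - \theta_0) \;\to\; \mathcal{N}(0,\, V) \quad \text{under } P_0.
$$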

In this talk, I will present developments that have been obtained in both regular and non-regular semiparametric models (i.e., when the parameter of interest $\theta$ is finite dimensional, but the model also includes an infinite- or high-dimensional nuisance parameter).

4:00 p.m.
COPSS Awards and Fisher Lecture
Robert E. Kass, Carnegie Mellon University

“The Importance of Statistics: Lessons from the Brain Sciences”

The brain’s complexity is daunting, but much has been learned about its structure and function, and it continues to fascinate. On the one hand, we are all aware that our brains define us; on the other hand, it is appealing to regard the brain as an information processor, which opens avenues of computational investigation.

While statistical models have played major roles in conceptualizing brain function for more than 50 years, statistical thinking in the analysis of neural data has developed much more slowly. This seems ironic, especially because computational neuroscientists can, and often do, apply sophisticated data analytic methods to attack novel problems. The difficulty is that, in many situations, trained statisticians proceed differently than those without formal training in statistics. What makes the statistical approach different and important? I will give you my answer to this question and go on to discuss a major statistical challenge, one that could absorb dozens of research-level statisticians in the years to come.


Wald Lectures

Emmanuel J. Candes, Stanford University
“What’s Happening in Selective Inference?”
Wald Lecture I: 4:00 p.m., Tuesday, August 1
Wald Lecture II: 10:30 a.m., Wednesday, August 2
Wald Lecture III: 8:30 a.m., Thursday, August 3

Science has long operated as follows: A scientific theory can only be empirically tested, and only after it has been advanced. Predictions are deduced from the theory and compared with the results of experiments so they can be falsified or corroborated. This principle, formulated by Popper and operationalized by Fisher, has guided the development of scientific research and statistics for nearly a century. We have, however, entered a new world where large data sets are available prior to the formulation of scientific theories. Researchers mine these data relentlessly in search of new discoveries, and it has been observed that we have run into the problem of irreproducibility. Consider the April 23, 2013, Nature editorial: “[…] Nature has published a string of articles that highlight failures in the reliability and reproducibility of published research.” The field of statistics needs to reinvent itself to adapt to the new reality in which scientific hypotheses/theories are generated by data snooping. I will make the case that statistical science is taking on this great challenge and discuss exciting achievements such as FDR theory, knockoffs, and post-selection inference.

