
Looking Forward to JSM

1 May 2019

With more than 3,000 individual presentations arranged into approximately 200 invited sessions, 300 contributed sessions, and 900 poster and speed presentations, the 2019 Joint Statistical Meetings will be one of the largest statistical events in the world. It will also be one of the broadest, with topics ranging from statistical applications in numerous industries to new developments in statistical methodology and theory. Additionally, it will include presentations about some of the newer and expanding boundaries of statistics, such as analytics and data science.

JSM offers a unique opportunity for statisticians in academia, industry, and government to exchange ideas and explore opportunities for collaboration, as well as for beginning statisticians (including current students) to learn from and interact with senior members of the profession. We hope to see you in Denver. In the meantime, enjoy these highlights.

Featured Speakers


ASA President’s Invited Address


Teresa A. Sullivan
President Emerita of the University of Virginia

Monday, July 29, 4:00 p.m.

Deming Lecture


Nicholas Fisher
University of Sydney

“Walking with Giants: A Research Odyssey”

Tuesday, July 30, 4:00 p.m.

This lecture describes a statistical research journey interwoven, DNA-like, with a series of encounters with great names in the development of improved management practices since the Second World War. The research journey—which still continues—relates to developing a systematic statistical approach to problems of performance measurement and touches on such seemingly disparate issues as board reporting, measuring and monitoring safety culture, rating the research quality of university departments, strategic planning, and the efficient and effective delivery of government programs. The great names—those of people whose work has transformed whole industries and countries—include William Cleveland, W. Edwards Deming, Ray Kordupleski, Richard Normann, Homer Sarasohn, Myron Tribus, Yoshikazu Tsuda, and Norbert Vogel.

ASA President’s Address and Awards


Karen Kafadar
University of Virginia

Tuesday, July 30, 8:00 p.m.

COPSS Awards and Fisher Lecture


Paul Rosenbaum
University of Pennsylvania

“An Observational Study Used to Illustrate Methodology for Such Studies”

Wednesday, July 31, 4:00 p.m.

A natural experiment in health outcomes research is used to frame a discussion of statistical methods for causal inference in observational studies. A narrative description of the study’s design, analysis, and reception is interrupted twice, first to describe algorithmic developments in optimal matching in design and, second, to describe analyses that inform discussions of unobserved biases due to the absence of randomized treatment assignment. Specific topics include fine balance constraints imposed on minimum distance matched samples, sensitivity analyses when treatment effects are heterogeneous, and design sensitivity as a tool to evaluate study designs and analytical techniques.
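
To make the matching step concrete, here is a minimal sketch (my illustration, not Rosenbaum's software) of optimal pair matching posed as an assignment problem: each treated unit is paired with a distinct control so that the total within-pair covariate distance is minimized, using SciPy's linear_sum_assignment. The covariates are synthetic placeholders, and fine balance constraints and sensitivity analysis lie beyond this sketch.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
# Synthetic covariates: 20 treated units and 60 potential controls.
treated = rng.normal(0.5, 1.0, size=(20, 3))
controls = rng.normal(0.0, 1.0, size=(60, 3))

# Cost matrix of pairwise covariate distances (Euclidean for brevity;
# Mahalanobis or propensity-score distances are common in practice).
cost = cdist(treated, controls)

# Optimal pair matching: each treated unit gets a distinct control,
# minimizing the total within-pair distance.
rows, cols = linear_sum_assignment(cost)
print("matched pairs:", list(zip(rows, cols))[:5], "...")
print("total distance:", cost[rows, cols].sum().round(2))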

Medallion Lecture I


Yee Whye Teh
Department of Statistics, University of Oxford

“On Statistical Thinking in Deep Learning”

Sunday, July 28, 4:00 p.m.

In recent years, machine learning, and deep learning in particular, have undergone tremendous growth, much of it driven by advances that are computational in nature, including software and hardware infrastructures that support increasingly complex models and enable the use of ever more intensive compute power. As a result, the field is becoming more computational in nature. In this talk, I would like to highlight the continuing importance of statistical thinking in deep learning by drawing examples from my research blending probabilistic modelling, Bayesian nonparametrics, and deep learning. In particular, I will talk about neural processes, which use neural networks to parameterize and learn flexible stochastic processes for use in meta-learning (also known as learning to learn), and the use of probabilistic symmetries in answering recent questions about neural network architecture choices satisfying certain invariance properties.
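
As a toy illustration of the invariance point (my example, not the speaker's models): a set-input network whose elements are encoded independently and combined by sum pooling, in the spirit of Deep Sets, is exactly invariant to permutations of its input.

import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))  # per-element encoder weights ("phi")
W2 = rng.normal(size=(8, 1))  # post-pooling decoder weights ("rho")

def set_function(X):
    """f(X) = rho(sum_i phi(x_i)); sum pooling makes f permutation invariant."""
    H = np.tanh(X @ W1)                 # encode each set element independently
    return np.tanh(H.sum(axis=0) @ W2)  # pool symmetrically, then decode

X = rng.normal(size=(5, 4))    # a set of five 4-dimensional elements
Xp = X[rng.permutation(5)]     # the same set in a different order
assert np.allclose(set_function(X), set_function(Xp))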

Medallion Lecture II


David Dunson
Duke University

“Learning and Exploiting Low-Dimensional Structure in High-Dimensional Data”

Monday, July 29, 8:30 a.m.

This talk will focus on the problem of learning low-dimensional geometric structure in high-dimensional data. We allow the lower-dimensional subspace to be nonlinear. There are a variety of algorithms available for “manifold learning” and nonlinear dimensionality reduction, mostly relying on locally linear approximations and not providing a likelihood-based approach to inference. We propose a new class of simple geometric dictionaries for characterizing the subspace, along with a simple optimization algorithm and a model-based approach to inference. We provide strong theoretical support in terms of tight bounds on covering numbers, showing the advantages of our approach relative to local linear dictionaries. These advantages are shown to carry over to practical performance in a variety of settings, including manifold learning, manifold denoising, data visualization (providing a competitor to the popular t-SNE), and classification (providing a competitor to deep neural networks that requires fewer training examples). We additionally provide a Bayesian nonparametric methodology for inference, which is shown to outperform current methods such as mixtures of multivariate Gaussians.
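
The abstract does not detail the proposed dictionaries, but the locally linear approximations it contrasts with can be sketched directly: each point sampled near a circle is projected onto the leading principal direction of its k nearest neighbors, a simple form of manifold denoising. The data and the choice k=15 are placeholders.

import numpy as np

rng = np.random.default_rng(2)
# Noisy samples from a circle: a one-dimensional manifold in the plane.
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(300, 2))

def local_pca_denoise(X, k=15):
    """Project each point onto the top principal direction of its k-NN patch."""
    out = np.empty_like(X)
    for i, x in enumerate(X):
        idx = np.argsort(((X - x) ** 2).sum(axis=1))[:k]  # k nearest neighbors
        mu = X[idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(X[idx] - mu, full_matrices=False)
        out[i] = mu + ((x - mu) @ Vt[0]) * Vt[0]          # local tangent line
    return out

X_hat = local_pca_denoise(X)   # points pulled back toward the circle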

Wald Lectures


Trevor J. Hastie
Stanford University

“Statistical Learning with Sparsity”

Wald I: Monday, July 29, 10:30 a.m.
Wald II: Tuesday, July 30, 2:00 p.m.
Wald III: Wednesday, July 31, 10:30 a.m.

This series of three talks takes us on a journey that starts with the introduction of the lasso in the 1990s and brings us up to date on some of the vast array of applications that have emerged.

I: We motivate the need for sparsity with wide data and then chronicle the invention of the lasso and the quest for good software. Several examples will be given, culminating with lasso models for polygenic traits using GWAS data. We end with a survey of active areas of research not covered in the remaining two talks.

II: Matrix completion re-emerged during the Netflix competition as a way to compute a low-rank SVD in the presence of missing data and to impute the missing values. We discuss algorithms and aspects of this problem and illustrate its application in recommender systems and in modeling sparse longitudinal multivariate data.

III: The graphical lasso builds sparse inverse covariance matrices to capture the conditional independencies in multivariate Gaussian data. We discuss this approach and extensions and then illustrate its use for anomaly detection and imputation. We also discuss the group lasso, with applications in detecting interactions and additive model selection.
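
As a hedged illustration of the two estimators at the center of talks I and III, the sketch below fits a lasso to synthetic wide data and a graphical lasso to Gaussian data using scikit-learn (one of many implementations; glmnet, co-authored by the speaker, is the reference R package). The penalty levels are arbitrary placeholders.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)

# Talk I: wide data (p >> n) with only a handful of truly active features.
n, p = 100, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, -0.5, 0.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

fit = Lasso(alpha=0.1).fit(X, y)   # L1 penalty zeroes out most coefficients
print("nonzero coefficients:", np.flatnonzero(fit.coef_))

# Talk III: sparse inverse covariance for multivariate Gaussian data.
Z = rng.multivariate_normal(np.zeros(4), np.eye(4), size=500)
glasso = GraphicalLasso(alpha=0.2).fit(Z)
print("estimated precision matrix:\n", glasso.precision_.round(2))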

Medallion Lecture III


Helen Zhang
University of Arizona

“Breaking Curse of Dimensionality in Nonparametrics”

Monday, July 29, 2:00 p.m.

The curse of dimensionality refers to the sparsity of high-dimensional data and the associated challenges for statistical analysis. Traditional nonparametric methods provide flexible modeling tools to discover nonlinear and complex patterns in data, but they often experience theoretical and computational difficulties when handling high-dimensional data. Over the past two decades, rapid advances have occurred in nonparametrics to break the curse of dimensionality. A variety of state-of-the-art nonparametric methods, theory, and scalable algorithms have been developed to extract low intrinsic dimension from data and accommodate high-dimensional data analysis more effectively. In this talk, I will survey recent work on nonparametric methods for model estimation, variable selection, and inference in high-dimensional regression, classification, and density estimation problems. Related issues and open challenges will be discussed as well.

IMS Presidential Address and Awards Ceremony


Xiao-Li Meng
Harvard University

“011, 010111, & 011111100100”

Monday, July 29, 8:00 p.m.

Human intelligence is increasingly being challenged by the artificial one it created. We are confused and troubled by what AI can, should, or will do, or even by its meaning (Michael Jordan, Harvard Data Science Review). Performance-driven methods are becoming more popular, be they labeled AI, ML, or DS. Yet procedures without theoretical insight into how, why, and when they work are a frustration of our profession. Deep learning without deep understanding highlights the dilemma. Are we out of depth, out of imagination, or simply out of breath? How do we cultivate and inspire more “deep minds” for our profession to turn our collective frustration into fruition? Where is our “3-Body Problem” to push our imagination beyond the current asymptopia? Or, if three is too small a number for the big data frenzy, what are our “Hilbert’s 23 Problems” to refuel our deep (re)search of principles? You don’t need a deep mind to decipher my title, but we need a theoretical revolution no smaller than the calculus revolution to form a 2020 vision to realize what it implies. I dare say such a revolution is well underway. The question remains: Do you want to be Newton or Leibniz?
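
For the curious: reading the title's three strings as binary integers appears to give 3, 23, and 2020, echoing the abstract's “3-Body Problem,” “Hilbert’s 23 Problems,” and “2020 vision”:

# Decoding the lecture title's binary strings.
for bits in ("011", "010111", "011111100100"):
    print(bits, "->", int(bits, 2))   # prints 3, 23, and 2020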

Rietz Lecture


Yoav Benjamini
Tel Aviv University

“Selective Inference: The Silent Killer of Replicability”

Tuesday, July 30, 10:30 a.m.

The replicability problems across varied scientific disciplines have attracted increasing attention in the last two decades. Unadjusted inference on the few promising findings, selected as such, is a major source of these problems. There are a few strategies for addressing such selective inference, which will be reviewed, and many related methodologies, which will not. Unfortunately, the problem is ignored in many important and highly visible areas of science. After presenting this background, the talk will focus on two specific issues: a less-trodden strategy, that of offering simultaneous inference on the selected, and a methodology for addressing selective inference in a hierarchical system of inferences. I shall describe recent results on these two, as well as open questions. Returning to science at large, inference on hierarchical systems will be used to address the problem of selective inference when a database is interrogated by different investigators.
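
Although the abstract focuses on newer strategies, the speaker's Benjamini-Hochberg procedure is the canonical adjustment for this kind of selection; a minimal NumPy sketch of the step-up rule (valid under independence or positive dependence of the p-values) follows.

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Step-up FDR control: reject the k smallest p-values, where k is the
    largest index with p_(k) <= (k/m) * q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest qualifying index
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.740, 0.900]
print(benjamini_hochberg(pvals))   # rejects only the two smallest p-values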

Medallion Lecture IV


Elizaveta Levina
University of Michigan

“Hierarchical Communities in Networks: Theory and Practice”

Wednesday, July 31, 8:30 a.m.

Community detection in networks has been studied extensively in the form of finding a single partition into a “correct” number of communities. In large networks, however, a multiscale hierarchy of communities is much more realistic. We show that a hierarchical tree of communities, besides being more interpretable, is also potentially more accurate and more computationally efficient. We construct this tree with a simple top-down recursive algorithm, at each step splitting the nodes into two communities with a noniterative spectral algorithm until a stopping rule suggests there are no more communities. The algorithm is model-free, extremely fast, and requires no tuning other than selecting a stopping rule. We propose a natural model for this setting, a binary tree stochastic block model, and prove that the algorithm correctly recovers the entire community tree under relatively mild assumptions. As a byproduct, we obtain explicit and intuitive results for fitting the stochastic block model under model misspecification. We illustrate the algorithm on a data set of statistics papers, constructing a highly interpretable tree of statistics research communities.
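
The top-down recipe can be sketched in a few lines. The version below is my sketch, not the authors' code: it splits by the sign of the graph Laplacian's second eigenvector and recurses, with a crude minimum-size rule standing in for the paper's stopping rule.

import numpy as np

def fiedler_split(A):
    """Split nodes by the sign of the graph Laplacian's second eigenvector."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)      # eigenvectors, ascending eigenvalues
    return vecs[:, 1] >= 0

def community_tree(A, nodes=None, min_size=8):
    """Top-down recursive bisection; min_size is a placeholder stopping rule."""
    if nodes is None:
        nodes = np.arange(A.shape[0])
    if len(nodes) < 2 * min_size:    # too small to split further
        return list(nodes)
    side = fiedler_split(A[np.ix_(nodes, nodes)])
    left, right = nodes[side], nodes[~side]
    if min(len(left), len(right)) < min_size:
        return list(nodes)
    return [community_tree(A, left, min_size),
            community_tree(A, right, min_size)]

# Two planted communities of 15 nodes each, denser within than between.
rng = np.random.default_rng(4)
P = 0.05 + 0.45 * np.kron(np.eye(2), np.ones((15, 15)))
A = (rng.uniform(size=(30, 30)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T       # symmetric, zero diagonal
print(community_tree(A))             # recovers the two planted blocks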

Free Public Lecture


Mark Glickman
Harvard University

“Data Tripper: Distinguishing Authorship of Beatles Songs Through Data Science”

Sunday, July 28, 6:00 p.m.

John Lennon and Paul McCartney, the two founding members of the Beatles, wrote some of the most popular and memorable songs of the last century. Although their songs were credited jointly to Lennon-McCartney, it is well documented that most of their songs, or portions of songs, were written primarily by one of the two. Some Lennon-McCartney songs, such as “In My Life,” are of disputed authorship: both Lennon and McCartney individually remembered having written the music. Can data science shed any light on resolving such disputes? This talk explores how statistics can be used to classify musical style, learn features that are distinctive to particular songwriters, and ultimately predict who wrote a song of disputed authorship. The talk requires no mathematical or statistical background and will be accompanied by musical demonstrations.
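
In the same spirit, and purely for illustration (the features, data, and model below are placeholders, not the speaker's), authorship prediction can be framed as supervised classification: fit a classifier on songs of known authorship, then score a disputed song.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Placeholder design matrix: each row represents one song through hypothetical
# style features (e.g., frequencies of melodic intervals or chord transitions).
X_known = rng.normal(size=(60, 10))
y_known = rng.integers(0, 2, size=60)   # 0 = Lennon, 1 = McCartney (known songs)

clf = LogisticRegression(max_iter=1000).fit(X_known, y_known)

# A disputed song is scored by its predicted authorship probability.
x_disputed = rng.normal(size=(1, 10))
print("P(McCartney):", clf.predict_proba(x_disputed)[0, 1].round(3))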

FEATURED EVENTS

Sunday, July 28
First-Time Attendee Orientation and Reception
12:30 p.m. – 2:00 p.m.

JSM Opening Mixer and Invited Poster Session
8:30 p.m. – 10:30 p.m.

Monday, July 29
ASA President’s Invited Address
4:00 p.m. – 5:50 p.m.

JSM Student Mixer
6:00 p.m. – 8:00 p.m.

Korean International Statistical Society Annual Meeting
6:00 p.m. – 7:30 p.m.

International Indian Statistical Association General Body Meeting and Reception
6:00 p.m. – 8:00 p.m.

IMS Reception Following the IMS Presidential Address and Awards Ceremony
9:30 p.m. – 11:00 p.m.

Tuesday, July 30
Statistical Society of Canada Reception
5:30 p.m. – 7:00 p.m.

ASA President’s Address and Awards
8:00 p.m. – 9:30 p.m.

JSM Dance Party
9:30 p.m. – Midnight

Wednesday, July 31
International Chinese Statistical Association Annual Members Meeting
6:00 p.m. – 9:00 p.m.
